61 Comments

Long story short: The current round of AI is, intellectually, on the level of parlor tricks. There's no intellectual, theoretical, or scientific there there. Really, there isn't. (This is easy for me to say: I have an all-but-thesis in AI from Roger Schank.)

AI, originally, was defined to be a branch of psychology in which computation (the mathematical abstraction) would be used to analyze and theorize about various human capabilities, and computers (real devices) would be used to verify that analysis and the resulting theories. But AI grew a monster other half whose goal was to make computers do interesting things. With no intellectual basis or theoretical/academic concerns whatsoever.

It is this other half of the field that has taken over.

Slightly longer story: It turns out that we humans really do think; we actually do logical reasoning about things in the real world quite well. But we (1980s AI types) couldn't figure out how to persuade computers to do this logical reasoning. Simple logic was simply inadequate, and it turns out that people really are seriously amazing. (Animals are far more limited than most people think: there is no animal, other than humans, that understands that sex causes pregnancy and that pregnancy leads to childbirth. Animals can "understand" that the tiny thing in front of them is cute and needs care, but realizing that it has a father is not within their intellectual abilities.)

So the field gave up on even trying, and reverted to statistical computation with no underlying models. The dream is that if you can find the statistical relationship, you don't need to understand what's going on. This leaves the whole game at essentially a parlor trick level: when the user thinks that the "object recognition" system has recognized a starfish, the neural net has actually recognized the common textures that occurred in all the starfish images.

So here's the "parlor trick" bit: the magician doesn't actually do the things he claims to do, and neither do the AI programs. (The 1970s and 1980s programs from Minsky and Schank and other labs actually tried to do the things they claimed to do.) And just as getting better and better at making silver dollars appear in strange places doesn't lead to an ability to actually make silver dollars, better and better statistics really isn't going to lead to understanding how intelligent beings build quite good and effective models of the world in their heads and reason using those models.

So while AI isn't incomprehensible (it's a collection of parlor tricks), it is an intellectual horror show.


Yeah, I am more on your side of this. I find "AI" as uninteresting as the "hoverboards" that are just electronic skateboards and the "virtual reality" that is just stereoscopic video: taking cool sci-fi branding and applying it to dumb Silicon Valley PR.


There's also the issue of how human beings wrestle with telos, while AI is a technological reaction (albeit a pretty clever one, at times).


Can you share a specific example of a task you believe modern deep learning techniques to be fundamentally incapable of accomplishing?

Arguing that an image recognition network is only recognizing common textures of starfish images rather than really recognizing starfish seems like a pointless distinction to me. So it would be nice to get something more specific.

If you can make your claim specific, I'm probably happy to make a bet.


You ask for a "specific example of a task you believe modern deep learning techniques to be fundamentally incapable of accomplishing?" My answer would be that I can't think of any task that deep learning is capable of accomplishing.

(Well, yes. It's great for Go programs. But Go is a joke of an application, since the Go board has the exact same geometry as the neural net, and this "application" is for the specific types of pattern matching used in what were already very strong Go programs. More specifically, we already knew that MCTS could be used to find good moves in Go (this itself was an enormous intellectual achievement; anyone who had written a chess, Othello, or similar program was astounded that it worked), and we knew the types/purposes of patterns that worked. Offloading that pattern matching to graphics hardware resulted in superhuman strength, and given that, "learning" the pattern databases from self-play was the obvious thing to do.)
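(For concreteness, here is a minimal sketch of what plain MCTS looks like, the pre-AlphaGo idea only. The game-state interface it assumes, `legal_moves()`, `play()`, `is_terminal()`, `to_move`, `result()`, is hypothetical, and AlphaGo-style programs replace the random playouts and uniform move choices below with neural-net value and policy estimates.)

```python
import math
import random

def mcts_best_move(root_state, n_iterations=1000, c=1.4):
    """Plain Monte Carlo tree search: UCB1 selection plus random playouts.

    `root_state` is assumed (hypothetically) to expose legal_moves(),
    play(move) -> new state, is_terminal(), to_move, and result(player)
    returning a score in [0, 1] for that player.  This sketch ignores the
    usual per-node flipping of reward between the two players for brevity.
    """
    class Node:
        def __init__(self, state, move=None, parent=None):
            self.state, self.move, self.parent = state, move, parent
            self.children = []
            self.untried = list(state.legal_moves())
            self.wins, self.visits = 0.0, 0

    root = Node(root_state)
    for _ in range(n_iterations):
        node = root
        # 1. Selection: walk down fully expanded nodes, maximizing UCB1.
        while not node.untried and node.children:
            node = max(
                node.children,
                key=lambda ch: ch.wins / ch.visits
                + c * math.sqrt(math.log(node.visits) / ch.visits),
            )
        # 2. Expansion: add one previously untried child.
        if node.untried:
            move = node.untried.pop(random.randrange(len(node.untried)))
            node.children.append(Node(node.state.play(move), move, node))
            node = node.children[-1]
        # 3. Simulation: random playout to the end of the game.
        state = node.state
        while not state.is_terminal():
            state = state.play(random.choice(list(state.legal_moves())))
        # 4. Backpropagation: credit the outcome along the path to the root.
        reward = state.result(root_state.to_move)
        while node is not None:
            node.visits += 1
            node.wins += reward
            node = node.parent
    # Pick the most-visited move, the usual robust choice.
    return max(root.children, key=lambda ch: ch.visits).move
```

Even this bare version, with no evaluation function at all, is what astonished people who had written chess and Othello programs; the neural nets were bolted on later to replace the random playouts.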

Read the book "Rebooting AI" for a longer discussion of why deep learning is the wrong thing for AI applications that lack isomorphisms to regular rectangular arrays.

What's already known to be a problem is that when an "object recognition" system fails, it's hard to figure out why. One of the early ones was really good at recognizing cows. Until they showed it a picture of a cow on a beach. Why? There were no green pixels. An X-ray image-reading program was really good at predicting patient outcomes from chest X-rays. It turned out that patients too sick to be moved to the hospital's main X-ray machine were imaged with a portable unit, and the AI had learned to differentiate not the patients but the machines. (It failed when tried at other hospitals.)
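(A toy numerical illustration of that failure mode, not the actual cow or X-ray systems: the dataset, the two features, and the least-squares "classifier" below are all made up, with a "background" column that happens to track the label in training and vanishes in deployment.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n).astype(float)           # 0 = no cow, 1 = cow
signal   = y + 0.8 * rng.normal(size=n)           # "cow-shaped pixels": weakly informative
confound = y + 0.1 * rng.normal(size=n)           # "green background": tracks the label almost perfectly
X = np.column_stack([signal, confound, np.ones(n)])

# A plain least-squares fit puts nearly all of its weight on the confound,
# because in the training data the confound predicts the label most cheaply.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print("weight on real signal:", round(w[0], 2))
print("weight on confound:   ", round(w[1], 2))

# "Another hospital": same real signal, but the confound is gone.
X_new = np.column_stack([y + 0.8 * rng.normal(size=n),
                         rng.normal(size=n),
                         np.ones(n)])
accuracy = (((X_new @ w) > 0.5) == (y > 0.5)).mean()
print("accuracy once the confound disappears:", round(accuracy, 2))  # roughly chance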

More generally, recognizing and reasoning about objects is important for functioning in the real world. The deep learning systems don't get us any closer to doing that, since they neither recognize objects nor reason about them. (Again, see Rebooting AI.)

I don't bet, but there's a current joke to the effect that there are hundreds of AI startups running on venture capital money trying to read radiological images, yet no radiologist has been replaced, even though this game has been going on for quite a while now. (AI has a bad record in medicine. We think there's lots of money to be made, get all hot and bothered, and fail miserably. The "expert systems" failed back in my day, and now IBM's Watson has failed.)

But the intellectual, philosophical, and moral issue here is that if you've got a technology that does Y, but you want it to do X, you need to do more than pray.


Also, https://www.gwern.net/Scaling-hypothesis#critiquing-the-critics

> The accelerating pace of the last 10 years should wake anyone from their dogmatic slumber and make them sit upright. And there are 28 years left in Moravec’s forecast…

> The temptation, that many do not resist so much as revel in, is to give in to a déformation professionnelle and dismiss any model as “just” this or that (“just billions of IF statements” or “just a bunch of multiplications” or “just millions of memorized web pages”), missing the forest for the trees, as Moravec commented of chess engines:

> > The event was notable for many reasons, but one especially is of interest here. Several times during both matches, Kasparov reported signs of mind in the machine. At times in the second tournament, he worried there might be humans behind the scenes, feeding Deep Blue strategic insights!…In all other chess computers, he reports a mechanical predictability stemming from their undiscriminating but limited lookahead, and absence of long-term strategy. In Deep Blue, to his consternation, he saw instead an “alien intelligence.”

> > …Deep Blue’s creators know its quantitative superiority over other chess machines intimately, but lack the chess understanding to share Kasparov’s deep appreciation of the difference in the quality of its play. I think this dichotomy will show up increasingly in coming years. Engineers who know the mechanism of advanced robots most intimately will be the last to admit they have real minds. From the inside, robots will indisputably be machines, acting according to mechanical principles, however elaborately layered. Only on the outside, where they can be appreciated as a whole, will the impression of intelligence emerge. A human brain, too, does not exhibit the intelligence under a neurobiologist’s microscope that it does participating in a lively conversation.

> But of course, if we ever succeed in AI, or in reductionism in general, it must be by reducing Y to ‘just X’. Showing that some task requiring intelligence can be solved by a well-defined algorithm with no ‘intelligence’ is precisely what success must look like! (Otherwise, the question has been thoroughly begged & the problem has only been pushed elsewhere; computer chips are made of transistors, not especially tiny homunculi.)

-----

> (If you are still certain that there is near-zero probability of AGI in the next few decades, why? Did you predict—in writing—capabilities like GPT-3? Is this how you expect AI failure to look in the decades beforehand? What specific task, what specific number, would convince you otherwise? How would the world look different than it does now if these crude prototype insect-brain-sized DL systems were not on a path to success?)

> What should we think about the experts? Projections of failure were made by eminent, respectable, serious people. They spoke in considered tones of why AI hype was excessive and might trigger an “AI winter”, and the fundamental flaws of fashionable approaches and why brute force could not work. These statements were made routinely in 2014, 2015, 2016… And they were wrong.

> There is, however, a certain tone of voice the bien pensant all speak in, whose sound is the same whether right or wrong; a tone shared with many statements in January to March of this year; a tone we can also find in a 1940 Scientific American article authoritatively titled, “Don’t Worry—It Can’t Happen”⁠, which advised the reader to not be concerned about it any longer “and get sleep”. (‘It’ was the atomic bomb, about which certain scientists had stopped talking, raising public concerns; not only could it happen, the British bomb project had already begun, and 5 years later it did happen.)

> This tone of voice is the voice of authority⁠.

> The voice of authority insists on calm, and people not “panicking” (the chief of sins).

> The voice of authority assures you that it won’t happen (because it can’t happen).

> The voice utters simple arguments about why the status quo will prevail, and considers only how the wild new idea could fail (and not all the possible options).

> The voice is not, and does not deal in, uncertainty; things will either happen or they will not, and since it will not happen, there is no need to take any precautions (and you should not worry because it can’t happen).

> The voice does not believe in drawing lines on graphs (it is rank numerology).

> The voice does not issue any numerical predictions (which could be falsified).

> The voice will not share its source code (for complicated reasons which cannot be explained to the laity).

> The voice is opposed to unethical things like randomized experiments on volunteers (but will overlook the insult).

> The voice does not have a model of the future (because a model implies it does not already know the future).

> The voice is concerned about its public image (and unkind gossip about it by other speakers of the voice).

> The voice is always sober, respectable, and credentialed (the voice would be pleased to write an op-ed for your national magazine and/or newspaper).

> The voice speaks, and is not spoken to (you cannot ask the voice what objective fact would change its mind).

> The voice never changes its mind (until it does).

> The voice is never surprised by events in the world (only disappointed).

> The voice advises you to go back to sleep (right now).

> When someone speaks about future possibilities, what is the tone of their voice?

Oct 1, 2022·edited Oct 1, 2022

https://www.gwern.net/fiction/Clippy

> We should pause to note that a Clippy still doesn’t really think or plan. It’s not really conscious. It is just an unfathomably vast pile of numbers produced by mindless optimization starting from a small seed program that could be written on a few pages.

> It has no qualia, no intentionality, no true self-awareness, no grounding in a rich multimodal real-world process of cognitive development yielding detailed representations and powerful causal models of reality; it cannot ‘want’ anything beyond maximizing a mechanical reward score, which does not come close to capturing the rich flexibility of human desires, or historical Eurocentric contingency of such conceptualizations, which are, at root, problematically Cartesian.

> When it ‘plans’, it would be more accurate to say it fake-plans; when it ‘learns’, it fake-learns; when it ‘thinks’, it is just interpolating between memorized data points in a high-dimensional space, and any interpretation of such fake-thoughts as real thoughts is highly misleading; when it takes ‘actions’, they are fake-actions optimizing a fake-learned fake-world, and are not real actions, any more than the people in a simulated rainstorm really get wet, rather than fake-wet.

> *(The deaths, however, are real.)*

Hint: brains do stats too. You seem to want a procedural-algorithm AI. That is pretty much impossible, at least if you want reasonable performance. Artificial neural nets don't work exactly like actual brains; the architecture is (obviously) similar, though.


This sort of common sense is wasted on Noah and his grubby lot. Like quantum computing (another "technology" that shows no signs of actually existing), there is a cottage industry of wanking around "AI." 20 years ago he'd have been burbling about nanotech (the nincompoop who thought nanotech up is also an AI wanker these days). Let him gabble on about his fear of overgrown linear regression models; it will be amusing to troll him with his words in 10 years.


At this point I am willing to give the AI aliens a chance.

Sep 22, 2022 · Liked by Noah Smith

Well, I started this article generally an optimist about artificial intelligence, and now you have me looking under the bed for monsters.


"Predictable results from an inexplicable mechanism" is actually the best definition of magic I have ever heard.


C.S. Lewis put it very well (from "Letters to Malcolm"):

"When I say 'magic' I am not thinking of the paltry and pathetic techniques by which fools attempt and quacks pretend to control nature. I mean rather what is suggested by fairy-tale sentences like 'This is a magic flower, and if you carry it the seven gates will open to you of their own accord,' or 'This is a magic cave and those who enter it will renew their youth.' I should define magic in this sense as 'objective efficacy which cannot be further analysed'"


Thanks for the quote!


Magic is just fundamentally irreducible complexity. As the entry on "Magic" on the LessWrong wiki says:

> Traditional depictions of magic would seem to require introducing complex ontologically fundamental entities: some magician or sorceress says the right words and performs some ritual, and some part of the universe obeys their will. But how does it know when to obey someone's will? The stated conditions for the effect are far too complex to be implemented by a simple arrangement of mechanistic laws, the complexity of magic must be at least that of minds. What seems to humans like a simple explanation, sometimes isn't at all.

> In our own naturalistic, reductionist universe, there is always a simpler explanation. Any complicated thing that happens, happens because there is some physical mechanism behind it, even if you don't know the mechanism yourself (which is most of the time). There is no magic.

I love quasi-fantasy stories which turn out to actually be sci-fi - something does implement magic!

Like Ra: https://qntm.org/ra

Also, very short and not _precisely_ about magic... https://www.fanfiction.net/s/5389450/1/The_Finale_of_the_Ultimate_Meta_Mega_Crossover

> "The universe has to bottom out somewhere!" Ravna had sworn she wasn't going to get involved in this argument again, and yet she couldn't seem to help herself. "You're going to come to a stop someday - and that place happens to be here, dammit! There is a perfectly reasonable explanation for how you got here - or maybe 'reasonable' is too strong a word, but it's a perfectly logical explanation. And that explanation says that this is it. You've reached the end of the line."

> "Been there," chorused Jake Stonebender and around half the others, "done that."

> "No! Old One simulated you being there and doing that! It simulated your experiences - it might even have simulated your whole world for all I know - and that's how all those apparently impossible things could happen to you! Old One simulated your base worlds, Old One invented the higher universes and higher metaverses you discovered, Old One crossed them over! And then it finally synthesized you outside the simulation - out here, in the real world! Don't you understand?" Ravna stopped, because it was clear from the looks on their faces that they did understand.

> "Look, Ravna," Harold Shea said gently. "I understand your perspective. Don't get me wrong. The first time I heard that my whole life had been a computer simulation - well, it was pretty scary. I'd been through enough worlds, at that point, to know that whenever it started to look like something magical actually had a reductionist explanation, the reductionist explanation was usually right. I thought that probably had been the truth all along - the real explanation for how I got from one world to another. I mean, it did seem pretty absurd if I stepped back and thought about it." Shea shrugged. "That's what I thought the first time."

> Aaaagh! "Don't you understand that Old One can just simulate that too?"

> Shea nodded. "Yes, that's what I thought the second time. It did indeed occur to me, the second time through, that the Solid State Entity could have just as easily synthesized my memory of the Five Galaxies and the Transcendents. But there was this certain nagging doubt, you understand." Shea sighed. "By the third time, it was just one more way of going from one place to another."

> Ravna's hands made helpless gestures, as if trying to clutch air. "But we agree on all the real facts of the universe up until this point? You agree that all your memories were simply synthesized, or, at best, experienced within Old One's simulation?"

> "Or within whatever simulated Old One," Shea said agreeably. "Look at it from my perspective, Ravna. What are the odds that this particular reality was the bottom one?"

> Ravna buried her face in her hands. "We agree on all the facts of the universe up to this point. We agree on the reasons why you believe what you believe. Shouldn't we be able to agree on what we predict will happen next?"

> "We're always dreaming," said a middle-aged woman who carried herself with a queenly air and a quite peculiar demeanor, "and no matter how many times we wake up, we can wake up another time after that. It's a race whose end can never be reached. And will I be glad to wake up from this one! Ship time, ship time, just shipping shipping shipping from one end of the galaxy to the other! I haven't been so bored in years! Since I followed the rabbit!"


But, as Dr. Who once noted, "Any magic sufficiently advanced is indistinguishable from technology." That's actually more profound than it seems. I suppose you can have a magic which just consists of arbitrary spells that perform different things, but that makes discovering new spells extremely dangerous and unlikely to succeed. Try a variant on a known spell and it could be a dud, do something weird, or summon a deadly demon. More common magics are grammatical and comprehensible, if only because they are fictional: as fantastic as the author may wish to make their world, things still need to make sense to their human audience. That means that spells tend to fall into categories and operate on classes of targets, so that learning magic requires understanding the magical structure of reality and the magical operators that comprise spells. Perhaps real magic is incomprehensible, but fictional magic tends not to be.

Comment removed

There were great ages of ideology before then, but in the mid-20th century, science and industry were actually delivering. It really did seem possible that politics harnessing science and industry could lead to a utopia of plenty.


I'd argue the difference between sci-fi and magic is about whether it responds to human-level concerns like emotion, concentration, desire, etc. I mean, quantum mechanics, or even, if you reduce it to its most basic level, Newtonian mechanics, produces predictable results for no understandable reason. At the end of the day there isn't anything 'under' the postulates of QM or the Schrödinger equation. Sure, you can point to particular properties they have, but there isn't any reason that reality is well described by wave functions that obey these symmetries and not those, other than that it 'just does'.

What's different is that the electron and its wave function don't respond to your human concerns or issues. You can't summon up a well of strength and will the plutonium core not to go critical... it does what it does based on factors that aren't related to our beliefs, wants, and desires, while spells are all about responding to your will, emotions, dedication, and study.

It's why sci-fi shows struggle more with satisfying narrative arcs that don't frustrate us with plot holes. In a sci-fi show the dumbest dropout can point your fancy special weapon and vaporize your highly skilled veteran (in theory), and plot holes spring up when you need to explain why that didn't happen. OTOH, in fantasy it's just the nature of the universe that Elrond and Gandalf are hugely more powerful because of their age and roles.


Everyone who's ever tried art knows that hands are hard.

Sep 22, 2022 · Liked by Noah Smith

AI as incomprehensible is how Zachary Mason described it in his elegant neo-cyberpunk novel, Void Star.


I have to put up a link to this piece by Erik Hoel, "We need a Butlerian Jihad against AI: A proposal to ban AI research by treating it like human-animal hybrids", which seems to resonate with your Lovecraftian horror notion:

https://erikhoel.substack.com/p/we-need-a-butlerian-jihad-against


AI safety may be like drug safety.

You don't know if a new drug has fatal side effects. You just know how much testing it went through, and how many inspections the regulators did of the drug factory. Usually that's enough.

We won't know if our shiny AI black boxes have hidden Lovecraft modes. But we can know under what conditions the AI boxes were trained, and what tests they passed.

Maybe that will be enough?


That's a fair analogy, but even from a naive understanding, most of these programs are rather poorly tested. The tests most commonly cited overlap, often completely, with the training set, which makes them nearly useless in assessing how they would perform in the real world. Any useful testing would have to be done in real-world driving, much as we test human drivers. It would also have to cover a significant number of miles and vehicles. Our statistics on human driving are based on billions of human driving hours; we'd need something similar for AI systems doing the same task. (I'm not saying it's impossible, just likely to be expensive.)
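(A toy sketch of why an overlapping test set is nearly useless: a model that does nothing but memorize its training data looks perfect on a "test set" drawn from that same data and collapses to chance on genuinely new cases. The random dataset and the 1-nearest-neighbor "model" here are made up purely for illustration.)

```python
import numpy as np

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 20))
y_train = rng.integers(0, 2, 500)        # labels are pure noise: nothing real to learn

def nearest_neighbor_predict(X_query):
    """Pure memorization: copy the label of the closest training example."""
    dists = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    return y_train[dists.argmin(axis=1)]

# "Benchmark" that overlaps the training set: a perfect score, meaning nothing.
overlap_acc = (nearest_neighbor_predict(X_train[:100]) == y_train[:100]).mean()

# Genuinely held-out data: back to coin-flipping.
X_new = rng.normal(size=(100, 20))
y_new = rng.integers(0, 2, 100)
heldout_acc = (nearest_neighbor_predict(X_new) == y_new).mean()

print(f"accuracy on overlapping 'test' set: {overlap_acc:.2f}")  # 1.00
print(f"accuracy on truly held-out data:    {heldout_acc:.2f}")  # ~0.50
```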

We test drugs first in vitro using cell cultures so we can bound the overall chemical envelope. Then we test them in animals, ideally animals with similar medical complaints to those of humans, to see if they can work in a more complete system. Then we test them in humans, first for safety and to assess their overall level of activity. Then comes phase 3, which is where all too many drugs fail, where they are tested to see if they do more good than harm.


"I quit this “Hitler or Lovecraft?” quiz halfway through after getting every question wrong!"

If it includes the words "gibbering," "noisome," or "putrescent," it's probably Lovecraft. :)


The worst case of autonomous military drones -- in particular, drones that have been equipped to repair themselves and build new swarm members to replace fallen comrades -- is "Horizon: Zero Dawn".


Seems bad

Sep 22, 2022·edited Sep 22, 2022

I mean only for the first three thousand years, then you get awesome metal dinosaurs. 😹


I tried prompting the language model GPT-J with

"Artificial Intelligence: a poem" by H.P. Lovecraft

I have found the best results occur with "temperature" in the 0.9-0.99 range.

Starting at 0.9, I got a free-verse piece about the prehistoric origins of art; promising, but then it never made its way into more recent eras.

A second try produced an essay about "the mind of the universe" and "the intelligence of the cosmos" and how they are different things.

For a third attempt, I set temperature higher, to 0.98, figuring that this would be better for poetic composition. The result:

https://pastebin.com/3cFeGdAS
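(For anyone who wants to try the same experiment, a minimal sketch using the Hugging Face transformers library. The commenter's exact setup isn't given, so the checkpoint name, token budget, and other generation settings below are illustrative assumptions; note the model's weights are large, roughly 24 GB in fp32.)

```python
# Minimal sketch of temperature sampling from GPT-J with Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = '"Artificial Intelligence: a poem" by H.P. Lovecraft\n\n'
inputs = tokenizer(prompt, return_tensors="pt")

# Temperature rescales the next-token logits before sampling: values near 1.0
# (like the 0.9-0.99 range above) keep most of the model's uncertainty, so the
# continuation wanders further from the single most probable completion.
output_ids = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.98,
    max_new_tokens=200,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Each run will differ, since sampling is stochastic; a temperature near zero would instead give the model's single most likely, and usually duller, continuation.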


That last line, though


The most dangerous person one will meet is the lunatic who doesn't rave.

Have enjoyed Ellison, Clarke, Lovecraft, and others; thanks for reminding me of them.


The way I've always explained "is this a task a modern AI is suited for" is this: what are the consequences of a 1-in-10 failure rate? That failure rate is supremely optimistic, but it feels nice and round. AI art generator fails? Laugh and post it to #Synthetic-Horrors. AI autonomous car fails? People die.

Needless to say, while AI will find its niche in "high failure tolerance" areas, the Singularity is not soon upon us. For my money, it'll take a paradigm shift away from purely neural network based AIs before we get anything acceptable for other cases.


re: why sociopaths creep us out.

It is similar to what you write about unpredictability but as a possibly interesting tangent...the philosopher David Livingstone Smith, in his book On Inhumanity, argues that humans naturally fall into essentialist thinking where things are expected to fit into neat categories. Think how casually we say things like "My cat is bad at being a cat, he doesn't even try to catch mice". We'd cringe if we said something remotely like that about a group of humans. Or how we see things as either pets OR food but never both. But essentialist thinking seems to come very naturally to us. And things that don't fit into essentialist categories make us squirm a bit. At least, that's his argument about why dehumanisers always talk about their targets in transgressive terms. They are both people AND cockroaches. Beasts AND people. After all, we don't get that mad about a deer that eats our shrubs. We don't torture the bird that poops on our car. And sociopaths likewise don't fit into neat categories. Bad people are supposed to be obviously bad. We're supposed to be able to spot them a mile away. They're supposed to be like a Disney villain, that obviously bad.

And while I'm on vaguely related tangents to your post: The Mountain in the Sea by Ray Nayler is another great, recent entry in the canon of extrahuman intelligence (in this case cephalopod intelligence) that a lot of people reckon is a surefire Hugo winner this year. You might enjoy it.


Amazon says "The Mountain in the Sea" isn't even going to be released until October 4, so by "a lot of people," I'm guessing you mean critics who received preview copies?


If by "critics" you mean anyone who has an account on NetGalley and requested an ARC, yes. It is pretty easy to get an ARC these days.


It's really something how, when the villain in Frozen reveals his true nature, it's a total shock...


I feel like, to fill out your uncanny valley / psychopathy notion of how horror operates, you should look into and appreciate the concept of "abject art," which is defined by ordinary biological forms and barriers being violated, causing unease and disturbing expectations. AI doesn't get what a human sees, so it outputs a correlation / approximation from what's been input, but since it doesn't know the referent image from the actual form (which humans should be very careful about assuming THEY understand the full context of from any image!!!), nor has any procedure to comprehend the distinction, it merely outputs a reference of references which accidentally violates our understanding of our bodies and thus disturbs us.

It's not just our bodies but also perspective, spacetime stuff. Ordinary things become warped.

Fun follow-up question: why do our dreams do that to us? Coming from a human brain, shouldn't dreams not violate our sense of self and perspective as humans?


Psychology 101 textbooks used to have an image of what was called an anxiety mask. (I can't use internet search to find this because a lot of people got anxious about wearing masks during COVID's - I hope - peak.) It was a mask of a face but with a variety of components slightly off. I remember that it looked pretty creepy.
