227 Comments

I think this article is pushing back against a strawman position ("LLMs are going to destroy the world") that basically no one in the AI Safety/Alignment community holds. What is true is that the recent surge in worry about bad AI outcomes was triggered by the unexpected achievements of LLMs. But the LLMs themselves are just a milestone on the way to more powerful AIs, and possibly a trigger for more investment in the field.

Expand full comment
author

But my whole argument here is that LLMs are qualitatively different from AGI. Thinking of them as a "step on the road" assumes that they're simply not as far along an intelligence line. But I don't think that's how this works.

Expand full comment
Mar 8, 2023 · Liked by Noah Smith

You should write a post about whether AGI will destroy the world then. Yud isn’t afraid of ChatGPT.

Expand full comment
author

AGI is pretty science fictional at this point, which is why I don't think AI risk folks really get any solid conclusions despite pouring tons of brain power and time into the subject, and despite coming out with a bunch of interesting theories.

We just don't really know what AGI would be like.

However, I do have some thoughts on AGI alignment, and how it relates to rabbits, that I will write up at some point.

Expand full comment

"Oh sure we solved language over the last two years after being deadlocked for decades, models have ballooned to the size of small mammal brains, Google has training language models that can pilot robots in the real world, and they can write code with a level of competence that rises sharply every couple of months, but it's not like we're developing AGI or anything like those fucking nerds are worried about, those idiots. Those *fools*."

Expand full comment

Refusing to extrapolate even a couple of years down the road isn't sober and thoughtful and responsible, it's *stupid.* It causes you to get the *wrong answers to important questions.*

Expand full comment

> AGI is pretty science fictional at this point

Did you read DeepMind's "A Generalist Agent" paper?

https://www.deepmind.com/publications/a-generalist-agent

> Inspired by progress in large-scale language modeling, we apply a similar approach towards building a single generalist agent beyond the realm of text outputs. The agent, which we refer to as Gato, works as a multi-modal, multi-task, multi-embodiment generalist policy.

> The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens. In this report we describe the model and the data, and document the current capabilities of Gato.

> While no agent can be expected to excel in all imaginable control tasks, especially those far outside of its training distribution, we here test the hypothesis that training an agent which is generally capable on a large number of tasks is possible; and that this general agent can be adapted with little extra data to succeed at an even larger number of tasks.

> We hypothesize that such an agent can be obtained through scaling data, compute and model parameters, continually broadening the training distribution while maintaining performance, towards covering any task, behavior and embodiment of interest. In this setting, natural language can act as a common grounding across otherwise incompatible embodiments, unlocking combinatorial generalization to new behaviors.

I mean, that really sounds like a non-vague description of an AGI. It's a single neural net that perceives, has a language model / communicates with humans, and steers a robot. And it's fairly old news already. And the scale is still small.

Expand full comment

It's still Gc (crystallized intelligence). It's not able to take on novel situations that are not part of its training system.

Expand full comment

LLMs already do succeed on tasks far outside their training system though -- translation, rhyming, chess, etc. etc. etc.

Expand full comment

It sounds like a smarter chatbot.

Expand full comment

I think that's exactly why people are so worried. We don't have solid conclusions about the risk AGI will pose, which could well be substantial (>10% chance), and in comparison to other global catastrophic risks like climate change, there really isn't tons of money and brain power being devoted to it. As far as I know, a lot of AI-risk people just think there should be a lot more attention given to it at the margin.

Expand full comment

The biggest issue with LLMs is their misuse by malign human intelligences.

Expand full comment

With the money going to them, of course.

Expand full comment

Or to international governance programs, or to education, or to lobbying and advocacy for policies *restricting* the profitability and scope and development speed of their industry... not sure if things are as simple as you are insinuating!

Expand full comment

This, amongst other things, discusses the relation between AGI and rabbits (the argument of the book, indeed, is that mathematical limits on what Turing machines can compute imply that AI algorithms will never reach anything like the intelligence of a rabbit): https://www.routledge.com/9781032309934

Expand full comment
author

The scenario he gave about the AI bank robber bioterrorists was pretty much just a chatbot with a couple of augmented capabilities...

Expand full comment
Mar 8, 2023 · edited Mar 8, 2023 · Liked by Noah Smith

Among other things, it's a chatbot that can also figure out what strings of DNA you'd have to put together to make a super pathogen. This is a hard problem.

Incidentally, "a chatbot with some augmented capabilities" also describes a human...

Expand full comment

It's not just a hard problem. It would be pretty much impossible for any intelligence, no matter how smart, to figure out how to create a world destroying super pathogen from an armchair if it only had access to existing experimental data. An AI would have to perform a ton of suspicion raising novel experiments to get the data it would need to make progress.

Expand full comment

I'm not so sure there isn't enough data - after all, the smallpox virus genome is known, along with lots of organic toxins and other things that kill people - but "all" you need to design nanotech "grey goo" that eats the world is the Schrödinger equation, a whole lot of computing power, and probably some better approximation methods than mathematicians currently know.

See also:

https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message

At the limit of hypothetical future AI and computer technology, we're talking about something that can do as much "thinking" in a few minutes as an entire human civilization can do in thousands of years. AlphaGo Zero played 4.9 million games of Go in three days - how many games of Go does a human master play in a lifetime?

Expand full comment

One thing that makes me feel better about AGI is all the boosters/doomers drastically overestimate our ability and knowledge of everything else.

Expand full comment

Similarly I'm not worried about human robber bioterrorists.

Expand full comment

Still crystallized intelligence (Gc). And a human also has the ability to reason through novel situations that are not part of their training. LLMs can't. They use brute force calculation.

Expand full comment

Deep learning neural nets such as the ones that make up LLMs and AlphaGo and its successors are Turing complete (I think); we don't really understand the algorithms they're implementing once they've been trained, and we don't really know how much brute force calculation the human brain uses unconsciously to do what it does. So I'm not that impressed by this particular argument.

Expand full comment
Comment deleted
Expand full comment

Somebody hasn't taken the panpsychism pill.

Expand full comment

It depends on what you mean by “consciousness”. If you mean “sensory awareness of one’s surroundings”, then that is definitely a game changer, but it can be achieved by attaching a goal-directed neural network to one of the Boston Dynamics robots. If instead you mean the thing that Dave Chalmers thinks is the subject of a “hard problem”, that doesn’t actually seem particularly relevant to anything.

Expand full comment

Well, we can't measure consciousness from the outside (yet); do dogs have consciousness?

(I was exaggerating for rhetorical effect.)

Expand full comment

Only in the sense that it empowers humans with malign intent.

Expand full comment

Who will develop the AGI of the future? The people developing LLMs today. That industry will keep pushing, pushing, pushing until it finally develops something that it can't control. If we were going to stop with LLMs your points would be valid. But we're not going to stop.

Expand full comment
author

Maybe. Or maybe LLMs are a blind alley and will divert money away from the direction that would actually succeed in creating AGI. That's what Yann LeCun thinks, IIRC.

Expand full comment

Ok, that could be. If AI was going to be limited to LLMs, then I'd be willing to bail on doomer mode.

Expand full comment

I don't really get that. A hypothetical AGI will need to be able to read and write, abilities that LLMs more or less provide. So something like an LLM clearly seems "on the path," but it's not a linear, sequential path.

You'll still need the general intelligence part (i.e., learning to do something it wasn't trained to do, like designing killer pathogens) and agency. Those both seem like big steps.

Having said that, LLMs are driving a wave of investment in AI work, including multi-modal learning, action models, and who knows what else. I'm updating my wild ass guess on AGI from 2070 to 2050.

Expand full comment

LeCun's takes aren't taken seriously by those working at DeepMind. He has tweeted before that alignment is as easy as adding instructions such as "don't run people over"

Expand full comment

Eh, that's what Gary Marcus thinks, and I give more credence to Marcus than LeCun. Marcus has training in human cognition as well as programming. LLMs are one aspect.

Expand full comment

I agree that it is not clear whether the LLMs themselves are going to be an instrumental part of later approaches to AGI (i.e. in how far they help the development of such an AGI). If it was easy to come up with promising approaches in this direction, people would already be following them (and maybe they are).

But what LLMs did do recently was to show that

a) progress in AI remains unpredictable (e.g. compare recent Twitter screenshots from "A Brief History of Artificial Intelligence" listing tasks such as "writing coherent stories" as nowhere near being solved); this is meta-evidence that surprising advances in capabilities are possible; in particular,

b) a lot of arguments of the form "AGI is nowhere near, AIs cannot even do X" were invalidated (for various X), leading some people to re-evaluate their previous assessments on this question

c) there are good avenues for making money out of the current generation of generative AIs (e.g. programmers saying they would pay up to $1,000/month for access to ChatGPT), which did indeed trigger large investments from Microsoft in OpenAI.

Regarding b), it is not unlikely that some of the limitations mentioned in your article (e.g. "ability to do physics") could be solved in the near future. Would this lead to new lines being drawn ("ability to do research level physics") or is there anything so surprising that it would update the probability of doom?

Expand full comment
author

Of these, only (c) is a reason to expect any sort of acceleration toward AGI. As for physics, we have software that can do that already. One question is whether and how it could be integrated with LLMs.

Expand full comment
Mar 8, 2023 · edited Mar 8, 2023

Physics simulators have already been integrated with LLMs, resulting in large improvements in physics question answering. https://arxiv.org/abs/2210.05359

This will happen in every domain. Language models will be able to do anything that can be done from a computer. https://arxiv.org/abs/2302.04761 https://www.adept.ai

Draw the straight line. Where does it lead us? https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to

Expand full comment
author

Straight line through what space?

Expand full comment

A couple of trends. Over time, AI can accomplish more of the tasks that people said would be impossible. More tasks are performed by a single generalist system, rather than many narrow ones. As AI becomes more capable and reliable, we allow it to take more autonomous action in the world -- driving cars, trading stocks, writing emails -- because these applications grow more profitable.

What does the world look like in 2050? Who is making the most important strategy decisions for businesses and governments? Who is executing those plans, and reporting on their success? Plenty of these functions might still be in human hands. But the more we hand off to autonomous AI systems, the greater the risks from misalignment.

Expand full comment

I can't tell what timeline you are concerned with here.

If you are looking at the next 10-50 years, it's probably fine to focus on variations of powerful narrow AI.

But think 100-500 years. Some things we can expect on that time frame: LLMs that are smarter than humans. Some integration with the physical world.

Expand full comment

I agree that a Frankenstein solution with a physics engine strapped to an LLM would not be interesting progress. But I think it would be surprising evidence to see a big unified system like an LLM start answering physics questions correctly.

And examples of such unexpected progress happened in the past: no one trained LLMs explicitly to do translation (in fact I think they tried to exclude non-English text from the training material). So there was a time when you could imagine that we would need to integrate existing translator programs into an LLM if we wanted it to have this capability. Yet it turned out that with more parameters, LLMs can actually do decent translation, out of the box.

Expand full comment

you write "Maybe if we stitch together enough of those AIs subroutines, and come up with a really good algorithm for telling when to use each one, then we’ll have an AGI."

This seems plausible to me, since getting experience in a broad array of domains seems to be how humans become general intelligences. And given that language is one of the most important tools humans use for learning, it's also pretty plausible to me that they'd be really useful as an "ingredient" in a stitched-together AI like that.

If it's true that you can make an AGI out of LLMs plus other models -- which themselves might be based on technical insights gained from training LLMs -- I'm not sure how that can be compatible with claiming that LLMs aren't a step towards AGI.

Expand full comment

Nope. LLMs possess crystallized intelligence. But they don't have the ability to reason and spot errors in their logic.

Expand full comment

That's not really contradicting my point.

Expand full comment

Well, given that 1) computer performance doubles every 24 months, 2) the performance of these AI tools generally scales 1:1 with computer power, and 3) the models themselves keep getting better at equal power, we can anticipate that (a quick check of the arithmetic follows the list):

- In 10 years these tools will probably be 50 to 100 times better (and computers 32 times more powerful)

- In 20 years, they will be 2,000 to 3,000 times better (computers 1,024x more powerful)

- In 30 years, 50,000 to 100,000 times better (32,768x more powerful computers)
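
Spelling that compounding out as a tiny script (the flat 2x multiplier for algorithmic/model improvements is my own illustrative assumption, not something the doubling rule itself implies):

```python
# Back-of-the-envelope compounding: hardware doubles every 24 months;
# ALGO_GAIN is an assumed, purely illustrative extra factor from better models/algorithms.
DOUBLING_PERIOD_YEARS = 2
ALGO_GAIN = 2  # assumption for illustration only

for years in (10, 20, 30):
    hardware = 2 ** (years / DOUBLING_PERIOD_YEARS)  # 32x, 1,024x, 32,768x
    combined = hardware * ALGO_GAIN
    print(f"{years} years: hardware x{hardware:,.0f}, combined x{combined:,.0f}")
```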

Now, just imagine ChatGPT being 50x better. Can you? Millions of people will already think it is conscious at that level, whether it is true or not.

Now imagine ChatGPT being 50,000x better.

Expand full comment

Can you really extrapolate 30 years on an exponential?

Expand full comment

I guess a reasonable change to that question would be "Can you really extrapolate *another* 30 years on *that* exponential?"

Expand full comment

Computer performance doesn't double every 24 months anymore. Unless we make advances in quantum computing, this doubling will end very soon.

Expand full comment

Quantum computers aren't generally faster than classical computers. They're specialized computers that are theoretically faster on specific problems. And human consciousness probably isn't one of those.

Expand full comment

LLMs are but one aspect of general intelligence. But until they can develop the skill to deal with novel situations and recognize errors in their processes, they aren't AGI.

Expand full comment

Nah. LLMs possess a portion of AGI, that is crystallized intelligence (Gc). But AGI needs to develop fluid reasoning (Gf) in order to replicate human intelligence (HI). And we're nowhere near that yet.

Expand full comment

Some argue that scaling is all you need to get AGI. That would be more interesting to argue about.

Expand full comment

This is the essential point.

Expand full comment

Yes, agreed, well and concisely said.

The easiest way to understand the future of AI is to examine the history of nuclear weapons. Nukes were invented with the best of intentions, but they turned out to be a far bigger problem than the problem they were intended to solve. And now nobody has a clue how to get rid of them.

LLMs aren't going to destroy the world, agreed. But the people developing LLMs today are the Robert Oppenheimers of the 21st century, along with Jennifer Doudna and the genetic engineering "experts". They're opening a Pandora's box that they will have no idea how to close if things don't turn out well with their creations.

The evidence that we're not ready for AI is abundant. We mass-produced civilization-ending weapons, then couldn't figure out how to get rid of them, so we've decided to largely ignore nukes and turn our attention to creating more huge threats. This is teenage-level thinking.

Expand full comment

One can argue that MAD makes nukes a solution, not a problem. Not sure I'd argue that, but...

Expand full comment

I’m old enough to remember when the world transforming technology that was also an existential threat was nanotechnology which should definitely be here right now.

Expand full comment

I'm sort of confused as to why you'd write so many words arguing against a position that few people are taking. LLMs are concerning to people worried about AI safety because they've illustrated a) how rapidly progress is being made in the area of artificial intelligence, b) how race dynamics are causing large actors to make risky moves in pursuit of rapidly expanding capabilities, and c) how difficult it is to control the behavior of an artificial agent. I don't think any serious AI safety thinker is worried about the current generation of AI tools.

Expand full comment

It's more the misuse of these tools by humans with malign or mercenary intent that is the issue.

Expand full comment

So don't worry about AGI because we haven't invented AGI yet? Are we only supposed to worry about it after we've invented it?

LLMs can't end humanity, no. But no one is arguing that. The speed at which AI models (LLMs, stable diffusion etc.) are improving is why people are worrying about what would happen when we do create an intelligence sophisticated enough to be a danger. To say that we're not near AGI is not a reason not to worry. It's just kicking the can down the road.

Expand full comment
author

But is there any reason to think LLMs are a movement in the direction of AGI?

Expand full comment

Language is a powerful tool for getting things done in our universe (communicating complex information, storing knowledge, writing code in a programming language that can be executed). Because of its versatility and the pre-existing interfaces to the human world (via search engines, AI to convert text <-> image, etc.), there is a path to powerful AI agents that are basically different specialized systems held together by language serving as the interface between them. For instance:

Camera takes picture -> gets converted to text description ("robot arm gripping a glass, dishwasher in the background") -> LLM is asked to describe a strategy for reaching a certain goal ("what would a nice, helpful robot do to help cleaning the dishes: move arm left/right, take step forward/back?") -> robot actuators execute the plan
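
A minimal sketch of that kind of glue loop, assuming hypothetical stand-ins for each specialized system (the three stub functions below are placeholders, not real APIs):

```python
# Hypothetical glue loop: perception -> text -> LLM plan -> actuation.
# The three helpers are stubs standing in for an image captioner,
# a language model, and a low-level robot controller.

def caption_image(frame) -> str:
    return "robot arm gripping a glass, dishwasher in the background"  # stub

def llm_complete(prompt: str) -> str:
    return "move arm left"  # stub: a real LLM call would go here

def execute(action: str) -> None:
    print(f"executing: {action}")  # stub: real actuators would act here

def control_step(frame, goal: str) -> None:
    scene = caption_image(frame)
    prompt = (
        f"Scene: {scene}\nGoal: {goal}\n"
        "What should a helpful robot do next? "
        "Answer with one of: move arm left, move arm right, step forward, step back."
    )
    execute(llm_complete(prompt))

control_step(frame=None, goal="help clean the dishes")
```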

And even for tasks such as deduction and long-term planning that are not primarily about the generation and manipulation of text, language-based techniques can still be helpful (e.g. proving a mathematical statement by letting an LLM generate many computer-verifiable proofs and checking whether one of them is correct).

Now the examples above might sound silly/cumbersome, or like they would at most lead to agents with very bounded abilities. But the same might have applied a few years ago to claims like "being able to predict the next character in a string will lead to AIs building and deploying websites from natural language descriptions". Overall I think "AGI by 2050" looks more likely in the universe that we are seeing, than in the counterfactual universe in which ChatGPT was never published.

Expand full comment
Mar 8, 2023 · edited Mar 8, 2023

Yes. Two days ago Google released the PALM-E paper showing that you can get a somewhat generalist robot if you provide it with a multimodal LLM: https://palm-e.github.io/

This follows a paper from Microsoft last month that got an LLM to write control code for a real drone: https://www.microsoft.com/en-us/research/group/autonomous-systems-group-robotics/articles/chatgpt-for-robotics/

As well as a series of results last year from Google on a system called SayCan, where an LLM gave a robot the ability to follow high-level commands: https://say-can.github.io/

None of these things are agents in and of themselves; they mostly just follow prompts/orders, but it's not hard to imagine how they could become autonomous with a little more work.

Expand full comment

OpenAI believes so. They may be wrong and they’re talking their book but still…

Expand full comment

From a specific angle, it doesn't matter if it's an LLM or some other deep learning model or something completely different. The only thing that matters is capabilities. LLMs are surprisingly capable and general. So in this sense, the latest LLM advancements are steps towards AGI.

It might be a deceptive dead end (or not), but there is no way to know in advance.

Expand full comment

One could conceptualise AGI as being a bundle of models knit together so that as one entity it can do many (all) different tasks better than a human. E.g. maths, chess, protein folding, language, Go, soldering etc.

A language model that's better than any human at writing is surely part of that bundle of models that we might call AGI one day, no?

Expand full comment

No.

Expand full comment

I don't see a reason to disagree with that. It's at least useful to generate communication with, just like speech synthesis generates audio.

Expand full comment

They're a portion of it. But they can't correct their own errors yet. When that happens, yes, that will be a movement.

Expand full comment

The burden of proof is on the people who say it can be invented.

Expand full comment
Mar 8, 2023 · edited Mar 8, 2023 · Liked by Noah Smith

Presenting a direct descendant of an LLM that can operate a robot, follow directions, and do a bunch of other things that chatbot LLMs can't do:

https://palm-e.github.io/

Expand full comment

"The most prominent of these voices is Eliezer Yudkowsky, who has been saying disturbingly doomer-ish things lately"

*lately*??? Yudkowsky has been saying doomer-ish things for the past fifty billion years!

Well, OK, I binged him (and that doesn't mean what your filthy mind is thinking!!!) and he was born in 1979, so only the past 40 years or so.

But it's really good to finally read a non-insane take on AGI. We are NOWHERE CLOSE to AGI, and everything we know about NGI (humans) tells us that nothing being done now will get there. We humans don't even understand why essentially all vertebrates need to sleep. Will AGIs sleep? If not, what is the mechanism (that millions of years of evolution did not find) that eliminates the need for sleep? No one has any fucking clue!!! No one in AI even thinks about these things!

*sigh*

Expand full comment

We don't need to know why human brains work to make AGI much like we didn't need to know how horse bodies worked to make cars. While understanding biological systems that have properties we are interested in can provide inspiration and intuition for building artificial systems, that understanding is not necessary.

Expand full comment

We had a THEORY about how cars would work: get the wheels to turn through torque applied by a motor. There is no comparable theory of AGI.

Expand full comment
Mar 10, 2023 · edited Mar 10, 2023

If I understand properly, your argument is that in order to build anything, you need a strong mechanistic/theoretical understanding of what it will look like. While this is the case for a lot of fields, such as physical engineering, it is often not the case for optimizing systems. For example, nature was able to create HGI, or human general intelligence, simply through natural selection-based optimization, and there isn't any sense in which nature had a mechanistic understanding of what the human brain would look like.

To be more specific to AI and ML technologies - there are many examples of AI research teams building capable systems in different areas without really understanding those systems. Two examples: DeepMind created a super-human Go AI without its researchers having super-human Go skill, and researchers have made super-human object recognizers without themselves having super-human object recognition. The biggest problem with powerful AI systems is that they have sophisticated capabilities without us understanding how they got there or how they work. And just as people over the last 10 years have been able to make increasingly powerful AI without a strong understanding of how it works, I expect people to be able to do the same over the next 10 years.

If by theory of AGI you mean a broader, hierarchical theory of what sorts of components are needed to make an advanced autonomous super-human agent, there are many people out there who have such theories and are working on them. As an example, Yann LeCun released this paper last year: https://openreview.net/pdf?id=BZ5a1r-kVsf I don't know whether it is actually true, but many people are trying out different architectures to get to AGI, and some of them might come across the right one.

Expand full comment

AlphaGo/AlphaZero actually aren't "AIs" in the sense of being a single big neural network. They're part neural network and part a classical search algorithm. So a lot of that was invented by humans and we do know how that works.
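
A heavily simplified sketch of that division of labor (the real systems use Monte Carlo tree search guided by trained policy and value networks; the stubbed networks and toy depth-limited search below are only meant to show which part is learned and which part is hand-written):

```python
# Toy illustration of the AlphaGo/AlphaZero split: learned networks (stubbed here)
# suggest moves and score positions, while a hand-written classical search routine
# decides what to explore. This is a toy negamax, not the real MCTS.
import random

def policy_net(state):            # stub: move -> prior probability
    return {"a": 0.5, "b": 0.3, "c": 0.2}

def value_net(state) -> float:    # stub: estimated value in [-1, 1] for the side to move
    return random.uniform(-1.0, 1.0)

def apply_move(state, move):      # stub: successor position
    return state + move

def search(state, depth: int = 2) -> float:
    if depth == 0:
        return value_net(state)
    priors = policy_net(state)
    top_moves = sorted(priors, key=priors.get, reverse=True)[:2]  # net-guided pruning
    return max(-search(apply_move(state, m), depth - 1) for m in top_moves)

best = max(policy_net("root"), key=lambda m: -search(apply_move("root", m)))
print("chosen move:", best)
```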

Expand full comment

Imagine thinking like this in a world where penicillin, mathematics, and natural selection exist.

Expand full comment

I remember becoming a certified mechanic. It involved a lot more stalking prey on Chincoteague than I would have guessed, though.

My cheeky way of saying "What, exactly, are you talking about?" Understanding how a horse works has nothing to do with how a powertrain works, or how to apply one to locomotion.

Good point, but bad choice of metaphor. Metaphorgotten, if you will.

Expand full comment

Like LLMs, cars don't do anything unless you turn them on.

And if you turn it on, it'll eventually run out of gas or blow a tire and stop again. Similarly, even if you put an AGI in a loop where it can drive itself… it's still not going to pay its own AWS bill and will get turned off.

Expand full comment

It can generate infinite money with deepfakes on OnlyFans to pay its bills

Expand full comment
Mar 9, 2023 · edited Mar 9, 2023

About Yud -- this is not true. 2005 Yudkowsky was optimistic and 2015 Yudkowsky was somewhat concerned but not a doomer.

About timelines -- we don't really know. 10 years ago you could have said we were nowhere near things like Stable Diffusion and Bing Chat.

About sleep -- AGI will not sleep; sleep is a thing biological organisms do, and it is not relevant for things made out of silicon.

Expand full comment

You can't say that sleep is not needed for GI without a theory of GI, which you don't have.

Expand full comment

I disagree. I think I can claim, for example, that AGI is not going to require red blood cells even though all currently existing GIs require them.

Expand full comment

We know why we sleep - that was discovered a few years ago. It’s to clean metabolic waste out of the brain and process memories for long term storage.

Expand full comment

I don’t think that’s a proper explanation of why we sleep. At least, it doesn’t obviously correspond to the types of problems that occur if you have a lack of sleep.

Expand full comment

Lack of sleep, which eventually causes death, doesn’t correspond with a buildup of waste products in the brain?

Expand full comment

Does anyone have a story by which "buildup of waste products in the brain" produces fatigue if sustained for a day or two, and hallucination and poor judgment if sustained longer, and then eventually death? Or why these waste products require a state like *sleep* to clear them out? Or why there are four different phases of sleep, even though most of these things you mention seem to take place during REM?

Expand full comment
Mar 8, 2023 · edited Mar 8, 2023

There are two parts: memory processing and cleaning.

I’m also unclear what you’re asking in terms of metabolic waste. How would high levels of waste produce fatigue?

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3880190/

Expand full comment

> How would high levels of waste produce fatigue?

That's exactly the question I was asking you! We know that one of the central functions of sleep is somehow to reduce fatigue, and you said that there was a good explanation of that in terms of eliminating metabolic waste.

I think this paper you've cited shows that eliminating metabolic waste is one thing that goes on during sleep, but its conclusion is weaker than what you were suggesting:

"it is possible that sleep subserves the important function of clearing multiple potentially toxic CNS waste products. ... The purpose of sleep has been the subject of numerous theories since the time of the ancient Greek philosophers (34). An extension of the findings reported here is that the restorative function of sleep may be due to the switching of the brain into a functional state that facilitates the clearance of degradation products of neural activity that accumulate during wakefulness."

There's a lot of "it is possible that" and "may be" here.

Expand full comment
Mar 8, 2023 · Liked by Noah Smith

"First, like humans, sci-fi AIs are typically autonomous — they just sit around thinking all the time, which allows them to decide when to take spontaneous action. LLMs, in contrast, only fire up their processors when a human tells them to say stuff. To make an LLM autonomous, you’d have to leave it running all the time, which would be pretty expensive. But even if you did that, it’s not clear what the LLM would do except just keep producing reams and reams of text."

I really like this argument. The counterargument is that autonomous AI systems will be more useful than AI you have to babysit. AI homework helper becomes AI tutor becomes AI teacher. AI medical chatbot becomes AI pharmacist and AI physician. AI coding assistant becomes AI research engineer and starts writing papers of its own advancing the field of AI. This has already happened in finance, where algorithmic trading cannot be overseen by humans in real time, and so we give the algorithms autonomy with occasionally disastrous consequences (e.g. Flash Crash).

But in the short term, it seems like AI deployment will be bottlenecked by human oversight. Maybe this will continue for decades: humans know more about human values, and have different strengths and weaknesses that can complement AI systems. In this world, the Baumol effect goes crazy, as capital automates an increasing share of tasks yet labor remains a key bottleneck. There would be massive profits from more complete automation that takes humans out of the loop and allows growth to feed back into itself, driving more growth. Whether the labor bottleneck is strong enough to prevent singularity seems like one of the most important questions in AI forecasting.

Here's a macro model simulating how full automation results in a sudden singularity: https://takeoffspeeds.com/playground.html

On the other hand, here's how Baumol could prevent the singularity: https://web.stanford.edu/~chadj/AJJ-AIandGrowth.pdf

Section 6.2 has a wonderful 4 page overview of the key theoretical models and outstanding questions: https://www.nber.org/system/files/working_papers/w29126/w29126.pdf

Also, a more CS, less Econ argument against fully autonomous AI: https://www.lesswrong.com/posts/LDRQ5Zfqwi8GjzPYG/counterarguments-to-the-basic-ai-x-risk-case#A__Contra__superhuman_AI_systems_will_be__goal_directed__

And a very CS argument predicting more autonomy in AI: https://gwern.net/tool-ai

Appreciate your thoughtful consideration of many topics, hoping you find some of this stuff worth your time to think about more.

Expand full comment
author

I'm familiar with a couple of these papers. Agrawal et al. go through a bunch of models of how AI could affect various aspects of economics; I would not describe this review as being primarily about "how Baumol could prevent the singularity", but yes, sure, if you always need humans to do certain tasks, that will put a limit on growth. And yes, I know Chad Jones' papers on semi-endogenous growth; AI researchers substituting for human ones will be essential to keeping growth going if ideas produce other ideas with diminishing returns!
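
To make the "ideas getting harder to find" point concrete, here is a toy simulation of a Jones-style semi-endogenous idea production function (the parameter values are arbitrary; the only point is the qualitative contrast between a fixed and a growing research workforce):

```python
# Toy semi-endogenous growth: new ideas per period = delta * L**lam * A**phi, with phi < 1.
# With a fixed research workforce L, the growth rate of the idea stock A decays toward zero;
# if L itself grows (say, AI researchers substituting for human ones), growth can be sustained.
delta, lam, phi = 0.1, 1.0, 0.5  # arbitrary illustrative parameters

def final_growth_rate(workforce_growth: float, periods: int = 50) -> float:
    A, L = 1.0, 1.0
    for _ in range(periods):
        A += delta * L**lam * A**phi
        L *= 1 + workforce_growth
    return delta * L**lam * A**(phi - 1)  # proportional growth rate of A in the last period

print("fixed workforce:  ", round(final_growth_rate(0.00), 4))
print("growing workforce:", round(final_growth_rate(0.05), 4))
```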

Expand full comment

Yep that's fair. Ideas getting harder to find seems like a very important dynamic here. Moore's Law has ended in important ways, though new techniques could keep it going for a while. More important recently has been algorithmic progress in AI that reduces the amount of compute necessary to train a model to a certain level of performance.

I really wonder whether AI contributions will be able to exponentially improve compute efficiency, or meaningfully support the continuation of Moore's Law. If they're able to, we'll be swimming in more compute than is used by any biological brain within a few decades, and the odds of AGI seem much higher. But if Moore's Law meaningfully ends, this whole AGI thing could be kaput.

Expand full comment

To me, the whole subject of AI which is so popular right now suffers from an excessive interest in details, which is obscuring the bottom line. The whole reason AI is being developed is that we want more power. But we can't handle the power we already have. It's the simplest thing, and all this expert posturing is getting in the way.

Imagine you're the parent of a teenager. Your teenager wants a car. But they keep crashing the moped you bought them. And so you would say, prove to me you can handle the moped, and then we'll talk about the car.

So kids, come up with credible solutions to nukes and climate change, and then we can talk about AI. Until then, forget it.

Expand full comment
Mar 8, 2023 · Liked by Noah Smith

Good book on the ways nuclear weapons systems are prone to cyberattack: https://www.amazon.com/Hacking-Bomb-Threats-Nuclear-Weapons/dp/1626165645

Three mechanisms:

1. Direct detonation is probably not possible via cyberattack. Command and control is a secret, but in all likelihood includes human operators of airgapped systems. So the simplest story is likely bunk.

2. False attacks that simulate a nuclear first strike are likely possible. Israel's Operation Orchard took over Syria's air defense systems, feeding them false information to mask the ongoing attack. While it's never been done, it could be possible to hack into missile defense systems and fake an attack in order to provoke a nuclear response strike. This is the better story for AGI doomers.

3. Spread of information via hacking would be a systemic risk factor in nuclear war. Better hacking could expose secrets such as missile locations, launch codes, and response plans that threaten MAD. This isn't nearly as direct a threat as the previous two, and is less directly caused by AI.

Expand full comment
author

I really do need to read that!!

Expand full comment
Mar 8, 2023 · Liked by Noah Smith

I think a lot of the rationalist community has really lost the ability for self-reflection on these things, in a way that makes me wonder if I was mistaken to ever read them at all. Their argument basically boils down to 'AGI would have such a serious potential consequence that any argument against it, no matter how implausible or impossible to evaluate for plausibility, is valid.'

Imagine if you took this approach to life. Is there a sniper waiting to shoot me if I open my window blinds? It's not technically impossible! And the consequences are dire if it's true - I would die! Your arguments that no one is trying to kill me, and if they were they'd do it differently, and also that there's no evidence at all for it, pale in comparison to the risk of my total annihilation! You'd be immobilised. In some ways I risk my life every day, and humanity does things that could imperil it in the future, but that's just life in a world of unknowns.

The doomers need to draw some kind of reasonable line from where we are to a real AGI, the danger it poses, and what the evaluations of plausibility are. Otherwise, it's just people indulging their favourite pastime of philosophising about AGI. Which is fine, have fun, but it doesn't matter to the rest of us.

Expand full comment

I agree that if one thinks that AGI has a vanishingly small probability to come in the next 100 years (like the probability of the sniper you mention) then it makes no sense to take radical actions, even if the potential implications are huge. I think Eliezer has explicitly stated at some point that if that probability was less than 5%, he would agree that there are more pressing problems.

But many people in the rationality community think that the probability of AGI by 2120 is > 10%. If one assumes this for a second, then it absolutely makes sense to invest serious thought and resources into the questions what this would mean and how to make sure it goes well for us.

As for the factual question how likely AGI is in the next 100 years, some arguments in favor of a higher probability are:

- The current Metaculus prediction for AGI is 2040, with a general downward trend since the question was first asked: https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/

- Recent advances in AI have shown that for many obstacles that were considered fundamentally difficult (very unreliable answers to math questions), there is one weird trick that allows significant progress (adding "let's think in steps" to the prompt). This should make us more sceptical in the future of any arguments against AGI relying on the inability of current AIs to perform certain tasks.

- For scientific advances X, it is very often difficult to predict a "reasonable line from where we are to X", because if there was an obvious such line, then people would already have tried it out (barring lack of resources etc.). What we can observe is that we are currently pouring lots of money, talent and resources into advancing towards X = AGI, so if there is such a line of action, this is making it more likely that we will find it.

Expand full comment

100% correct. Even worse for the doomers is that even if AGI could accomplish this list (except for asteroids), none of the items would end humanity. They could put a dent in civilizations, but actually eliminating humanity is so hard that even if we made it our job, we couldn't do it. Our will to survive generally exceeds the will to destroy.

Expand full comment

Kind of a genius post. Admiration if you’re right, and if you’re wrong...

Expand full comment

This piece was disappointing in its ignorance of what Yudkowsky and others are actually worried about. If you don't understand their argument, maybe that makes them bad communicators, but it doesn't make you a good refuter, either. Sorry.

Expand full comment
author

Can you be more specific?

Expand full comment
Mar 8, 2023 · Liked by Noah Smith

Happy to, but later this week. I need to do a little thinking how to say it precisely and still keep it nicely readable.

Expand full comment
author

Thanks! I realize that their chief concern is AGI, and I might eventually have a couple things to say about that. But I've encountered enough people who are freaked out by the recent progress in LLMs (and by Evil Bing) that I thought it would be fun to write this post to calm them down. Also I've been thinking about what makes LLMs not an AGI, and I hadn't really talked about the autonomy piece yet, or what I think autonomy really means. So this was an opportunity to say that.

I hope Yud & co. go on worrying about AGI...someone has to! Not sure many definite results will come out of that, though, since AGI is so inherently science fictional. Vernor Vinge invented the idea of the Singularity precisely to represent the impossibility of predicting anything about the age of AGI in advance.

Expand full comment

And while dooming is hard to eliminate, even if they are right there is no way to stop it from happening. If money continues to flow, development will continue, and China will keep working on it even if corporations or other governments stop.

Expand full comment

I suggest looking at Gary Marcus. I have more technical references from psychometricians, but their argument can be summed up in very similar terms as Marcus's--we don't have artificial intelligence capable of novel reasoning (much less agency). The greatest danger is the malign use of LLMs by those of mercenary or bad intent.

Expand full comment

Since Yud is actually running a Berkeley AI-themed religious cult and not a research organization, "refuting" details is not the way to go and you should actually just declare that it's silly and then ignore him.

(Saying they must have a point because they have a lot of details is the Courtier's Reply fallacy.)

A story about an AGI taking over the world or turning it all into paperclips is impossible simply because it would need all of its plans to go well enough that it never breaks down, which can't happen in real life because of entropy. In real life it would have to get a real job to pay for its power bill and maintenance, just like humans have to get one to eat and so don't have time to take over the world.

Expand full comment

Merely human-grade genius doesn't let you rule the world. Even CEOs, let alone Presidents, are selected for a lot different traits than raw brainpower.

If we create a species as much smarter than us as we are than dogs, I'm pretty sure they do take the world if they want.

But I'm guessing you side with the people who reckon it's a long, long time before we create anything that much smarter than us.

Expand full comment

AGI can easily make infinite money in the stock market, on OnlyFans, or a multitude of other ways.

Expand full comment

You cannot easily make infinite money in a zero-sum game like stock trading.

Expand full comment

Good take. My sense is that LLMs will be useful in three domains: 1) adding a conversational element to boring speech (customer support) -- think of it as the Windows for language; 2) productivity in software engineering; 3) kindling for creative pursuits.

None of these seem to be close to ending the world.

Expand full comment
Mar 8, 2023 · Liked by Noah Smith

How much would your post change if you read about Toolformer (https://arxiv.org/abs/2302.04761; the abstract alone is enough to give you an idea) and then imagined that some of those tools can generate and execute code? Of course you can argue that's not just an LLM, but I would say it's a small enough step away that it should have been talked about in this piece.

Expand full comment
author

Toolformer is a step toward the kind of multi-functional AI I was talking about, where LLMs are augmented with a bunch of purpose-built models for various functions.
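
For a picture of what "augmented with purpose-built models" can look like mechanically, here is a rough sketch; the [Tool(args)] call syntax only loosely follows the Toolformer paper, and the dispatcher and the two tools below are made-up stand-ins, not a real API:

```python
# Hypothetical dispatcher for an LLM whose output embeds tool calls like
# "[Calculator(12*(3+4))]": the model writes the call, a purpose-built tool
# answers, and the result is spliced back into the text.
import re

TOOLS = {
    "Calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy calculator, illustration only
    "Search": lambda query: f"<top result for {query!r}>",             # stub
}

CALL_PATTERN = re.compile(r"\[(\w+)\((.*?)\)\]")

def run_tools(model_output: str) -> str:
    def substitute(match):
        name, args = match.group(1), match.group(2)
        return TOOLS[name](args) if name in TOOLS else match.group(0)
    return CALL_PATTERN.sub(substitute, model_output)

print(run_tools("The answer is [Calculator(12*(3+4))] according to my math."))
# -> The answer is 84 according to my math.
```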

Expand full comment

That, then, is the mechanism by which LLMs get us closer to AGI.

Expand full comment
Mar 8, 2023 · edited Mar 8, 2023 · Liked by Noah Smith

Makes sense.

See also Pirate Wires/Solana on the same theme a couple of weeks ago.

Expand full comment
Mar 8, 2023 · Liked by Noah Smith

I worry more that someone will release MAGA World, Fox-like chatbots to keep the ignorant and hateful part of our society in constant outrage, or worse. It's working for Fox - can you imagine if it's magnified?

Expand full comment

I was hoping some form of AI could parse through the thousands of hours of video from Jan 6th and sort out whether it was an insurrection or a Fed-generated false flag.

Expand full comment

Don't limit yourself to ignorant right-wingers; ignorant degrowth environmental alarmists or Luddite tankies could just as easily be goaded into expanding their dangerous ideas.

Expand full comment