94 Comments

So it sounds as if we should start to worry less about technological stagnation and more about whether progress in AI is happening too quickly to keep it safe.

I was surprised to see that when Matt Yglesias writes about AI risk, he gets very strong pushback from a bloc of his paid subscribers who think the risk doesn't exist and that it's all Luddite nonsense.

It was hard to tell how many of those people were conservatives predisposed to this view by all the propaganda about global warming being a hoax, and how many were just going a little too far with the technophile, "abundance agenda" framework.

Either way it seems clearly wrong. People don't seem to appreciate that even if there's some theoretical argument for why there can't be existential AI risk, you can accept the argument with 99 percent confidence and still think AI risk reduction is a very, very high priority. Why is there so much resistance to this idea?

Dec 12, 2022 · Liked by Noah Smith

Strongly agree! There's even more than one kind of AI risk. Synthetic biology + AI is very risky and likely to be a problem even before the Skynet scenario is.

Imagine a ChatGPT-like model that takes natural language instructions but, instead of text, outputs nucleotide sequences. It'd be hugely helpful for designing gene therapies, yes, but also novel pandemic pathogens.

There are already companies that'll take a sequence and mail you back DNA, which a competent biologist can turn into infectious viral particles. Currently if you ask for smallpox, they'll call the FBI; what happens if it's a virus no one's ever seen before?


This was all predicted in 1946 by a science fiction story called "A Logic Named Joe":

"This fella punches, 'How can I get rid of my wife?' Just for the fun of it. The screen is blank for half a second. Then comes a flash. 'Service question: Is she blonde or brunette?' He hollers to us an' we come look. He punches, 'Blonde.' There's another brief pause. Then the screen says, 'Hexymetacryloaminoacetine is a constituent of green shoe polish. Take home a frozen meal including dried-pea soup. Color the soup with green shoe polish. It will appear to be green-pea soup. Hexymetacryloaminoacetine is a selective poison which is fatal to blond females but not to brunettes or males of any coloring. This fact has not been brought out by human experiment, but is a product of logics service. You cannot be convicted of murder. It is improbable that you will be suspected.'

The screen goes blank, and we stare at each other. It's bound to be right..."


Given the current state of AI and computational chemistry, we're still a few decades from something that good. AI works well in well-explored areas, so it can fold proteins that closely resemble proteins that have already been well studied. It tends to do poorly with novel proteins that don't look like anything already in the library. Computational chemistry doesn't rely on prior analysis, but the problem is so hard that even the best programs backed by acres of processing power do, at best, a mediocre job.

We're definitely seeing progress, but the big surprises and benefits are going to come from the incremental steps along the way.


As someone of a like mind I'll try to provide some insight into that style of thinking, which isn't really "conservative" so much as the default for people outside a very specific form of techno-utopian thinking. I believe it comes from a couple of places.

One is pattern matching. If you look at the history of ideas people, a.k.a. intellectuals, it's noticeable how frequently there are predictions of some enormous world-shaking crisis which cannot be perceived with your senses, but instead is justified via indefinite extrapolation of very small or short-term trends. Invariably, the framing is that the only possible solution is to take extraordinarily disruptive steps as demanded by the intellectuals who can "see" these crises. Malthus was of course the archetype who lends his name to this way of thinking, but in the past 150 years there was also at least Marx, the Limits to Growth people, peak oil, and the "global cooling means a new ice age" people. In the contemporary era we have the overpopulation people, and we just lived through a period in which our lives were completely up-ended by predictions that SARS-CoV-2 infections would grow exponentially in one giant wave that infected everyone at once (didn't happen). Plus, indeed, we must not forget the climate emergency people, who have a very long history of making predictions that don't pan out, based on extrapolations of feedbacks from models that then don't show up in the real data. We all know this is true, but to pick just one of thousands of examples, from March 2000:

https://web.archive.org/web/20091230061832/http://www.independent.co.uk/environment/snowfalls-are-now-just-a-thing-of-the-past-724017.html

"Children just aren't going to know what snow is". It's snowing outside my window right now and children born in 2000 are now full grown adults. Claim doubt is created by propaganda if you like my friend, but the archives of full of such cases and it is irrational to ignore them. This is just basic Bayesian reasoning!

This stuff happens because our society rewards Big Scary Ideas with book deals, TED talks, millions of followers, TV coverage, etc. If you end up being proven wrong, it's all swept under the carpet. The incentives are all aligned in favour of crying wolf; there are simply no reasons not to, as people who remember and point out failed predictions tend to get piled on, ignored by the media, banned from social media and so on. The winners are the people who claim crisis, again and again.

So now we have another claim of impending world ending crisis, AI risk. Like all the others this one can't be perceived just by looking at the state of things and applying some common sense. It's only obvious to the great minds capable of understanding exponential growth. But the sort of people who make these arguments are, in my experience, the ones who don't understand exponential growth! They tend to just see a trend, extrapolate it out ad infinitum exponentially, and then baselessly assume that this is what will definitely happen. It's a form of overconfidence.

AI risk is an especially irritating one because the history of AI is absolutely filled with these sorts of predictions which never panned out - that was one of the causes of the famous AI winter.

So you ask, "why is there so much resistance to this idea"? Because it looks like transparently motivated reasoning, usually in favor of getting grants to study this supposedly existential risk, and it comes from a class of society that engages in such talk way too often, with way too little penalization for getting it wrong.

Finally, in case you think I'm some dumb uneducated hick brainwashed by propaganda, please consider that I spent many years working for one of the top AI companies, was reading some AI papers along with the MinGPT PyTorch code just last night, and am excited about the future of AI. Despite that, I still think talking about paperclip maximization is a form of grifting similar to the types we've seen dozens of times before.


The argument & perspective you lay out is reasonable within its scope, which is as you say simply pattern-matching to a similar style of crisis claims throughout history. But pattern-matching a style of argument does not respond to the argument itself, and it does not amount to an epistemologically justifiable rejection of the argument; it is at best a time-efficient reason not to bother looking into whether the argument has merit.

You are an informed person in the field of AI, so I'm guessing you have opinions on AI risk that are more pertinent to the central argument than pattern-matching, and I expect to some degree you are comfortable pattern-matching in this case *because* you reject the argument itself. In which case, in polite, reasonably intelligent company, it would be nice if you laid out your reasons to reject the argument, rather than a thing which is not actually itself an argument (and which is also subject to the problem of self-flattery: "only *really* great minds can see that this is just another instance of self-styled great minds thinking they can perceive a problem that others can't").

The central argument of AI risk, as I see it, is that there is some reason to believe that an entity will appear, somewhere in the next 10-50 years, which is more intelligent and thus competent than human beings are, and that this necessarily represents a risk to humanity if that entity does not very closely share humanity's values and goals.

What parts of that argument do you reject, why, and with what degree of confidence?


Alright. I'll explain why I reject the existential risk argument in a moment. I try to avoid this debate because it seems the arguments quickly become more a matter of faith than something concrete that can be debated. For example, pointing out reasons why such a scenario is absurd, extreme or otherwise not worth thinking about tends to get an answer like, "yes but even if the chance is only 0.0001% the consequence is the end of the world, so we should still [get money to] study it". But that's an argument that can be applied to almost anything, because there's no way to distinguish at these tiny probabilities between something serious and pure sci-fi. I could make exactly the same argument for funding me to study the risk of a nuclear accident creating mutant rebels with superhuman powers who try to take over the world - sure, it's very unlikely, but if the consequence is the enslavement of the human race then it's important enough to bung me a few million to think about it all day, right?

Never mind. Here's a more specific objection to the argument for you. AI/ML has changed a lot since I first encountered it 15 years ago but a few things remain constant. AI systems are split into two separate phases, training and inference. Training creates a model and is a batch-like process. As the model is built it's continuously tested to measure performance. Once it reaches acceptable accuracy it's deployed to production and inference uses it. At all points the system is monitored 24/7 by a mix of people and automated systems. This is true even for quite trivial and powerless ML systems, simply because the system exists to solve a business goal and if a problem arises, as they always do, it will need fixing. Some systems may be retrained frequently, others only rarely. Because training costs money it's done only to the extent necessary.

Existential AI risk relies on a long series of assumptions that just don't fit the above model. During the training process you take an AI initialized to a random state and gradually refine it. If your AI can issue commands to influence the world, then it has to be trained in a simulation (because early on it will spend lots of time issuing random commands), and because training costs money, that process will once again be carefully monitored. If your AI starts going rogue with that command interface, you'll notice quickly, because you're monitoring how well it's learning to use those commands anyway. The moment that gets noticed - and it will - training will be rolled back to the last checkpoint and they'll try again (the process is randomized). This is what alignment means, basically - you watch what it's doing during training and then alter things along the way to ensure the result will meet your business goals.
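As a concrete (toy) picture of that workflow - evaluate, checkpoint, roll back, and stop as soon as the business target is hit - something like the sketch below; the functions are stand-ins, not any real pipeline:

```python
import copy, random

def train_one_epoch(model):                  # toy stand-in for a real training step
    model["skill"] += random.uniform(-0.02, 0.10)

def evaluate(model):                         # toy stand-in for a validation metric
    return model["skill"]

def looks_anomalous(metric, last_good):      # toy stand-in for human + automated monitoring
    return metric < last_good - 0.05         # e.g. a sudden regression

def train(target_metric=1.0, max_epochs=200):
    model = {"skill": 0.0}
    checkpoint, last_good = copy.deepcopy(model), 0.0
    for _ in range(max_epochs):
        train_one_epoch(model)
        metric = evaluate(model)
        if looks_anomalous(metric, last_good):
            model = copy.deepcopy(checkpoint)    # roll back to the last good checkpoint
            continue
        checkpoint, last_good = copy.deepcopy(model), metric
        if metric >= target_metric:              # business goal met: stop, don't burn more money
            break
    return model

print(train())
```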

Existential risk advocates have a series of dodges to try and escape this problem. For example, they say:

1. It will improve exponentially, which will be so fast nobody can stop it in time. Doesn't work: the moment a model's performance meets business goals, training terminates automatically, because it would waste money to continue.

2. It will be a self-improving AI, i.e. with a feedback loop between inference and a continuous training process. If you don't have that it gets difficult to explain how an AI trained on human output can become radically smarter than humanity itself. But even if we handwave away the sci-fi nature of such an AI, why would any business create one? There are rare cases where continuous training is helpful but AFAIK they're all quite specialized (don't need/want general AI, like spam filters). The direction things are going in with LLMs is instead to train a model occasionally and then give it the ability to ask questions of more conventional IT systems e.g. search engines, just like a human would. This seems more efficient than trying to constantly retrain it because, again, training is really expensive so why go further than necessary.
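For what it's worth, the "train occasionally, look things up at inference time" pattern in point 2 looks roughly like this (toy placeholders, not any particular product's API):

```python
# A sketch of the "frozen model + runtime retrieval" pattern described above.
# `web_search` and `frozen_llm` are toy placeholders, not a real API.

def web_search(query: str) -> list[str]:
    # Placeholder for a conventional search index the model can query at inference time.
    return [f"(snippet about: {query})"]

def frozen_llm(prompt: str) -> str:
    # Placeholder for a model trained months ago and never updated in production.
    return f"[model answer grounded in]\n{prompt}"

def answer(question: str) -> str:
    # Fresh knowledge comes from retrieval, not from retraining the weights.
    context = "\n".join(web_search(question))
    return frozen_llm(f"Using only this context:\n{context}\n\nQuestion: {question}")

print(answer("What changed in the news today?"))
```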

In summary I don't see why this workflow will change. For as long as training is a carefully monitored batch process separate from production inference, super-intelligent AI cannot arise. And it will very likely remain that way because AI is a tool intended to solve real world problems, but there's no incentive to create an AI that could turn into HAL9000 tomorrow. It'd be a waste of money.


Your arguments in this comment seem to be 1) the possibility of AI becoming superintelligent is prima facie as absurd as, e.g., the possibility that nuclear accidents could create superpowered mutants; 2) artificial superintelligence (ASI) cannot arise because we would notice it arising early and put a stop to it; and 3) there is no incentive to create ASI anyway.

1) is absurd. There is no one who seriously thinks nuclear radiation can mutate humans in ways that make them superpowered, there is no plausible biological pathway by which that could happen, and no one is giving out money to study the possibility because people don't hand out millions of dollars to study things that have no plausibility. AI risk is not in that category: there are many people who recognize it as a legitimate concern with a plausible path to reality, some of them very prominent and respected, and there is money being made available to study it precisely because there are many funders who disagree with your dismissal of the risk.

2): A) There is no guarantee that the model of monitored training that you describe will be the model under which ASI is developed.

B) Current AIs developed under that model are already capable of things their developers do not understand, did not anticipate or observe in training, and thought they had under control but didn't; cf. OpenAI's attempts to prevent ChatGPT from providing "dangerous" information, and people immediately jailbreaking it and getting it to tell them how to cook meth and commit crimes.

C) Human beings turn evil/violent/malicious constantly despite being brought up under the constant supervision of parents, teachers, coaches, etc.; the idea that a human-level or higher AI could not also turn evil after its explicit training requires your argument 3), that there is no incentive to create something smarter than us and/or capable of continuous learning. This is obviously, absurdly false: there is *massive* incentive to do so, because an agent more capable than humans can do things humans can't but wish they could, including a huge array of things with extreme economic potential. And we have already created narrow AIs that are superhuman in their field and which can be incredibly useful, such as DeepMind's AlphaFold - the applications of entities that can understand things humans can't should be extremely obvious.


Sigh, this is why I try to avoid these debates. It immediately turns into assertions of unfalsifiable beliefs.

1 - Is the best available difference between comic-book staples like mutant superheroes and superintelligent AI that one has Serious People investing in it and the other doesn't? Surely there must be a difference more solid than that? Rich people fund things that don't make sense all the time. FTX was funded and defended by all manner of prominent/respectable people, but it was still nonsense that evaporated the moment an outsider looked a bit too closely. Over the past 10 years billions of dollars poured into flying-car startups which are now almost all defunct. It's hardly the only example from recent times. Argument from authority isn't convincing to me in this case.

2a - Yes, if we ignore the way things work now whilst claiming our scenario is an extrapolation of it, we can indeed have our cake and eat it.

2b - I can assure you OpenAI observed all those things during eval, was well aware of prompt hacking, and launched ChatGPT anyway because it's cool and got them a lot of attention. If you want an example of a company doing otherwise, look at Google, which has DALL-E/ChatGPT equivalents internally but refused to make them available for public testing, supposedly because they're afraid of people getting ideologically unacceptable answers (unacceptable to Google). The business goals of OpenAI are different and don't require absolute control over their creation. Total control would actually work against those goals by reducing creativity, which is a major draw.

2c - Human beings do indeed turn evil, although it's often subjective what evil means. That's why society has lots of systems to stop them, systems that would also apply to any AI because what it can do would be limited to the actions its creators can take. That's why the existential AI risk scenario has to posit "superintelligence", a form of intelligence that's explicitly stated to be entirely unknowable (like a god). That's the only way to explain how an AI could override every system society has to stop evil people.

3 - This is just a restatement of the belief that a "superintelligent" AI (whatever that really means) is unknowable and all powerful.

Obviously there are use cases for AI; I already said I'm excited by it. That doesn't automatically imply there are use cases for generalized superintelligent agents that are allowed to do anything the operator can do. Business is a cost/benefit decision, and by the definition of superintelligent AI the cost can be uncapped. Remember that if such an AI does something illegal or immoral, it'll be the owner/operator who takes the fall, not the machine.

So that's why I'm not the only one in this thread who perceives AI risk to be a new religion. There's not a single rebuttal anyone can make to the arguments you're presenting here, because literally every possible argument can be dismissed with one of the following:

a. Nothing about present day tech can be assumed, because we're talking about the future.

b. Superintelligence would let AI do that.

c. Superintelligence is inevitable.

These aren't arguments that extrapolate from a base of small and uncontroversial axioms using logic. They're joker cards, rephrasings of "have faith my child!"

Dec 13, 2022 · edited Dec 13, 2022

Two objections:

1. People do cool things for impractical reasons when they have enough money. Training AI to play Go or poker is a waste of money in a certain sense, yet they did it because they could.

2. There are people out there who want to create a superintelligence purely for the sake of it, and maybe some of them don't particularly care about safety. One day, perhaps, computation will be so cheap, and training algorithms so advanced, that anyone will be able to create a malicious AI on their laptop. I think we can agree that a strong AI deliberately designed to do ill could destroy humanity. Isn't this enough reason to worry about AI?


So what if some much more intelligent and competent entity appears in the next 10-50 years? What is it going to do? Is someone hooking it up to the world's nuclear arsenal as in the Forbin Project - a crappy 1970 movie that was fun to watch with the gang from the AI lab? Is it going to get farmers to grow paper clips instead of wheat and lead to mass starvation? If there's a threat to humanity, it is going to be because someone is going to use it in a way that threatens humanity.

Over the years, we've had lots of technologies that have threatened humanity just fine. Other than being harder to weaponize than most of these, I don't see how some super-advanced AI is going to do a better job of it, and no one seems to even be trying to explain this.


Guy, many people have written about the general possibilities of how something much smarter than us could wipe us out - if you haven't seen it, you haven't looked.

The point of being worried about something that is smarter than you is that it's smarter than you and you can't predict, or necessarily even understand, what it might do to you. But consider an obvious scenario, given that we've just been through a global pandemic. AlphaFold exists and demonstrates that AI can understand and predict protein behavior better than humans can, and mail-order labs exist that will create viable DNA to spec. Something only a little bit smarter than us could engineer an utterly devastating virus and release it into the world using nothing more than email.


How does AlphaFold demonstrate something? Is it running as a background process somewhere, announcing its results on a Twitter account? Can it post on YouTube? How can it take advantage of mail-order labs to destroy us? Does it have a credit card and an email account so it can buy the appropriate DNA? Where does it get delivered? Does it rent a post office box? How is it going to spread the virus? Email? Has there been a COVID case linked to IMAP or POP usage?

I still feel like I am missing something here.

I could accept that a focused group of people could use AlphaFold or one of its smarter successors to engineer a deadly virus and work out a way to spread it. (I'll gloss over the whole problem of how they test it to assure that it actually is deadly and spreads easily.) They wouldn't even need to be a lot smarter than most, though they'd need money to buy the compute time, pay for the synthesis, figure out delivery, and proactively deal with likely countermeasures. That's a realistic threat, but being threatened by people is a whole different thing from me being threatened by AI software.


I said what AlphaFold demonstrates: that AI is already capable of understanding and predicting an extremely important biological process better than humans do.

You object that that doesn't matter because AlphaFold is not yet a full-fledged agent capable of taking action in the real world - why is it so hard to imagine that sometime in the next several decades, somebody decides to give an AI that kind of agency? An AI that can directly act in the world will be immensely useful and economically valuable; there are massive incentives for some group to do so. And consider AlphaStar, which is already a superhuman strategic agent (within the limits of StarCraft) - is there some reason to believe that it is impossible for an AI agent to act in that way in the real world?

No one is claiming that we are at risk right now from any existing AI. You are simply being asked to look forward 5, 10, 20 years and imagine the plausible developments. Unless you can convincingly demonstrate that those possibilities are certainly impossible, or that no humans will ever act in a short-sighted, selfish way for their own gain, you should at minimum stop decrying the work of people who are trying to figure out how we can build AI that is thoroughly friendly to human interests.


Cogent and insightful, thanks. Concomitantly, we can also neglect massive real issues because of their low salience in our daily lives. For example, while the Twitterati are off their chairs about ChatGPT, a few thousand people at COP15 trying to halt the stunning decline in the natural world are almost completely ignored.


AI risk is not believable because it’s literally (as in actually literally) a new religious movement made up by Yud that worships computers. It is not scientific. If Yud hadn’t told you it was going to enslave you, there would be no reason for the idea to appear in your mind.

In particular there’s no reason to believe in “superintelligence”, that “more intelligent” is “more likely to end world”, or that “AI can develop better AI exponentially”. You can already “develop a general intelligence” by having a baby and people rarely worry that it will destroy the world. Even though it’s capable of it!

> People don't seem to appreciate that even if there's some theoretical argument for why there can't be existential AI risk, you can accept the argument with 99 percent confidence and still think AI risk reduction is a very, very high priority. Why is there so much resistance to this idea?

Yes, rationalists are recruited to a cult by waving math at them that does tricks with infinity so they’ll see something infinitely scary and dedicate their lives to stopping it. Sometimes called Pascal’s mugging.

Probability theory can’t do what you want here - there’s a whole online book about it. (https://metarationality.com/)


Pascal's muggings only work when you have a literally infinite payoff rather than a merely very large one, and even then only when you can put probability 0 on infinite payoff from any action other than the one you're being mugged into doing - which seems rare in practice. There are many things which happen with probability less than 0.1% in one's lifetime that it's prudent to take precautions against, e.g. being in a serious car accident or there being a pandemic which kills over 100 million people.
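For what it's worth, the finite-vs-infinite distinction is just expected value (a sketch in standard notation):

```latex
\mathbb{E}[\text{loss}] \;=\; p \cdot L
% Finite L: with, say, p = 10^{-3} and a large but finite L, the expected loss is an
% ordinary number to weigh against the cost of precautions; that's normal decision theory.
% Infinite L: p \cdot \infty = \infty for every p > 0, so the "mugging" dominates everything,
% unless some alternative action also carries nonzero probability of an infinite payoff.
```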

The reason AI can develop exponentially in a way a human can't is that you can give an AI more compute indefinitely, which will make it smarter automatically, and it's much easier to change the algorithm an AI uses than it is to change the one human brains use.

You might be interested in this paper which gives empirical evidence of goal misgeneralisation. https://arxiv.org/abs/2105.14111

Dec 13, 2022 · edited Dec 13, 2022

> The reason AI can develop exponentially in a way a human can't is that you can give an AI more compute indefinitely, which will make it smarter automatically, and it's much easier to change the algorithm an AI uses than it is to change the one human brains use.

No you can't, for most interesting tasks; this is a trick based on not defining "smarter". (The closest thing is search problems like AlphaZero, but that can only get faster at searching its own "imagination", which doesn't have to interact with real life.)

Also, who pays the power and maintenance bill for this exponential growth? ChatGPT looks easy because Microsoft paid millions for you, but it won’t be free forever.

Retraining ML models is a relatively difficult process that doesn’t happen automatically, so they’re not great at changing or learning new things either. We’re just very easily satisfied with them right now, so we don’t care that they regress all the time.


If we get to the point where AI systems can fully automate AI research, it seems like the capabilities of AI systems would grow in proportion to their current capability, which gives exponential growth. The prize for doing this is essentially all of the labour share of income, so I think there would be strong incentives to pay for the training runs.

The same argument applies to availability of compute but here the threshold is being able to fully automate chip design and manufacturing.

Since compute availability and quality of learning algorithms are complements, it probably takes getting to the point where both AI research and chip design and manufacture can be automated to get long-run exponential growth.

There is complexity there, though, from the fact that it potentially gets exponentially more computationally expensive to achieve linear increases in performance.
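A compact way to write both the growth claim and that caveat (a sketch of the standard argument, not a forecast):

```latex
% Self-improvement claim: capability grows in proportion to itself,
\frac{dC}{dt} = k\,C \quad\Longrightarrow\quad C(t) = C_0\, e^{kt} \qquad \text{(exponential growth)}
% Caveat: if performance is roughly logarithmic in compute,
P \;\propto\; \log(\text{compute}) \quad\Longleftrightarrow\quad \text{compute} \;\propto\; e^{\beta P},
% then exponentially growing compute buys only linear gains in performance,
% and the feedback loop is much weaker than the first line suggests.
```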

Dec 12, 2022 · Liked by Noah Smith

If you're looking for AI slowdown risks, the imminent end of Moore's law is a big one. (Don't believe me that that's happening? Nvidia's CEO thinks it is: https://www.marketwatch.com/story/moores-laws-dead-nvidia-ceo-jensen-says-in-justifying-gaming-card-price-hike-11663798618).

We've gotten a lot of the recent AI progress via a faster-than-exponential increase in the amount of computation used to train the models. That can't be sustained, and it stops being sustainable sooner if hardware isn't getting better at an exponential rate. There are ways around it -- specialized (neuromorphic/sparse/etc) hardware, algorithmic breakthroughs, quantum computing -- but if those don't pan out or take too long, it's reasonable to expect the current wave of progress to slow down.
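A toy illustration of why that can't be sustained; the doubling times below are rough assumptions for illustration, not measurements:

```python
# Toy model: the cost of staying on the frontier when training compute grows
# faster than hardware efficiency. Both doubling times are assumptions.
compute_doubling_months = 6    # assumed doubling time of frontier training compute
hw_doubling_months = 24        # assumed doubling time of compute per dollar (Moore-ish)

def cost_multiplier(years: float) -> float:
    """Relative dollar cost of a frontier training run after `years`."""
    months = 12 * years
    compute_growth = 2 ** (months / compute_doubling_months)
    efficiency_growth = 2 ** (months / hw_doubling_months)
    return compute_growth / efficiency_growth

for y in (1, 3, 5, 10):
    print(f"after {y:2d} years: ~{cost_multiplier(y):,.0f}x the cost")
# Roughly 3x after 1 year and about 33,000x after 10 years under these assumptions.
```

With those assumed rates the dollar cost of staying on trend doubles roughly every eight months, which is the point about sustainability.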

Of course, there's enough innovation here already to disrupt quite a few industries even if progress stopped tomorrow.

author

That's really interesting, thanks! I've read quite a bit about Moore's Law slowing down, but I don't know much about relationships between compute costs and AI performance!

Dec 12, 2022 · edited Dec 12, 2022 · Liked by Noah Smith

It's not so much that Moore's Law is dead yet as that it's running into fundamental physical constraints.

Right now, the industry is at the 3nm node on the roadmap. That's 2-3 dozen atoms across. 2nm is up next in 2024 (public knowledge). To be clear, most of the components in a chip on any given node are MUCH bigger than that - for 2nm, that's 20-45nm - but the general principle remains that the smaller we get, the more difficult it gets.

Put plainly, for a semiconductor to _semiconduct_, it needs to be big enough for most atoms to be Si and some to be As/Ge or whatever else (most modern chips are more exotic than that, but it's the core principle). You may be able to get a single line of wire down to a single atom, but at some point it has to fatten out and semiconduct -- you can't make a single-file line of Si atoms, toss in some As/Ge here and there, and expect that to work.

Anyways, while there's plenty of roadmap left, at some point we get down to arranging lines of single file atoms -- roughly speaking, somewhere in the 0.1-0.2nm range. And you can't go smaller than that. At least not with atoms. Maybe by that point, someone's come up with an exotic nuclear computer that runs on quarks. Or quantum computing has taken off. The point is, you need new physics, not a better photolithography machine.

While there are all kinds of esoteric and more strictly accurate reasons why Moore's Law is slowing down _today_, it all boils down to "it becomes harder and harder to wrangle quantum mechanics to do what you want it to as you reach a single-atom wire size".


Going smaller is not the goal in itself, though. We just need to get lower power use, and that is probably still quite far from any limit.


Lower power use comes from smallness. Smaller components switch less charge at lower voltage, so each operation takes less energy. That's why everyone's obsessed with continually shrinking the chip.
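More precisely, the textbook relation for switching power is roughly:

```latex
P_{\text{dynamic}} \;\approx\; \alpha \, C \, V^{2} \, f
% alpha: activity factor, C: switched capacitance, V: supply voltage, f: clock frequency.
% Shrinking reduces C and historically allowed lower V (Dennard scaling), which is why
% smaller transistors have meant less energy per operation.
```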


Figure 1 in here is a good overview of compute usage: https://arxiv.org/abs/2202.05924.

The Economist wrote about it too and has prettier graphics: https://www.economist.com/interactive/briefing/2022/06/11/huge-foundation-models-are-turbo-charging-ai-progress


Maybe before the limits are reached, an AI will discover new ways to increase transistor counts, and then the Kurzweil-predicted virtuous cycle will begin to take hold.


The counts aren’t the only limit on compute power. AI would probably solve a dozen other engineering problems before it turned to raw transistor counts.


Sure, I just mentioned counts since that's what Moore's Law defines and what the article was concerned about losing. When AI machines start helping design their own components and improve their own performance, I think the curve will start bending up again.


Computing cost will obviously decrease swiftly during this decade (maybe just a bit slower than in the previous decade) even without any breakthroughs. See e.g. https://www.imec-int.com/en/articles/20-year-roadmap-tearing-down-walls


Why? Modern methods can parallelize across multiple devices. You don't necessarily need denser chips, if you can just buy more of them and use them simultaneously.

Dec 12, 2022 · Liked by Noah Smith

I'm struck by what seems like a contradiction between this post and your more pessimistic recent post about decreasing future stock market returns. If this techno-optimistic future with free unlimited energy, AI, biotech, robots, etc. is realized, couldn't all this free energy and other innovation free up a lot of $$ that could come back to us in stock returns? I take your point that the actual companies delivering innovation (solar panels, etc.) often realize little value themselves, but it seems our current civilization spends an awful lot of its resources on high-cost energy extraction and use, healthcare, even housing, etc. - all of which costs could potentially plunge through technological innovation.

author

Well, note that in the 1970s, we invented many of the technologies that would become the foundation of the IT boom, but stock returns were terrible. New tech takes a while to commercialize into profit-maxing mega-corporations. Wait for the 2040s, I'd say...


‘For example, a team at the University of Sydney just claimed a breakthrough in sodium-sulfur batteries, which could be made without the relatively expensive metals (especially lithium) that go into lithium-iron batteries’

If I know anything about the Australian mining industry (& I do), given the recent discoveries of lithium down here I'd expect BHP & Rio Tinto tanks & artillery to be surrounding U Syd to destroy these heretics quick smart 😀😀

Dec 12, 2022 · Liked by Noah Smith

Appreciated the mention of "massive desalination" through cheap and abundant energy. There are days when California has to pay other states to take excess solar production, and I've always wondered why we're not putting that extra energy into desal. Perhaps the desal tech isn't there yet, but my understanding is that desal's biggest hurdle is its energy intensity. Cheap, abundant solar can help us re-green areas that are becoming increasingly arid, like much of the western US.


"If it extends the powers of our minds the way physical technology extended the powers of our bodies, another productivity boom is probably ahead." This is a great way of thinking about machine learning derived systems.

The steam engine cost a lot of people their jobs, but it benefited many others. It made male physical strength far less important to economic success -- enter the successful nerd. Allowing men to focus on their minds (and women to participate in labor) gave us a huge economic boom. However, it had other consequences. As a group, men are considerably physically weaker today than they were 200 years ago; some studies estimate that an American colonial frontier woman could probably defeat most males today in arm wrestling.

In the 19th-century revolution, broader economic opportunities for weaker men led to a productivity boom but also a physically weaker population, as economic (and reproductive) success became less tightly coupled to physical prowess. Women earn 60% of all university degrees today, so today's machine learning revolution won't just affect males. If we were to mirror our experience of the first industrial revolution, we should expect this new one to result in a productivity boom as machine learning systems provide economic opportunities for the less intelligent. Considering how smart-biased our world has become, and how we treat and look down on those who are cognitively challenged, this ought to be a welcome thing.

The children's book author featured here is a perfect illustration. This guy is not smart or talented enough to produce and illustrate his own children's book. I don't mean to rag on him; I'm not either, nor are most of us. However, AI allows this comparatively talentless hack to compete with professional author/illustrators. This is the modern version of a John Henry duel. Expect a similar outcome: John Henry won his duel, but the steam engine eventually destroyed his industry. Should we expect AI to do the same to literature? Probably.

There is one more prediction that springs from this comparison though. Just as physical strength declined in the century of the steam engine, a corresponding decline in average IQ over the next century would not be unexpected. I doubt we'll get to Wall-E territory, but we might.


Really interesting thought!

But perhaps, just as a modern person could easily out-brawn a colonial American by using their car (or gun or washing machine or whatever other modern tool was relevant to a large class of common physical problems), a future person with lower innate IQ could outsmart a contemporary person using the tools they are completely familiar with. (At least, for a wide range of common thinking scenarios.)


Writing is an external aid to memory, yet literate people don't usually lapse into amnesia. Possibly we'll handle machine learning aids without dropping general mental ability.


They do, actually. I can't remember the book since I read it so long ago, but the ancient Greeks spent a great deal of time training their memories. They built entire virtual palaces in their minds with pictures in particular rooms and wall niches that served as mnemonics to remember stored information. This is why Socrates didn't write anything down; he thought literacy dulled the mind.

You've actually presented another great example of the same phenomenon: technology diminishing a human capability by rendering that capability less useful or important.


We externalize that ability. I haven’t personally learned a phone number in decades, but I’ve definitely developed a much greater ability to learn phone numbers by entering them into my phone.


People lose skills any time technology automates them. That's not intrinsically bad. I'm not even convinced that having part of our creativity, most of our independent thinking, and all of our paper pushing automated by AI would be all that terrible.

However, the machine is just simulating creativity. It requires training from real human painters / musicians / authors in order to "learn". So is a culture which outsources its creative work to an AI consigning itself to stagnation? Or could an AI learn from other AIs? Honestly not sure.


I asked ChatGPT to write an essay comparing the sculpture of Michelangelo to the music of John Cage. It made two boring points but one interesting one saying they both used spiritual/religious themes, whether Greco-Roman or Buddhist.

I used some creativity in coming up with the juxtaposition, but it noticed a parallel that I hadn’t (until it pointed it out). If we can manage that level of creativity, that’s something.


Exactly! When an AI plays Go, it plays like no human player ever would. In the first match, the programmers almost stopped the match thinking the machine was malfunctioning... until it was a few moves away from winning and everyone saw the endgame.

That's what machine learning systems are really good at, coming up with connections that humans miss. Your essay example is perfect.


That’s because the phone is what uses the numbers these days. It’d still be useful to remember them if we still had payphones or you went to jail.

Writing hanzi/kanji is probably the biggest issue of memory loss due to phones, actually, since you do need to hand write those sometimes.


I suspect you're right. As the old adage goes: God created men and women; Smith and Wesson made them equal. However, a physically weaker population has bred a number of problems. What is the corresponding mental equivalent of obesity?


Are people really that much weaker? They're generally taller, larger and in better health.


We're healthier but far more sedentary. I've also seen estimates from "scientific" sources on this, but they're only estimates. In light of the level of physical labor in the pre-industrial world, for both men and women, the theory makes sense as well. The dawn of animal power probably had the same effect. Evolution works.


In support of that, human jaws aren't what they used to be back before we discovered cooked food and stone tools. (Of course, we weren't exactly human back then.)


Posts like these are why I have a paid subscription. Thank you for another amazing post!

author

Thank you!!

Dec 12, 2022 · Liked by Noah Smith

Can’t wait for the future.

author

Luckily, it arrives at an inexorable pace! :-)

Dec 12, 2022 · Liked by Noah Smith

" if an autonomous robot uses AI to move around, is it predicting where it should go, or generating ideas of where to go?"

These are concretely two different approaches in robotics. Some approaches involve predicting which outcomes will succeed and taking them (i.e. solving a Bellman equation and then maximizing predicted utility) and others involve generating paths from a probability distribution, after conditioning on "evidence" that you were in fact successful in your goals.

Interestingly, though, there are derivations showing that "generate the most likely path, given that I succeeded" and "solve the Bellman equation to find the optimal path" end up being (mostly, up to important details) mathematically the same.
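For the curious, the two views look roughly like this (a standard control-as-inference sketch; the soft max vs. hard max is one of those "important details"):

```latex
% Planning view: solve the Bellman equation, then act greedily with respect to V*.
V^{*}(s) = \max_{a}\Big[\, r(s,a) + \gamma\, \mathbb{E}_{s' \sim p(\cdot\mid s,a)}\, V^{*}(s') \,\Big]
% Inference view: introduce an "optimality" variable O with
% p(O = 1 \mid \tau) \propto \exp\big(\sum_{t} r(s_t, a_t)\big),
% and plan by conditioning trajectories on success:
p(\tau \mid O = 1) \;\propto\; p(\tau)\, \exp\!\Big(\sum_{t} r(s_t, a_t)\Big)
% The resulting value recursion replaces the max with a log-sum-exp ("soft" max),
% which is why the two coincide only up to those details.
```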

author

I'm not surprised those work out to be the same, sounds like the path integral formulation of quantum mechanics! 😃


Reminds me of predictive coding: your nervous system moves your muscles and limbs (on this theory) by predicting it'll experience sensations consistent with them having moved. Lower levels of processing, trying to minimize the sudden spike in prediction error, work out the motor program.


I think Noah should temper his glee over battery technology improving significantly in the near future. The thermodynamics of batteries are well established: we know precisely the maximum amount of free energy available from any relevant electrochemical reaction. There are no magic bullets. Battery development is primarily about figuring out ways to create much higher, electrically connected surface areas for reaction. The lead-acid battery, for example, depends entirely on the porous oxide growth that makes Pb-acid batteries practical. Efforts to create similar structures for other battery chemistries depend on expensive-to-make nanostructures. This will not change. There is no foreseeable battery technology that can compensate for the intermittent production of electricity from wind and solar at scale.
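For reference, the ceiling being described follows from standard cell thermodynamics:

```latex
\Delta G^{\circ} \;=\; -\, n\, F\, E^{\circ}_{\text{cell}}
% n: electrons transferred per reaction, F: Faraday's constant, E_cell: standard cell potential.
% The theoretical specific energy of a chemistry is bounded by |\Delta G^{\circ}| divided by the
% reactant mass; engineering (surface area, nanostructures, packaging) can approach that bound,
% never exceed it.
```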

Dec 12, 2022 · Liked by Noah Smith

AI advancement will depend somewhat on the extension of Moore's Law. Some think Moore's Law is approaching a dead end. But ASML is working on a 2nm EUV lithography machine, as well as beginning work at picometer and femtometer scales in its R&D labs.


"This means we’re not just going to save the planet from destruction"

I feel that you're *massively* downplaying climate change threats: yes, the IEA revised its estimates upwards, but we are still lagging behind on both annual investment needs and annual emission reductions.

From the same IEA report: "For electricity, in order to reach the installed capacity needed to generate 69% of electricity from renewables by 2030, average annual net additions need to be *30% higher for solar PV and more than twice as high for wind*."

Clean energy investments should roughly triple from $1tn/annum today to $3tn every year for the next 30 years if we want to stay at 1.5 degrees (i.e. we won't). And that's assuming we can (i.e. smart grids are developed, and we have enough resources and space to build those RE power plants, etc.). (https://www.economist.com/leaders/2022/11/03/the-world-is-missing-its-lofty-climate-targets-time-for-some-realism)

👆 And that's just for energy generation, but investments are also lagging behind on batteries, hydrogen, carbon capture, biofuels, etc. (see: https://www.iea.org/reports/world-energy-investment-2022/overview-and-key-findings). We are also miles away from the energy efficiency progress we should be making if we wanted to see annual GDP/emissions decoupling of 7.5%.

Also, battery prices are going up because supply chains are not ready to match the growing demand, which itself is due to a lack of supply chain investment. Supply chain underdevelopment risks derailing the energy transition (https://www.mckinsey.com/industries/oil-and-gas/our-insights/could-supply-chain-issues-derail-the-energy-transition).

So the IEA revising its estimates upwards does not mean we're going to save the planet from destruction (and anyway, the planet will be fine in the long run; we might just not be there to enjoy the views). The reality is that we're still lagging behind on every existing metric ...

Same thing with fusion - I agree it is exciting and that we should be optimists. But even the most optimistic agree we won't have it in time for it to be meaningful at keeping temperatures below 1.5/2 degrees.


I completely disagree; I think he is massively overplaying it. There is no way that 2 degrees of warming (which is the most likely scenario even if clean energy trends stay on their current trajectory) is going to cause the destruction of the planet, let alone cause mass extinctions or great losses to humanity.


What data/info do you base your opinion on? I base mine on IPCC reports, which I understand represent the consensus view of the scientific community. They seem to believe a +2 degrees world is one with high risk of human and economic loss (and actually, you'll see that no one ever said that global warming would cause the destruction of the planet - just that of biodiversity).

https://www.ipcc.ch/report/ar6/wg2/downloads/report/IPCC_AR6_WGII_SummaryForPolicymakers.pdf

https://jancovici.com/en/climate-change/scientists/what-is-the-ipcc/

I also base my opinion on what is happening in the world today - mass animal extinctions are happening because of climate change, and people in developing countries are already dying from more frequent extreme events (which, again, the scientific community attributes to climate change). In developed countries, extreme heat is already impacting agriculture, forests, etc., causing economic harm.


The first line of your comment quotes the article saying exactly that: "This means we're not just going to save the planet from destruction". As far as biodiversity, you say mass animal extinctions are happening now; please name one animal that has become extinct due to climate change. I'm sure some animals will become extinct, as they always have. The report you linked says that at 2 degrees of warming, 3-18% of assessed species would face possible endangerment. It doesn't say how covering deserts with solar panels or bird-endangering windmills would change that number. For scientific discussion of IPCC reports I get good analysis from https://rogerpielkejr.substack.com.


1️⃣ I was quoting Noah's article :)

2️⃣ On biodiversity, here are some numbers to better understand the scale of the collapse of life on Earth:

👉 69%: Average drop in populations of mammals, birds, amphibians, reptiles, and fish since 1970 (WWF, Living Planet Report 2022).

👉 1 million: Number of species threatened with extinction, many within decades (IPBES Report, 2019).

👉 Tens to hundreds of times: The extent to which the current rate of global species extinction is higher compared to the average over the last 10 million years (IPBES Report, 2019).


Thanks for a great article.

When AlphaFold was released, I figured we’d have a long way to go before taking this technology for predicting protein folding — which is neat from a theoretical biology perspective — and adapting it for de novo protein design — which is a lot closer to the CAD-for-biology, build-anything-on-earth-with-a-cellular-factory vision of a biotech-driven future. But just this week, here’s a fantastic article on using a model like AlphaFold to design proteins! https://twitter.com/davejuergens/status/1601675072175239170?s=46&t=oF5U3APV379232QDZnUR1Q

It’s a remarkable time for technologists.


Nuclear fission provides about 20% of the electricity in the United States. Nuclear provides about 75% of France's electricity. In an otherwise wide-ranging essay, ignoring fission is a glaring omission. The intermittency problem with solar and wind is far more significant than is acknowledged. Notably, with California's high solar penetration, grid integration is accomplished by inefficiently and intermittently dispatching natural gas-fired generation, calling into question the net environmental benefits of solar. Despite tens of billions spent on California solar since 2010, U.S. EIA data show that natural gas consumption for California electricity generation has been climbing since 2019.

The proposed technical solution of batteries will not scale up to the required amounts for decades, if ever. At current prices, building batteries to cover just 24 hours of California's electricity consumption would require more than twice the entire State of California budget. Those batteries would be in a perpetual replacement cycle, since grid storage batteries last only 10-15 years at best. Finally, since California's transportation sector has the greatest emissions of any sector, those batteries would best be reserved for transportation instead of displacing natural gas for electricity generation.

In summary, "Split, don't emit." To learn more, please visit the Californians for Green Nuclear Power website at CGNP dot org.


We're waiting for the next breakthrough. Even in France, nuclear fission has had massive cost overruns. France hid them until around 2000, claiming the matter was a military secret, but everyone else's overruns have been all too public. If someone starts delivering modular nuclear plants, or some other nuclear technology, on time and on budget, then people will take notice.


A nice synthesis of AI advances and biotech was published last week. Researchers used a diffusion model similar to DALL-E 2 to generate custom protein structures based on "prompts" in the form of geometric constraints. This allows for the creation of bespoke proteins to, for instance, coordinate with metals, or bind to ligands or other proteins. Protein sequence/structure state space is huge, so this is a difficult problem and consequently a huge finding. The generative models were validated with wet-lab and structural experiments. This will be revolutionary in many ways, probably as important an advance as AlphaFold2 (indeed, it uses a similar model, RoseTTAFold, in its pipeline).


This is a "many-threaded" post that's provocative and useful.

One caution on techno-optimism, though. We should question what we describe as an "advance" and bear in mind it is not always without cost. We are always behind technological development by super-smart people at companies focused on innovation. We often need to play catch-up on monitoring and regulating some of technology's less desirable (read: frightening) long-tail effects.

Disruptions represented by technology can be industry-altering and remarkable; they can also be profoundly damaging--including to democratic institutions and our civil society. For example, who'd have thought Apple, which ran its famous 1984 Super Bowl commercial, would end up enabling Big Brother at the same time? (Feel like chilling your blood today? Read this: https://www.foreignaffairs.com/world/autocrat-in-your-iphone-mercenary-spyware-ronald-deibert).

I'm an Apple fan. I use Apple's products. (Who doesn't?!) But as Benedict Evans, whom you cite, has argued, when companies track you and seek to serve up content in line with your interests, Apple calls it an invasion of privacy. When Apple does the same thing, the company dubs it "personalization."

Ah, technology! And the interests to which it is put! Vaccine technologies for Covid-19 were life-saving, yes. But much of the technology used (mRNA) was already in place and had been for almost two decades. The speed of the vaccine rollout came largely from removing regulatory barriers and accelerating approvals at the FDA.

Technology can improve lives and can save them. We should be grateful for such advances. But cutting-edge technology is a blade. If it's badly used, we bleed.
