What if everyone is wrong about what AI does?
Both critics and supporters seem to think AI is a "human remover". What if they're both wrong?
There are two basic debates about AI. One is the “AI safety” or “X-risk” debate, which is about whether AI will turn into Skynet and kill us. But the most prominent and common debate is about AI taking jobs away from humans. What’s interesting about this debate is that practically everyone involved, from AI’s biggest boosters to its biggest critics, seems to agree on the basic premise — that the primary function of AI is as a direct replacement for human beings. In general, people only disagree about what our reaction to this basic fact should be. Should we slow down AI’s development intentionally? Should we implement a universal basic income? Should AI engineers and their shackled gods retreat behind towering fortress walls guarded by legions of autonomous drones, letting the rest of humanity suffer and die as GPT-278 sucks up all of the world’s energy for data centers?
OK, so I made the last one up.1 But this is the basic shape of the debate — everyone accepts that AI is about replacing humans, and then they just argue about how to deal with that. But what if the basic premise is wrong?
So far, most technologies that we’ve ever invented have ended up complementing human labor instead of cutting humans out of the equation. Power looms and steam shovels and plumbing replaced human muscle power for certain tasks, but they required humans to operate. Computers replaced human calculation, handwriting, etc. for certain tasks, but they required humans to operate. In the end, there were more things for humans to do after these technologies replaced some of the things that humans used to do. After centuries of automation, most people in rich countries still have jobs.
Pretty much everyone seems to be assuming that AI is fundamentally different from all the other technologies that have gone before. The underlying assumption here is that AI is much more of a substitute for human labor than a complement to it.
In fact, that might be true. Back in March, I wrote a post about what would happen if it is true, and if AI’s capabilities keep expanding rapidly into the far future:
To recap, the basic story of that post was that as long as there are AI-specific constraints in the economy — limitations on compute, or data, or on the amount of energy that governments allow AI to consume — then humans will still have high-paying jobs, even if AI does everything better than humans. This is because of the principle of comparative advantage — if there’s a limited “amount” of AI in the world, it can’t do infinite amounts of everything at once, no matter how good it is. So the AI will be allocated to the tasks where it produces the most value, and humans will do everything else (even if AI would do it better). And since in this scenario AI makes the world very very productive and rich, the human workers will be very well-paid.
In other words, even if AI really is a “human remover” in any specific job, it might end up not removing humans overall. Hooray!
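If you want to see that logic in miniature, here's a toy version in Python. Every number in it is invented, and the tasks are just placeholders; the point is only to show the mechanism, in which a scarce AI gets allocated to the tasks where it adds the most value and humans keep everything else.

```python
# A toy model of comparative advantage under an AI-specific constraint.
# All numbers are invented for illustration.

# Each task: (name, value if the AI does it, value if a human does it,
#             compute the AI would need to do it)
tasks = [
    ("drug discovery",   1000, 50, 10),
    ("chip design",       800, 60, 10),
    ("tax preparation",    90, 70, 10),
    ("customer support",   80, 60, 10),
]

AI_COMPUTE_BUDGET = 20  # the scarce, AI-specific resource

# Send the AI to the tasks where it adds the most value per unit of compute.
ranked = sorted(tasks, key=lambda t: (t[1] - t[2]) / t[3], reverse=True)

compute_left = AI_COMPUTE_BUDGET
for name, ai_value, human_value, compute_cost in ranked:
    if compute_left >= compute_cost:
        compute_left -= compute_cost
        print(f"{name:16s} -> AI    (value produced: {ai_value})")
    else:
        print(f"{name:16s} -> human (value produced: {human_value})")

# Even though the AI is better at every single task, the compute constraint
# means humans still end up doing tax preparation and customer support. And in
# a world made vastly richer by the AI, those human workers can be paid a lot.
```

Note that nothing in the sketch depends on humans being better at anything; the whole result comes from the constraint on how much AI there is to go around.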
But OK, that was just a fun thought experiment. Back here in reality, the premise that AI mostly substitutes for human labor might actually be wrong! AI might not be the "human remover" everyone thinks it is — it might be more like a machine tool for the human mind.
In fact, as I'll argue, the widespread conviction that AI's basic function is to replace human workers might be holding back the technology itself. AI entrepreneurs and engineers may be so focused on the vision of human replacement that they're ignoring much more productive — and much more lucrative — business opportunities. If so, there would be parallels to the way industrialists initially tried (and failed) to boost productivity by using electricity to power their factories in the early 20th century.
First, let’s look at the various ways that AI could affect human jobs.
Taking Acemoglu seriously (but not literally)
Daron Acemoglu, who is possibly the world's top economist right now, has been one of AI's fiercest critics. He and Simon Johnson wrote a whole book arguing that AI would increase inequality and that its development needs to be controlled. He wrote a paper in 2017 with Pascual Restrepo claiming to find evidence that industrial robots take away human jobs and enrich capitalists. He recently wrote a paper arguing that AI will only add a tiny amount to productivity growth. And so on. When Jeran Wittenstein recently interviewed Acemoglu for Bloomberg, the economist dismissed AI as vastly overhyped and predicted that "a lot of money is going to get wasted."
Many of Acemoglu’s claims on this topic are, in my opinion, pretty overdone. His book had highly questionable readings of history, as well as various weak arguments. His paper on the deleterious effects of industrial robots was contradicted by basically every follow-up study. And as Maxwell Tabarrok astutely noted, the only way Acemoglu was able to arrive at such pessimistic predictions for AI’s productivity effects was to more-or-less arbitrarily ignore key pieces of his own theoretical model.
But nevertheless, Acemoglu has done us a very important service by breaking down the different ways that AI could affect human jobs. Here's a list of the four things AI could do, paraphrased from Section 2.2 of Acemoglu's paper "The Simple Macroeconomics of AI":
AI could replace human jobs. This is the one everyone tends to focus on. Acemoglu singles out “simple writing, translation and classification tasks as well as somewhat more complex tasks related to customer service and information provision” as candidates for job elimination.
AI could make humans more productive at their current jobs. For example, GitHub Copilot helps people code. This might either create or destroy jobs, depending on demand.2
AI could improve existing automation. Acemoglu suggests examples like “IT security, automated control of inventories, and better automated quality control.” This would raise productivity without taking away jobs (since those tasks are already automated).
AI can create new tasks for humans to do. In a policy memo with David Autor3 and Simon Johnson, Acemoglu speculates on what some of these might be. They suggest that with the aid of AI, teachers could teach more subjects, and doctors could treat more health problems. They also suggest that “modern craft workers” could use AI to make a bunch of cool products, do a bunch of maintenance tasks, and so on. (As I’ll discuss later, it’s actually very hard to imagine what new tasks a technology might create, which is one big problem with discussions about new technologies.)
In fact, this taxonomy applies to every technology, including all the things we invented in the past. These are all movies we’ve seen before. Automatic telephone switching replaced telephone operators. Steam shovels made construction workers more productive. Power line insulation improved the (automatic) transmission of electric power. The internet created opportunities for digital content marketers. And so on.
The big weakness of Acemoglu’s paper, as Maxwell Tabarrok points out, is that Acemoglu just sort of arbitrarily assumes that #3 and #4 on this list won’t happen. Particularly egregious is Acemoglu’s hand-waving dismissal of the idea that AI will create new tasks for humans to do:
Potential economic gains from new tasks aren’t included in Acemoglu’s headline estimation of AI’s productivity impact either. This is strange since he has written a previous paper studying the creation of new tasks and their growth implications in the exact same model…
The justification he gives for ignoring this channel is weak…Instead of incorporating possible gains from new tasks, he only focuses on the “new bad tasks” that AI might create e.g producing misinformation and targeted ads…There is zero argument or evidence given for why…we should expect new bad tasks to outnumber and outweigh new good ones. [Acemoglu] doesn’t end up including gains or losses from new tasks in his final count of productivity effects, but this process of ignoring possible gains from new good tasks and making large empirical assumptions to get a negative effect from new bad tasks exemplifies a pattern of motivated reasoning that is repeated throughout the paper.
Tabarrok is absolutely right. Acemoglu just hand-waves away the idea that AI could create new useful things for humans to do. In doing so, he basically ends up assuming his conclusion — that AI will raise inequality and have only minor productivity effects.
Why Acemoglu chooses to do this is a topic for another day. Instead, I want to focus on another possibility — that AI entrepreneurs and engineers are also ignoring #4 on the list above, to the detriment of the AI industry as a whole.
I talk to a lot of AI engineers, product managers, and entrepreneurs. Last night I went to a house party and talked about AI and jobs. This morning I went to brunch and talked about AI and jobs. I hear a lot about this subject. And it’s always, always about “replacing” humans, “eliminating” human jobs, etc. I can’t recall ever hearing anyone in the industry talk about AI giving humans new jobs to do.
Why not? One possibility is that it’s cultural — maybe AI engineers and entrepreneurs are a bunch of elitist techbro types who just want to soak the working class and cut labor costs, etc. etc. This is Acemoglu and Johnson’s hypothesis in their book Power and Progress. But I don’t think that’s it — at least, not in most cases.
I think the real issue here is that replacing humans at existing tasks is very easy to imagine, whereas new tasks that humans could do with the aid of new technology are very hard to imagine. It’s just a heck of a lot easier to think of a task that people already do, than to think of a task that no one has ever done before. Pure automation — having a machine do exactly what a human used to do — is a simple, obvious, relatively unimaginative use for technology. Creating new tasks, on the other hand, usually requires creating whole new business models, which is very hard.
In fact, I think history shows us that this problem is pretty common. My favorite anecdote is how electricity was used for manufacturing.
The case of electricity in factories
Some applications of electricity are pretty simple and straightforward — for example, you can use an electric light bulb to light a building at night instead of candles. But it’s not immediately clear how to use electricity to power a factory.
In the late 1980s, people were wondering why the computer revolution hadn’t yet turbocharged productivity growth. In 1987, Robert Solow famously said: “You can see the computer age everywhere but in the productivity statistics.” In response, some economists started thinking about how long it takes for technological innovation to show up in aggregate productivity.
Paul A. David wrote a very interesting pair of papers in 1989 and 1990 arguing that a big lag between innovation and productivity is the norm. He likened computers to electric power in factories, which became common in the 1920s. In fact, electric power had been available for decades. So why did it take so long to be adopted?
David argues that it was because electrifying factories required changing the whole way that factories operate. Tim Harford has an excellent and highly readable summary of David’s argument, with some nice historical pictures. Basically, at first, industrialists just tried to replace their old steam boilers with electric dynamos, but this didn’t actually save them money, and they went back to steam.4 It was only once some geniuses thought of using electricity to change the whole shape of a factory that they were off to the races. Harford writes:
In 1881, Edison built electricity generating stations…Yet by 1900, less than 5% of mechanical drive power in American factories was coming from electric motors. The age of steam lingered…Why? Because to take advantage of electricity, factory owners had to think in a very different way. They could, of course, use an electric motor in the same way as they used steam engines. It would slot right into their old systems.
But electric motors could do much more. Electricity allowed power to be delivered exactly where and when it was needed…[A] factory could contain several smaller [electric] motors, each driving a small drive shaft…As the technology developed, every workbench could have its own machine tool with its own little electric motor…Steam-powered factories had to be arranged on the logic of the driveshaft. Electricity meant you could organise factories on the logic of a production line.
There are many advantages of having a factory made up of many little independent workstations instead of one giant assembly of gears. The factory can be much lighter and cheaper to build, and you can spread it out more. You can save power by running machines only when they’re needed. And perhaps most importantly, you can concentrate production resources dynamically wherever there’s a bottleneck, spending less time and labor on easier tasks and more on harder ones, to keep production moving.
It wasn’t until the 1920s that entrepreneurs and engineers finally started to figure this out. That was more than 30 years after electric power was introduced! At first they could only think to replace the old power source with the new one, in a 1-for-1 sort of way — an obvious, simple, but ultimately not very productive idea. It took a long time and a lot of genius to realize that electricity enabled whole new kinds of production.
David argues that this is the normal way things happen, writing that “overlaying of one technical system upon a preexisting stratum is not unusual during historical transitions from one technological paradigm to the next.” The easy, obvious, unimaginative idea is always to just slot the new technology into the old production paradigm. Inventing a new paradigm, built from the ground up around the new technology, is hard — but when it happens, productivity really zooms.
And crucially, the transition from steam factories to electric factories created lots of new tasks for humans to do. It enabled lots of new machine tools, which needed human operators and humans to maintain them. It enabled more systematic quality control and production planning, which required humans too. Logistics was transformed as well. In contrast, simply ripping out steam boilers and putting in electric dynamos wouldn’t really change the set of tasks humans did — even if it had yielded gains in energy efficiency.
The general lesson here is that new business models are what drive both the productivity gains and the job creation from new technologies. Occasionally, new things just directly replace old things 1-for-1, with little change in how you use them — for example, word processing replacing typewriters. But often, you have to build new business models around the new technology. In fact, the more general-purpose a technology is — electricity, computers, the internet — the more scope there probably is for building new business models around it.
Lessons for the AI age
Acemoglu basically assumes that people aren’t going to build new business models around AI (except for scammy stuff like slop and misinformation). This is why he thinks that AI will have limited benefits for productivity. It’s also why he predicts that the threat of mass automation will be limited, at least over the next decade:
By his calculation, only a small percent of all jobs — a mere 5% — is ripe to be taken over, or at least heavily aided, by AI over the next decade. Good news for workers, true, but very bad for the companies sinking billions into the technology expecting it to drive a surge in productivity.
“A lot of money is going to get wasted,” says Acemoglu. “You’re not going to get an economic revolution out of that 5%.”
I don’t know exactly where Acemoglu gets his 5% number. But it’s consistent with the idea that replacing human employees with AI, in a straightforward 1-to-1 manner, is unlikely to yield big productivity gains — just like replacing steam boilers with electric dynamos in factories.
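For what it's worth, the structure of this kind of estimate is simple. Acemoglu's paper leans on Hulten's theorem, under which the aggregate productivity gain is roughly the cost-weighted share of tasks that AI affects, multiplied by the average cost savings on those tasks. Here's the back-of-envelope version; treating the 5% as a task share and assuming a 25% cost saving are my own simplifications, not Acemoglu's actual inputs.

```python
# Back-of-envelope, Hulten-style arithmetic for AI's aggregate productivity effect.
# The inputs below are placeholders, not Acemoglu's actual numbers.

exposed_task_share = 0.05  # share of tasks AI can take over or heavily aid
avg_cost_savings   = 0.25  # assumed average cost reduction on those tasks

tfp_gain = exposed_task_share * avg_cost_savings
print(f"Implied total productivity gain: {tfp_gain:.2%}")
# A bit over 1% in total. Spread over a decade, that's on the order of 0.1% per
# year, which is why a small exposure share implies "no economic revolution".
```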
There are certainly a few occupations where AI can just take over for a human — call centers being one of the most obvious. But so far, such easy wins are few and far between. AI revenue is expanding at a rapid clip, but — as analysts like David Cahn have pointed out — not yet fast enough to justify the awesome amount of capital being spent. OpenAI, which dominates all other AI startups in terms of revenue, has annualized revenue of about $5 billion — a small fraction of the amount of money companies are planning to spend on AI in the next few years. Big companies like Microsoft say their AI businesses will bring in $10B a year, but even several companies earning that much would be modest relative to the hundreds of billions or trillions in planned investment spending.
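To put that gap in perspective, here's the crude arithmetic. The $5 billion figure is from the paragraph above, and the $1 trillion buildout number comes from Covello's estimate quoted below; the comparison is deliberately rough.

```python
# Rough arithmetic on the gap between current AI revenue and planned AI capex.
annual_ai_revenue = 5e9    # OpenAI's annualized revenue, per the text above
assumed_buildout  = 1e12   # "over $1tn" of infrastructure, per Covello's estimate

years_of_revenue = assumed_buildout / annual_ai_revenue
print(f"It would take {years_of_revenue:.0f} years of that revenue (not profit) "
      f"to match a $1 trillion buildout.")
# 200 years. Either revenue grows by orders of magnitude, or the use cases change.
```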
Skeptics like Goldman Sachs’ Jim Covello have recently argued that there are few obvious use cases for the technology yet — basically, that LLMs, in particular, haven’t found product-market fit:
We estimate that the AI infrastructure buildout will cost over $1tn in the next several years alone…What $1tn problem will AI solve? Replacing low wage jobs with tremendously costly technology is basically the polar opposite of the prior technology transitions I’ve witnessed in my thirty years of closely following the tech industry…
Salesforce, where AI spend is substantial, recently suffered the biggest daily decline in its stock price since the mid-2000s after its Q2 results showed little revenue boost despite this spend…[W]e’ve found that AI can update historical data in our company models more quickly than doing so manually, but at six times the cost.
McDonald’s has a similar story, abandoning its AI drive-thru experiment after the system made too many mistakes.
Covello isn’t describing the AI future — he’s describing the present. It basically sounds like companies are trying to do the steam-boiler thing — they’re trying to simply replace human workers with LLMs, and this usually doesn’t work. AI as a “human remover” just doesn’t seem to be economical yet, except in a few niches.
Note the word "yet". Pretty much every AI engineer and entrepreneur I talk to believes that this is a temporary state of affairs, and that further scaling will cause LLMs to reach a threshold where suddenly you can replace most human employees with AI on a 1-for-1 basis — where Acemoglu's 5% very rapidly goes to 95%. For example, in my recent podcast interview with Dario Amodei of Anthropic, he argued that AI will get good enough that the "dumbass use cases" — direct human replacement — will become feasible across the board.
If this is true, then maybe AI entrepreneurs don’t have to think about how to be as clever as the industrialists who reorganized their factories around electricity. Maybe they can just scale, scale, scale all the way to “AGI”, and all the economic problems will solve themselves.5
Maybe. Or maybe not. AI researchers seem to treat “hallucinations” — a euphemism for “AI lying” — as statistical errors that will be corrected as the models get more powerful. But it’s possible that hallucinations are intrinsic to LLMs — a form of bias instead of a form of variance. OpenAI’s new o1 model can solve some tricky math problems, but it also hallucinates much more than older models. It’s quite possible that when it comes to AI, power and truthfulness just aren’t correlated.
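Here's a toy illustration of the bias-versus-variance distinction, using simulated numbers rather than anything about how real models actually behave: variance-type errors average away as you add scale, while bias-type errors converge to a fixed, nonzero level.

```python
import random

# Illustrative analogy only: "variance" errors shrink with scale, "bias" errors
# do not. These are simulated numbers, not measurements of any real model.

random.seed(0)
TRUE_ANSWER = 1.0

def average_guess(n_samples, bias):
    """Average of n noisy guesses, each offset by a fixed bias."""
    guesses = [TRUE_ANSWER + bias + random.gauss(0, 1.0) for _ in range(n_samples)]
    return sum(guesses) / n_samples

for scale in (10, 1_000, 100_000):
    variance_only = average_guess(scale, bias=0.0)
    with_bias     = average_guess(scale, bias=0.5)
    print(f"scale={scale:>7}:  variance-only error={abs(variance_only - TRUE_ANSWER):.3f}"
          f"  biased error={abs(with_bias - TRUE_ANSWER):.3f}")

# The first error heads toward zero as scale grows; the second converges to 0.5.
# If hallucinations are more like the second kind, scaling alone won't fix them.
```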
If that’s the case, then there are intrinsic differences in what humans and AIs do well. AI researchers seem very enamored of the idea that intelligence is a generalized, universal quantity, but I’m not so sure that’s true. It seems quite possible that the AI we’ve created is a truly alien intelligence — or a Lovecraftian one.
If that’s true, it raises the chance that humans and AI will complement one another in the workplace, rather than being perfect substitutes. That’s good news for “task creation” — it means that there will be lots of new jobs working alongside AI. But that task creation will require ingenious business models, analogous to factory electrification. The “dumbass use cases” won’t be enough — we’ll have to actually reorganize production around AI in ways we can’t even conceive of yet.
I do worry that AI founders and engineers are spending insufficient time thinking about those new business models. They’re all so focused on the idea that we’re going to “scale to AGI” and replace all the human workers that they might be neglecting far more transformative — and lucrative — possibilities. Time will tell, I suppose; perhaps there will have to be an AI investment bust in order to prompt people to search for new, more creative ideas about what to do with the technology.
1. It might have been slightly inspired by Jack Vance.
2. For example, GitHub Copilot helps software engineers code. If people want a lot more software — if demand is elastic — then that will mean coders just get paid more without losing their jobs. But if people don't need much more software, then companies will retain some percent of their newly productive software engineer workforce and lay off the rest, because now they can do the same job with fewer people. The ones who remain will get paid more than before.
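Here's that footnote's logic as a toy calculation. The 30% productivity boost and the demand elasticities are made-up numbers; the only point is that the sign of the employment effect flips at an elasticity of one.

```python
# Toy version of footnote 2: whether a productivity boost creates or destroys
# jobs depends on the elasticity of demand. All numbers are made up.

def employment_ratio(productivity_gain, demand_elasticity):
    """New headcount divided by old headcount in a constant-elasticity setup:
    output per coder rises by `productivity_gain`, prices fall proportionally,
    demand expands according to its elasticity, and headcount adjusts."""
    output_ratio = (1 + productivity_gain) ** demand_elasticity  # demand response to cheaper software
    return output_ratio / (1 + productivity_gain)                # required headcount

for elasticity in (0.5, 1.0, 2.0):
    ratio = employment_ratio(productivity_gain=0.30, demand_elasticity=elasticity)
    print(f"elasticity={elasticity}: employment changes by {ratio - 1:+.1%}")

# elasticity < 1: fewer coders are needed; elasticity > 1: more coders get hired.
```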
3. A good rule of thumb, in my experience, is that anything with David Autor's name on it will be serious and of high quality.
4. Which makes sense, if you think about the physics. Burning coal to turn gears is more efficient than burning coal to generate electric current to turn those same gears.
5. In fact, AI engineers and founders I talk to increasingly define "AGI" as a system that's able to replace most human white-collar workers on a 1-for-1 basis, rather than any sort of psychological or philosophical definition of "general intelligence".
I've realized my fear is not super-intelligent AI, but AI that is 20% worse than the typical knowledge worker and 90% cheaper. This could mean interacting with a ton of crappy AIs because companies will rush to save money with cheap "good enough" AI substitutes.
The economic basis of this fear seems solid, and it is pretty technically feasible already.
I will not go as far as Acemoglu does, but I share some of his views. My main take is that there are currently unrealistic expectations about what that technology can do, which may lead to its demise. Basically, we may reach a level where the productivity gains of implementing advanced algorithms are simply too low to justify the costs of maintaining and developing the technology.
I feel that software engineers are unwillingly putting themselves in a trap by shooting for the stars, rather than betting on improving productivity first and foremost, i.e. by creating tools that complement humans rather than attempting to replace them. I also believe labelling advanced algorithms as "AI" is a very bad idea, as it reinforces those expectations among investors, who have far less of an understanding of what the technology can and cannot do.
In essence, the current technology is very good at automating predictable tasks, but very poor at improvisation and quick adaptation under unforeseen circumstances. It repeats the pattern we have seen in supercomputers, which can beat humans by sheer brute strength, not through original thought.
Thus, my expectation is that the technology will not be used appropriately, and as a result, it will lead to a lot of wasted money. This will not be because of an inherent flaw, but due to unrealistic expectations on the investor side. It doesn't mean the technology is bad; it just seems that many will use it in an entirely wrong way. I consider 3D cinema a good analogy, as it was expected to turn the way movies are shot on its head, and in the end, it turned out that the benefits were far less than expected initially.
Also, I think it is time we stop calling those advanced algorithms "AI" and then try to come up with some contrived terms for actual AI. There is a great term for what we currently have, coming from a popular sci-fi video game series - Mass Effect. There, they call advanced algorithms "Virtual Intelligence", or simply VI, which is a catchy enough term, by the way. Essentially, ChatGPT is nothing more than an Avina, and certainly not EDI (those who have played the games will get the reference).