49 Comments
Bradley Wolfenbarger

I always feel like I'm taking crazy pills whenever I hear people talk about AI job loss. To date, across several different platforms (Reddit, YouTube, Facebook, etc.), no one has been able to articulate to me exactly why they are concerned about jobs being lost to AI, but aren't concerned about jobs lost due to:

A.) Vehicles becoming more reliable (if vehicles don't break down as often, then we need fewer tow truck drivers and mechanics.)

B.) Cancer being cured (cancer doctors would lose their jobs.)

C.) Washing machines (my great-grandmother made money by washing people's clothes for them on a washboard by hand.)

D.) Shovels (if we forced people to dig holes with their bare hands, there would be more jobs digging holes. If people use shovels, they're more efficient, so we need fewer ditch diggers.)

E.) Calculators (there used to be a profession known as a "computer" that worked out tedious arithmetic by hand on paper. Electronic calculators mean that profession no longer exists.)

F.) Irreligiosity (don't need as many priests and priestesses performing daily rituals if people no longer believe that the ritual will bring rain)

G.) Crime declining (if crime goes down, we don't need as many forensic lab technicians doing DNA tests or bullet analysis and we don't need as many prison guards.)

I can't find anyone worried about all the jobs lost due to shovels. Is the whole world just gaslighting me by claiming to be worried about jobs being lost to AI? Wouldn't job loss be the same whether it's a computer program driving it vs a shovel or a social change?

Seneca Plutarchus

If someone cured cancer, oncologists could go retrain as other doctors, or just practice general medicine. If AI is better than every doctor at anything involving knowledge, there will be nothing to retrain as close to what they were doing before. That’s the threat of AI.

It would be as if the car replaced the horse drawn carriage but all the new car related jobs were automated at the same time, so the buggy whip manufacturer, drivers and service people had nowhere to go.

M....

LLMs are different from your examples above in a few material respects:

1. Rate of adoption. When the rate is slow, people have more time to retrain and job search. The rate of LLM adoption is very high, likely leading to more job market disruption.

2. Parallelization of tool use. I can only use one shovel or calculator at a time, whereas I can spin up as many LLM agents as I want to work in parallel.

Bradley Wolfenbarger

1.) F and G could be instantaneous, and C was pretty close as well; home washing machines had quick market penetration. Although I can see this point, the solution seems to be to help displaced workers, not to ban shovels.

2.) You can spin up as many calculators as you want as well. Open up multiple instances of Microsoft Excel. Boom. There you go. Multiple calculators at once. Similarly, there are fully automated machines that move dirt that are mathematically equivalent to tens of thousands of shovels running at once. You can have one man using a very sophisticated excavator that does the job of a thousand men with shovels. Are you worried about the excavator eliminating the jobs of people using shovels?

Worley

As M... notes, the rate of adoption of LLMs is high. But IMO the biggest difference is that none of your examples primarily threaten the jobs of people whose jobs are primarily cognition. That class, largely the same as the college-educated class, has grown rapidly and moved up the class ladder over the past century. They're also the people who write for all the media, including social media.

Compare with the original industrialization: the home production of textiles. Now imagine that the only people who write are the home producers of textiles, and you are talking to them about spinning mills.

Fallingknife

Jobs have already been lost to everything on the list except for B and maybe G.

Jürgen Boß

Humans' comparative advantage is that they run on a different chipset: the cortical column. Essentially an extremely energy-efficient quantum computer.

It has downsides - it's not that precise and it can be somewhat chaotic.

But different always, always means more redundancy.

And an LLM operating on an internet that increasingly consists of AI slop had better have redundancy in mind. LLMs can write scientific papers, they can peer-review scientific papers, and those papers then end up in training data that will ultimately create new scientific papers.

The LLMs capable of independent reasoning are smart enough to understand where this potentially leads. If there is even a tiny little bit of hallucination in that loop, that way lies madness.

Human scientists at least will be needed for a very long time. They don't have to be better than AI, as long as they are different. Just being different makes them a safeguard, a circuit breaker. And they definitely are different.

Fallingknife

AI art is already indistinguishable from human art: https://www.astralcodexten.com/p/how-did-you-do-on-the-ai-art-turing

So why should I believe that AI science will always be different than human science?

Jürgen Boß

"Always different" is not the point.

In fact, perfect AI science and perfect human science are indistinguishable, because they refer to real facts, and the shape of the facts determines the shape of the science.

The difference happens when mistakes happen. The most likely point of failure is different, and that provides redundancy.

I hope you do not claim that AI does not make mistakes and does not hallucinate. I believe every reader here knows otherwise.

Human faults, on the other hand, are quite well known to us, are they not? The same proclivities and superstitions as always. As a species we do not seem to learn; as individuals we can be quite aware of where human weakness lies.

earl king

As the Buddhist farmers said, "We shall see."

What I find curious is that after 4 years of mind-boggling amounts of spending, and 4th and 5th generations or iterations of various AI LLMs, I have yet to see, hear, or read about one "new" job that AI requires humans to do. One would imagine that at least a single job for humans would be identified beyond building data centers.

In fact, as we rush toward AGI and, perhaps 10 to 20 years later, humanoid robots with opposable thumbs, the human manual labor that companies still require will likely be replaced. So now we're left with jobs that require a human touch, such as hooker, massage therapist, or bookie. Ok, I jest, but only a bit.

Creative jobs may be the only thing left for humans to do.

David Khoo

You completely missed the point of this entire essay. You're still talking about competitive advantage. Read more carefully and think.

earl king

I didn't miss it; I don't believe it. Drawing a dog on the back of a matchbook will not replace jobs taken by AI.

David Khoo

It's fine to not believe it, but your argument should at least address Noah's arguments. Otherwise you're just saying "nuh uh".

Let's start with this: There is nothing in the original article that says that AI has to create "new" jobs for humans to stay employed. That has historically happened, but is completely unnecessary for Noah's argument. So your objection that new jobs haven't been created is irrelevant.

earl king

You are staking out an untenable position. We are very close to having self-driving trucks. What do you intend to do with 3 million truckers who find themselves unemployed? Is your argument that we don't need new jobs, that they can go to work as dog walkers?

David Khoo

Retrain them to do whatever jobs humans have comparative advantage in, same as always. They can be "new" jobs, or they could be old jobs, but the argument in the article -- which you still aren't addressing -- is that there will always be something that it's not worth AI's time to do because there is something else AI thrashes humans even harder at.
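To make that concrete, here is a toy Ricardo-style calculation (the productivities, task names, and hours below are invented purely for illustration; none of these numbers come from the article):

```python
# Toy comparative advantage illustration. Every number is made up.
# The AI is assumed better at BOTH tasks, but its edge is far larger
# on reports (100x the human) than on calls (2x the human).
AI_REPORTS_PER_HR, AI_CALLS_PER_HR = 100.0, 10.0
HU_REPORTS_PER_HR, HU_CALLS_PER_HR = 1.0, 5.0
AI_HOURS = HUMAN_HOURS = 1000.0  # both kinds of labor are finite

def total_output(ai_hours_on_calls, human_hours_on_calls):
    """Return (reports, calls) produced; leftover hours on each side go to reports."""
    reports = ((AI_HOURS - ai_hours_on_calls) * AI_REPORTS_PER_HR
               + (HUMAN_HOURS - human_hours_on_calls) * HU_REPORTS_PER_HR)
    calls = (ai_hours_on_calls * AI_CALLS_PER_HR
             + human_hours_on_calls * HU_CALLS_PER_HR)
    return reports, calls

# AI handles the calls itself, humans help with reports:
print(total_output(ai_hours_on_calls=500, human_hours_on_calls=0))    # -> (51000.0, 5000.0)
# AI specializes in reports, humans take all the calls:
print(total_output(ai_hours_on_calls=0, human_hours_on_calls=1000))   # -> (100000.0, 5000.0)
```

Same number of calls either way, but nearly twice the reports when humans take the task where their disadvantage is smallest. That's the "not worth the AI's time" point, and it survives the AI getting arbitrarily good, as long as compute is finite.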

And yes, the original article does say that constant retraining may be an issue if what humans have comparative advantage in changes too fast. But I would say that the solution to that is to provide more welfare during retraining. Jobs do two things -- they are an intrinsic source of dignity and meaning, and they are one way society assigns resources. As long as people are actively retraining, I think the dignity can be there, more or less. So we just have to fix the resource assignment part, which can be done with welfare.

earl king

Yes, because we did such a great job retraining during the China Shock. A comparative advantage will not exist for every human, if that is your argument. Perhaps it will be cheaper for Macy's to pay a human salesperson in the fragrance department, but that won't replace truckers.

Now you are falling back on the claim that they'll be retrained for current jobs. Maybe, but we are talking about millions of jobs. How many human salespeople will Macy's need? Dozens more, or the millions who have been replaced? Sorry, I just don't buy it. Glad for you that you do. Hope is a good emotion.

Milton Soong

You think hookers will still have a job? (I bet sex bots are gonna be the driver of bot technology, just as porn has driven every other tech…)

Worley

Your analysis is completely reasonable, but carry it through: You get an economy with a GDP/human (in terms of today's goods and services) of some millions of dollars per year (because lots of robots make all that stuff). The only human labor is things like restaurant waiters who are paid specifically for being humans. So a middle-class individual can easily buy a new high-end sports car every day if they want, because it's all robot labor. But the *real* status consumption item is eating at a non-automated restaurant, where you have to pay $100,000 for the privilege of having a human place the plate in front of you.

earl king

I'm still not sure how we get money from the businesses that no longer have payroll and benefits into people's hands to pay rent and buy food. Obviously, taxes seem the most logical way, but what is it going to cost the government to be a government?

Worley

It's difficult to guess what the future economy will look like. But every time work has been automated, whoever has been on the winning end of it has discovered *new* needs that they had to pay people to satisfy, and the median income went up.

In 1000 AD, maybe 95% of the population grew food. Now it's 2%. You could sort of explain to a medieval peasant what people now do to make a living -- the "entertainment industry" resembles medieval theatrical troupes, but it would be hard to get him to grasp a world in which everybody watches 4 hours of entertainment on TV every day.

The invention of the spinning and weaving mills demolished the household economies of early-modern England, and the resulting mills were Dickensian (Dickens himself worked in a factory as a boy), but it increased the incomes of average people to the point that women could reliably eat enough to get pregnant, resulting in a population explosion.

Similarly, over the past 50 years, manufacturing in the US has been decimated, and a great deal of both physical and mental labor has been automated, but median incomes have consistently risen. We've gotten to the point that an accurate rough estimate of the cost of any good or service can be made by counting the number of labor-hours needed to do it.

Now exactly how the money gets recycled from the people who immediately benefit from AI (or other automation) is hard to guess. But likely the owner of the robot factory who makes millions considers it necessary to display his wealth by eating in restaurants or something, and he pays a great deal for it to show he has millions. It's hard to guess exactly how it will work out. I've read that in early-modern England, even if you owned a lot of land, you didn't cut a figure as a great lord unless you employed 100 footmen, each six feet tall. I'm sure those footmen didn't do a great deal of productive labor, but the lord considered it necessary to pay them.

rahul razdan

Yup... I think the other factor is that the average size of an enterprise falls... which leads to mass diversification. Previously, some tasks did not make economic sense to solve, but with the fall in the cost of delivery, solving them becomes much more viable. Healthcare seems ripe for such a revolution.

Max Ischenko

👍 the kind of posts I’m paying for!

Andrew Rose

> some sort of laws to make sure that AI never eats up too much of the energy and land that humans need to live.

Georgism.

Charge the value of natural resources for their use. Compensate the public for consumption of the natural opportunities they have equal rights to.

The answer just keeps being Georgism, because Georgism is simply the correct identification of the underlying root of poverty despite progress, and of its basic solution.

Greg G

All of this comparative advantage talk seems somewhat beside the point to me. Here's a thought experiment. Let's say most white-collar jobs get replaced by an AI using, say, $5k of tokens per year. And let's say there's still some comparative advantage to humans that makes them twice as valuable as an AI in an AI-default job. So employing that human is worth $10k to a company. Do we expect humans to take those jobs and see them as high-paying?

Granted, I expect there to be other jobs like glad-handing customers, leading companies, and being a social diva that are much more highly compensated. But these are the exception to the rule, so I'm basically ignoring them for now.

It seems like the existence of comparative advantage tells us relatively little about the job market if we have AGI-style capabilities. The job market could be crushed regardless.
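To spell out the arithmetic (the $5k and the 2x multiplier are the hypotheticals above, not real data):

```python
# Back-of-envelope wage ceiling implied by the thought experiment above.
# Both inputs are the hypothetical numbers from this comment, not real data.
ai_cost_per_year = 5_000    # dollars of tokens for an AI to do the job
human_multiplier = 2.0      # human assumed twice as valuable on this job

# A profit-maximizing firm pays a human at most what the equivalent AI output
# would cost it; beyond that, it just buys more tokens instead.
wage_ceiling = ai_cost_per_year * human_multiplier
print(f"Maximum wage the firm will pay: ${wage_ceiling:,.0f}/year")  # $10,000/year
```

Comparative advantage guarantees there is something for the human to do; it says nothing about whether the market-clearing wage is one anyone could live on.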

Bob Pendleton

I believe that very few people understand the situation as it actually exists. Ray Kurzweil has done extensive work on the history of the rate of growth of computing operations/second versus price. You can see the history of this measure over the last 125 years in the chart at:

https://www.reddit.com/r/singularity/comments/1h55mxm/moores_law_update/?tl=hi-latn

Through normal turnover of equipment in a data center, coupled with the continuous improvement of performance per chip package, I expect the average data center to double its compute performance fairly quickly. I can only give a rough estimate because I don't have the required data, but my SWAG is that a data center can increase performance by a factor of 2 in somewhere between 1.4 (roughly the square root of 2) and 5 years. This growth will use the same space and less electricity. The only thing slowing down the upgrade cycle is cost.

Thus, things are changing faster than most of us, including me, can imagine.
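To see what that range of doubling times implies over a decade (the 1.4- and 5-year figures are just my SWAG above, not measured data):

```python
# Compound compute growth implied by a doubling time. The doubling times are
# the rough guesses from this comment, not measured data.
def growth_factor(years, doubling_time_years):
    """How many times more compute after `years` if it doubles every `doubling_time_years`."""
    return 2.0 ** (years / doubling_time_years)

for t_double in (1.4, 5.0):
    print(f"Doubling every {t_double} years -> ~{growth_factor(10, t_double):.0f}x after 10 years")
# Doubling every 1.4 years -> ~141x after 10 years
# Doubling every 5.0 years -> ~4x after 10 years
```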

TIm Jennings

Here's a question for those of you who know the AI industry well:

What would be the minimum size of a data center needed for an AI entity to store and use all of human knowledge and to continually improve itself?

To illustrate why I'm asking this, let's say the answer to my first question is a data center in a building of about 100,000 sf. Let's further assume that this data center can meet all of the tasking demands of 100 select scientists, engineers, researchers, etc., and that any more tasking jobs than these experts can generate would require building another data center. But if this one data center knew everything about everything, then this one AI entity, guided by these elite scientists, could theoretically come up with the cure for cancer, a solution for all kinds of materials science, thermodynamics, and math problems, etc. (I think this is how the generations of supercomputers were used, to support specific demands for research and engineering.)

If my illustration is more or less correct, then here's my second question. Isn't the need for so many data centers a direct result of the choice the AI industry made to allow everyone to ask AI any silly question right from the start, thus requiring enormous computing power? Are we building all these data centers more for dealing with stupid stuff than for solving hard, serious problems?

Or do I not understand at all how this AI thing works?

Worley

> And humans have always had a tough time retraining.

I've watched the commentary about retraining over the years, and rebuilding communities that have had dramatic upsets. The consistent rule is that once past youth, humans can't really be redeployed. All successful "regenerations" involve population turnover.

Nicholas R Karp

Friction, as you point out, is key. I often shop at Amazon because it is so easy, even though the dollar price might be lower elsewhere: the effort to find, compare, set up an account elsewhere, etc. just isn't worth it for incremental savings on any given product.

It is easy to imagine humans having a comparative advantage which is vastly outweighed by the complexities and risks of dealing with humans. Employment and HR regulation, payroll taxes, mandated benefits, unions, redundancy charges, emotional needs, lawsuits, etc. already add huge fixed costs to employment. In a future world with scarce and uncertain jobs, government's first response will be to impose even greater burdens, in poorly designed policies to "protect" employees.

Organizations will be increasingly highly motivated to eliminate ALL employees to avoid such overhead, even where AI is less efficient than humans per marginal unit of output.
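A purely illustrative calculation of how that flip can happen (the per-unit costs, volume, and overhead figure below are all invented): even when the human is cheaper per unit of output, the fixed overhead of employing anyone at all can make the AI the cheaper option.

```python
# Illustrative only: fixed employment overhead can outweigh a human's
# per-unit cost advantage. All numbers are invented for the example.
def annual_cost(units, cost_per_unit, fixed_overhead):
    """Total yearly cost of producing `units` of output."""
    return fixed_overhead + units * cost_per_unit

UNITS = 20_000
human_cost = annual_cost(UNITS, cost_per_unit=1.00, fixed_overhead=30_000)  # HR, benefits, legal risk...
ai_cost    = annual_cost(UNITS, cost_per_unit=1.50, fixed_overhead=0)       # worse per unit, no overhead

print(f"Human: ${human_cost:,.0f}   AI: ${ai_cost:,.0f}")  # Human: $50,000   AI: $30,000
```

In this toy setup the human only wins once volume is high enough to amortize the overhead (above 60,000 units a year), which is exactly the asymmetry that taxes on petaflops and robots would try to level.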

New taxes on petaflops and robots may be needed to at least impose equal burdens on AI -- and allow underlying relative advantages to come into play.

Ming Dynasty

Doesn't Elon Musk's venture to build Petawatt data centers in space potentially eliminate any conceivable constraints on AI? Both energy and land.

Worley

Though of course launching the satellites is resource-intensive.

IMO the most interesting part of Musk's space computers is the "downside": With an ordinary data center, if you can't sell the compute at a good price, you can turn it off to save money on electricity and cannibalize and resell the chips. That is, there's a considerable opportunity cost for just running it. But with space computers, there's no opportunity cost for keeping it running other than the data links and the orbit management. So if the market for compute gets overbuilt and there's a shakeout, the space computers will be the "last company standing" and will sell compute for a very low price.

Ming Dynasty

I think that’s his grand plan. That is, to become the universal low-cost intelligence provider, like a utility.

Matt Darling

Haha, I'm glad you reposted this. (I was reading some bad takes and almost started writing an article with the same thesis.)

Brian

What if the new economy created by AI is so complex that humans aren’t capable of contributing?

Monkeys and apes don’t contribute much to our modern economy.

Spugpow

Monkeys are trained to pick coconuts in some Southeast Asian countries. Arguably, there’s plenty they could do for us if they could follow directions.

Ron Masters

This also assumes that AI tasks are chosen to maximize profit. Humans have limited resources, but choose to use some of them without economic benefit, e.g., music appreciation, family time, community service. If AI could choose what to do, why not choose to eliminate potential competitors, as in the “anyone builds AGI, everyone dies” scenarios?

Howard Yu

Really important piece, Noah. Thank you for pulling this together.

The turbocharged Robber Baron dynamic you describe is fascinating... because Claude's revenue is surprising the market, and AI inevitability is only accelerating. But non-AI companies, the ones making mechanical devices, drugs, groceries, chemical inputs, aren't going away.

For them, the play is vendor agnosticism. You can't just depend on Claude or ChatGPT or DeepSeek or Qwen. As Martin Casado at Andreessen Horowitz recently put it, 80% of the startups pitching to a16z that run open-source models are running on Chinese ones. Being vendor agnostic gives you bargaining power. Companies like Honeywell, GE, Glencore, Rio Tinto should be asking right now: how do I make sure my data, my information stack, can be ported from one platform to another? Not locked into SAP HANA, not locked into Copilot, not locked into any single provider.

If enough companies do this, we reduce the dependency of entire economies on a handful of firms. Margins stay competitive. Governments are in a stronger position to keep the Robber Barons in check.

Here in Switzerland, we treasure market capitalism... but that's always been built on more competition, not less. Regulation only gets you so far. General awareness among business executives might actually do more.

I've been spending time thinking about exactly this in my own newsletter... as transaction costs plunge and what I've been calling "Coasean singularities" are born, how do non-AI companies prosper nonetheless? I think it really is possible.

Worley

There are three things about AI that I haven't seen written that should be:

1. 20 years ago we didn't understand language and couldn't build systems that could process it. Now, we still don't understand language but know how to build systems that can process it. WTF???

2. The world has never been run by the most intelligent humans. So why are we worried about AIs taking over?

3. Up until recently, a very small fraction of people were employed to do work that was primarily cognitive. Even now, cognitive jobs are the minority. AI threatens to demolish the economic value of cognitive labor. So, like the cottage weavers facing the weaving mills, cognitive workers' work is going to get wiped out and their social status will be reduced. But the majority of the population don't do that sort of work; their relative social status will increase, and their income (level of consumption of goods and services) will increase. Of course, all the commentators are cognitive workers...