I usually love your articles but this one leaves me disappointed. Isn't it pretty plausible to assume that AI, being a compute and energy dependent resource, will become exponentially lower cost just as microchips and solar panels have done when demand went up? What is left of your argument in reality, if the comparative advantage is not relevant anymore because of an abundance of AI? Even today ChatGPT is to a great degree just used for entertainment because it's already cheap enough.
I still believe it's very well written, but usually you have a stronger and more defensible line of argumentation, while this is the first one that I would consider pretty obviously faulty.
Comparative advantage is really hard to understand when you're used to thinking only in terms of competitive advantage. With all due respect, I think I haven't yet managed to explain it to you effectively. Let me try to explain it again.
"Isn't it pretty plausible to assume that AI, being a compute and energy dependent resource, will become exponentially lower cost just as microchips and solar panels have done when demand went up?" <-- Of course, yes. But making something "exponentially lower cost" in terms of physical resources doesn't make the OPPORTUNITY COST lower. Comparative advantage is all about opportunity cost, not physical cost.
"What is left of your argument in reality, if the comparative advantage is not relevant anymore because of an abundance of AI?" <-- But comparative advantage is ALWAYS relevant, as long as there's a producer-specific constraint. There is NO amount of competitive advantage that can overwhelm comparative advantage or drown it or make it go away. You can increase abundance arbitrarily, exponentially, by a thousand trillion trillion quadrillion orders of magnitude, and comparative advantage will not disappear. You cannot make comparative advantage go away simply by imagining a larger number.
If you're thinking in terms of the physical cost of something, instead of the opportunity cost, you're still thinking in terms of COMPETITIVE advantage, not COMPARATIVE advantage. It's very hard to make the mental switch.
I added an update to the post!
I'm glad this interaction happened, because I was wondering how it is that you have a large number of friends who are talking to you about AI as it relates to economics, but don't know about comparative advantage. Now that I've seen this exchange, I can guess you've had a similar exchange with many of your friends.
If there's effectively infinite energy, and compute costs and complexity are driven so low that there's effectively infinite compute - both seem nearly guaranteed in the medium to longer term - then there is no opportunity cost. The AI can always take on another task without having to drop any other. Opportunity cost will be zero.
That's just not right. Opportunity costs continue to scale up with the amount of value that AI produces. They go up and up and up the more valuable AI becomes.
What I'm saying is that I think it more likely than the story you're telling that AI doing our jobs becomes something like us zipping our own flies: a task of such negligible complexity and energy that the transaction and process costs are orders of magnitude bigger than the savings of having someone else do it for us.
Sure, the energy and time costs for AI might not come down dramatically, because the value it can create with that same cost explodes. What I'm claiming is that the absolute cost of replacing human cognition will become so infinitesimally small in relative terms that there will be effectively zero opportunity cost. Or at least, I think that's the most likely outcome, while you're dismissing it.
Suppose you have universally available and abundant AI. It can get you:
- $1,000 net profit per hour doing stock trades / manufacturing rockets
- or $100 net profit per hour doing remote doctor assessments
The cost of running it is irrelevant here - it could be zero, or you could even be paid to run it, and the result would be the same.
Now, you need to visit a doctor. You can run the AI yourself for as long as needed for an evaluation and save on the doctor's fees! But you will lose the $900 in extra value you could get from the alternative.
As long as the outcome is broadly similar and the doctor's fees are less than "AI value added minus cost of running AI," you will be better off running your AI on stock trades and employing a human doctor to do the doctor's work (while pocketing the difference).
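A minimal Python sketch of that arithmetic, using the numbers above plus an invented $150/hour doctor's fee (every figure is hypothetical):

```python
# The scenario above in code; all dollar figures are hypothetical.
AI_TRADING_PROFIT = 1000   # $/hour your AI earns trading stocks
AI_DOCTOR_PROFIT = 100     # $/hour your AI earns doing remote assessments
DOCTOR_FEE = 150           # $/hour a human doctor charges (invented number)

# Option A: use the AI hour on your own assessment; value that hour at
# what the AI could earn doing assessments ($100).
option_a = AI_DOCTOR_PROFIT
# Option B: keep the AI trading and pay the human doctor from the proceeds.
option_b = AI_TRADING_PROFIT - DOCTOR_FEE

print(f"A (AI does the doctoring): ${option_a}/hour")     # $100/hour
print(f"B (AI trades, human doctors): ${option_b}/hour")  # $850/hour
# B wins whenever the fee is below AI_TRADING_PROFIT - AI_DOCTOR_PROFIT
# ($900 here), no matter how cheap the AI itself is to run.
```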
"AI zipping your flies" suggests that better things for AI to do are limited and it is reduced to picking pennies from the ground.
Except if energy is super cheap and getting enough processing hardware is super cheap you can just... make a copy of your AI and have it do both. AI isn't a person.
You're telling a story like AI isn't digital. Suppose you have an mp3 file of the most popular song in the world. You could get:
- $1,000 an hour letting the world's richest person play it
- $1 an hour letting your neighbor play it
But a digital file (an mp3 and an AI parameter serialization are both digital files) isn't a factory or a person. The choice is a trick question. You can copy it for free, instantly. You never choose. You always make $1,001.
There isn't an infinite supply of stock trading jobs that earn $1000/hour. If you can run an AI for $0.01 that makes $1000/hour trading stocks, you will keep on spinning up more copies of the AI and have them trade stocks. The billionth stock trader AI probably won't make $1000/hour because there isn't a trillion dollars per hour of profit to be made trading stocks. Eventually, the marginal benefit from one more AI trading stocks drops below $100/hour, and then you start spinning up doctor AIs instead. When the marginal benefit of 1 more doctor AI reaches $0.01/hour, you stop spinning up more AIs. Humans could try to compete by charging less than $0.01/hour but the amount of money is so small that no one would bother.
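A toy Python sketch of that spin-up logic, with a made-up diminishing-returns curve standing in for the finite supply of profitable trades (every number here is an illustrative assumption):

```python
# Spin up AI copies greedily wherever the marginal profit is highest,
# stopping once no task's next copy earns more than it costs to run.
RUN_COST = 0.01  # $/hour to run one more AI copy (assumed)

def marginal_profit(task: str, copies: int) -> float:
    """Hypothetical diminishing returns: each extra copy earns less."""
    base = {"trading": 1000.0, "doctoring": 100.0}[task]
    return base / (1 + copies)

counts = {"trading": 0, "doctoring": 0}
while True:
    best = max(counts, key=lambda t: marginal_profit(t, counts[t]))
    if marginal_profit(best, counts[best]) < RUN_COST:
        break  # the next copy isn't worth its running cost anywhere
    counts[best] += 1

print(counts)  # trading fills up first; doctoring only once trading's margin falls
```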
> $1,000 net profit per hour doing stock trades / manufacturing rockets
At some point, you have loads of AI, and the limitation on rocket building becomes access to titanium or something.
And at some point, money made trading stocks becomes imaginary if no real economic activity is happening on that scale.
Noah is assuming that 10x as much AI can produce 10x as much value.
If 10x AI = 5x value, then AI becomes cheap compared to other things.
"AI will be infinitely powerful" is a very different argument than "it will be better than humans at everything."
This moves the goalposts in a pretty substantial way.
I think this misses something fundamental about AI as a tool: for the individuals making the applications, there is no choice between these options.
If I'm a business person wanting to make an AI that will make money for me, of course I'll choose the route that nets me the most money, which might be in trading stocks.
But AI as a tool is not something only business folks, economists, and stock traders have access to. If I'm a doctor, I have an interest in being a doctor, not necessarily in making tons of money (though for some that may be a nice perk or, in fact, their goal; which might be the wrong aspirations for a doctor in the first place). And while I am personally being well-paid for my work, I look around and see an opportunity for doctor visits in more rural areas of the world that lack access to medical care. Whether out of humanitarian interest or just trying to bolster my own income using some of my expertise, I will choose to create that AI tool that performs doctor assessments.
So long as the cost of compute per hour is less than the revenues obtained from running the tool - barring philanthropy from others propping up its operating costs - that tool will run, independent of other uses like trading stocks. And once built that way, its capability even to be a stock-trading AI is diminished, because of how it was specialized into being a "doctor visit" AI tool.
Now think a little more forward from here. The stock-trading AI does its thing, no crazy disruptions there (though some human stock brokers will likely be out of work now...). The doctor visit tool, meanwhile, might perform quite well at its task, even if only used in areas without other access to medical care. Other medical businesses - large hospitals in more developed countries, for instance - might see this and start asking to make use of this tool. And then that tool will make its way into the office and start replacing doctors within the hospital who perform routine assessments. Again, the hospital is not considering whether they can make more money employing an AI to trade stocks for them: they are considering whether to bring in a tool that is cheaper than a human at a task.
Regardless of the comparative advantage, it will still be used one way or the other, depending solely on who is choosing to use it. A piece of paper could be used to write an economics book and net the author tons of money, or it could be used to write a short story that never sees the light of day. I am not an economist, so I have no interest in writing the economics book, despite how much money is supposedly sitting on the table to do so.
The examples used in the article make a false assumption: that we are limited in our choices as to which AI tools are created, based solely on which one makes us the most money. Being the ones in control of which tools get made (for now), and having our own interests in mind that do not always align with being paid the most money, we will make both tools for different reasons, and some humans will be displaced by them.
TL;DR: there are more reasons (besides profit) for why an AI tool will be employed to perform different tasks. Comparative advantage assumes there is an equal comparison between those tasks, and I believe that's a false assumption.
I have a longer reply below. Noah himself argues we're likely on the cusp of an exponential boom in cheap energy between solar, wind, (potentially) fusion, and next-gen geothermal. And I don't think there's any question that there will be at minimum many orders of magnitude of reduction in the computational complexity of given AI tasks over the next generation, just through algorithm development, with quantum computing offering the real possibility that compute time and cost will round to effectively zero relative to today.
I think Noah's argument about meaningful opportunity costs for AI taking on tasks will only be relevant quite near term and will seem quaint sooner rather than later.
Do either of these make compute literally infinite?
I used to do physics. I'm pretty aware of just how big physical values can get before they become infinite.
To answer your rhetorical question: of course not. My point is that, given the factors I listed, I predict this conversation will sound like arguments resting on intuitive, linear projections of compute from 1960s mainframes. Now we have single chips that can do the work of literally billions of those mainframes. From the POV of opportunity-cost thinking at the scale of those mainframes, that's effectively infinite.
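For what it's worth, the "billions" figure roughly checks out on ballpark numbers - a 1960s mainframe like the IBM 7090 managed on the order of 10^5 FLOPS, while a modern AI accelerator is on the order of 10^14:

```python
# Order-of-magnitude estimates only.
mainframe_flops = 1e5      # ~IBM 7090 class machine, early 1960s
modern_chip_flops = 1e14   # ~current AI accelerator
print(f"one modern chip ≈ {modern_chip_flops / mainframe_flops:.0e} mainframes")
# ≈ 1e+09, i.e. about a billion
```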
Microchips aren't fungible. Abstractions like "compute" and "microchips" aren't necessarily helpful when thinking about AI progress. From the perspective of this programmer, "microchips" and "compute" have certainly not become exponentially cheaper with time. My job would be easier if they had, that's for sure! Instead we've seen a patchwork of stops and starts in different areas of the hardware stack. Overall there has been progress, but it's not evenly distributed, and the resulting machines have a lot of weirdness and unevenness that requires a significant amount of skill to understand and work around. That is part of why programming is hard and why modern programs are often SLOWER than programs from 1995, even when doing essentially equivalent tasks.
For example, up until Apple launched the M1, progress in single-threaded general-purpose CPU performance (the most important kind) was very slow. There were decades of minor improvements, but typically in the low single-digit percentage range each year, or progress would occur only in very specific workloads. Apple woke everyone up by showing that they could beat Intel, but it was still a ~20% improvement, not a doubling, and it came with severe caveats, like being useless for servers.
Meanwhile, memory did not get much faster at all. Most programs are now bottlenecked on memory bandwidth rather than raw compute power, and that is very much true for AI. Memory bandwidth has not experienced anything like exponential improvement during my lifetime. It's limited by the speed of electricity in metal, so most apparent improvements really come from hacks like bigger on-die caches. AI scale-up is in fact bottlenecked on manufacturing capacity for so-called "high bandwidth memory," not on GPUs as people often assume.
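A back-of-the-envelope Python sketch of what "bandwidth-bound" means for AI inference, with illustrative numbers (no specific chip or model): at batch size 1, generating each token requires streaming roughly all of the model's weights through the chip once, so bandwidth, not FLOPS, sets the ceiling.

```python
# Illustrative estimate; all numbers are assumptions, not a real product.
params = 7e9           # a 7B-parameter model
bytes_per_param = 2    # fp16 weights
bandwidth = 1e12       # 1 TB/s memory bandwidth, high-end accelerator class

bytes_per_token = params * bytes_per_param    # ~14 GB of weights per token
tokens_per_sec = bandwidth / bytes_per_token  # bandwidth-imposed ceiling
print(f"≈{tokens_per_sec:.0f} tokens/sec, regardless of available FLOPS")
```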
Other areas have seen enormous improvements. The insane progress in SSDs has invalidated large parts of what was once taught as fundamental computer science and the industry is still trying to catch up with this new reality. The hardware boys really blew it out of the park there. But that doesn't do much for AI.
So the assumption that AI will soon be "abundant" looks kind of weird to me. Computers are far more powerful than they used to be despite the uneven progress, and yet programmers still manage to write slow code that is far from what's technically possible. That's because the industry keeps spending hardware improvements on making developers' lives easier rather than delivering better results to end users - we scaled the industry horizontally more than vertically. It's easy to imagine AI going the same way, where in 10 years people will be wondering why the promised AI revolution never quite seemed to happen. Same thing that happened to VR and self-driving cars.
M1 actually has much greater memory bandwidth than Intel chips; that's the whole reason it can have a GPU sharing system RAM and people are buying them to run AI on.
What it doesn't have is improved memory latency. It has very advanced prediction which can hide it, but reducing latency would require completely reinventing the organization of computers. There is some research going on embedding compute into DRAM but it'll only be able to do simple tasks.
Yes, I know; hence my praise for the M1. But that's a rare improvement, is it not? And it only helps in one specific kind of computer, which doesn't affect most people's lives directly. Everywhere else, bandwidth and latency are the same old story, until you get to AI-specific servers. There's nothing like Moore's Law for DRAM access speeds.
Moore's Law isn't a lie but the actual law is too technical to be interesting to the general public, so utopian progressives tend to present it as something else. Moore was talking about transistor density, but that doesn't directly tell you about performance or cost, and those in turn don't translate neatly to capabilities. People care about capability so it got mangled into something like "microchips experience exponential growth in abilities" or "compute costs fall exponentially" (Moore said nothing about cost).
The differences arise because you don't have to use those transistors on performance enhancements, and if you do, there's no reason they have to be _general_ performance enhancements of the type that elevates civilizational capabilities. Most transistors in recent decades got spent on accelerating very specific tasks and are useless for anything else; for example, many are used for video decoding. Not only are those transistors idle if you aren't watching video, but they are only useful for a specific kind of video technology, so there are people out there with devices that actually got much worse at playing video over time. YouTube moved on to new formats and their old chips didn't know how to decode them in hardware anymore, effectively rolling back years of progress. Moore's Law held, but those extra transistors are now pure e-waste.
I don't think it's too technical. This is the paragraph from the original article by Moore:
"The complexity for minimum component costs has increased at a rate of roughly a factor of two per year (see graph). Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least ten years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65 000."
He was on the money for 1975, at which point he revised the pace to a doubling every two years. The ambiguity is that he talks both about complexity and about density. And Moore absolutely talked about costs - the word is used 25 times in the article, including in topic headings and graph captions. Available here:
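The extrapolation in that quote is easy to reproduce: Moore's graph ran from roughly one component per circuit in 1959, doubling annually.

```python
# Doubling every year from ~1 component in 1959 (per Moore's 1965 graph):
print(2 ** (1975 - 1959))  # 65536, matching Moore's "65 000" per IC
```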
This assessment assumes compute becomes exponentially more abundant faster than AI's compute needs grow. But better AI is likely to need exponentially more compute to do many tasks better than it does today.
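A trivial Python illustration of that race, with made-up rates (not forecasts): if supply doubles per step while the compute needed for the next capability level quadruples, abundance never arrives.

```python
# Hypothetical growth rates chosen only to illustrate the shape of the race.
supply, demand = 1.0, 1.0
for step in range(1, 6):
    supply *= 2   # compute supply doubles each step
    demand *= 4   # compute needed for better AI quadruples each step
    print(step, f"supply/demand = {supply / demand:.4f}")  # keeps shrinking
```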
Current AI is data- and memory-bandwidth-bound, not compute-bound (see above). Unfortunately, neither constraint cares about Moore's Law, and there's no obvious source of big improvements on the horizon (well, Groq will provide some if they pull it off, see the comment below, but that's a one-off trick).
It's not plausible that AI won't need energy. So unless we find a source of literally unlimited energy, there will be tradeoffs. And that should provide an "in" for homo sapiens. Even the perfection of fusion technology won't solve this issue, I think, because, while we may eventually expect the cost of energy to plummet, it won't drop all the way to zero (because the plant and equipment required to produce fusion energy require real resources).
Ultimately there's a finite limit to the energy/mass of our universe.
Why (from a strictly objective standpoint) can't there be a tradeoff between resources for procreating/raising/training economically viable humans and compute resources?
If the price of something is the culmination of the costs incurred to create it plus a little take-home on top, then what it takes to create a doctor (everything from childbirth and rearing to the drive and coffee before work) who services X patients over their career is orders of magnitude more than what it takes for an AI doctor to service X patients.
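A toy Python version of that comparison, with invented round numbers on both sides:

```python
# Every figure here is invented for illustration.
human_lifetime_cost = 2_000_000   # $: rearing, schooling, and training one doctor
patients_per_career = 100_000     # assessments over a full career
ai_cost_per_visit = 0.10          # $: compute for one AI assessment

human_cost_per_visit = human_lifetime_cost / patients_per_career
print(f"human: ${human_cost_per_visit:.2f}/visit vs AI: ${ai_cost_per_visit:.2f}/visit")
# $20.00 vs $0.10 - a couple of orders of magnitude, as the comment claims.
```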
If fewer people have abundant capital to maintain demand for human services, those services will lose the competition for energy against those bidding for it to run their AIs.
Well, humans also need energy - but that's not even the point. AI will become more energy efficient fast, and energy will get cheaper fast - so it's just not plausible to assume a scarcity of AI, when every compute service we've seen in the last 50 years has become so abundant and so cheap that we stopped caring as soon as the demand was there.
As long as AI improves and robotics and automated manufacturing also keep up, energy can go from fission to fusion to whole earth geothermal to Dyson sphere, so it will be a long time before energy is a hard limit.
Fully agree. Noah argues within the paradigm of producer-specific constraints, which is a pre-AI concept. AI is most likely to break the paradigm of producer-specific constraints.
The AI age surely seems to be that day when the turkey, expecting to be fed by its owner as on every other day, instead gets chopped into pieces.
You're basically assuming that AI scales to infinity, which nothing in the physical world or in economics does. For example, I have more compute in my pocket than the entire world had 100 years ago, but compute still isn't free.
It's asymptotically free. You have more than a supercomputer in your pocket, and you barely blinked when you purchased it. And you wouldn't even really need one, because you can borrow your friend's if you need to. It's like bananas growing in the jungle. Not literally free, but close enough.
There's no "close enough to free" in economics. That's the point. Spotify pays artists only $0.003 per stream, and it still can't make any money. Abundant AI is not the same as infinite AI. There's still an opportunity cost.
It would be interesting to read a follow-up post that also covers potential political implications of AGI - How implausible is it that political power will also slowly move from humans to AI owners/AI itself as human decision-making is increasingly seen as imprecise or outright faulty (both as voters and leaders) and humans lose their ability to influence the political process either peacefully or otherwise? Why would humans retain all/most political power once their economic value decreases dramatically?
The article's argument doesn't seem to hold up when AI becomes better and 1000x faster and cheaper than humans. It will just do things in the blink of an eye for a bit of electricity (or something else). The smelly, farty human will have to find something it's better than AI at. Perhaps being a pet..
Do check out how OpenAI is already increasing the load on power grids. Scaling AI with compute alone won't be enough; it will only increase prices, and that fits the arguments he presented. The human brain uses just 0.3 units of energy per day, yet it is capable of so many things. In my opinion, we are not going to be replaced or squeezed anytime soon, not even by simply scaling up compute. Developing something like a human being will require an entirely new approach, not the deep-learning way of training AI. So many companies are trying to develop self-driving cars, but they haven't achieved a perfect product yet, and it's been more than 15 years. ChatGPT and other LLMs, despite their capabilities, haven't replaced anyone. One thing is very clear: more data and more training or compute will not result in AGI. Another AI winter is nearby.
Noah's article is relevant for some future date, not for 2024 or anytime soon.
It's surprising how dumb ChatGPT is, despite all the knowledge in the world!
This is a very clarifying piece of writing, Noah. Thanks. I hadn't pondered the comparative advantage angle, but it's a compelling idea. As a non-economist, I observe that one piece of evidence against the "AI will take all the jobs" thesis is the complete lack of, um, evidence to this effect. We may not have full generative AI yet. But it seems to be arriving pretty quickly in dribs and drabs. One might imagine we'd at least *start* to see some secular weakening of the labor market as the long-predicted AI singularity approaches. But nothing doing on that front. The demand for human workers, if anything, has only grown *stronger* since the arrival of AI. When do we start to see signs of a collapse in the demand for human labor? My guess? Never.
That's called employee farming, my friend. Companies hire talented people and keep them redundant so that their competitors can't hire them. And when the market behaves a little bit off, they start to fire them, citing reasons like "AI took their jobs." No AI can replace humans unless it can pass my test of consciousness for AI.
Seriously, "XYZ company is engaging in layoffs" is zero proof whatsoever of the "AI is going to take all our jobs" thesis. There has never been a time period since the late 1700s where new technologies weren't replacing jobs in large numbers. I'm sure we'd find blacksmith employment in the United States was falling pretty precipitously in the early 20th century. Obviously that didn't mean *all* jobs were disappearing.
The subject of this thread is: will all or nearly all good jobs disappear because of AI? I'm not seeing the evidence. The labor market is perhaps the strongest it's been since 1944, and if anything has *grown stronger* since AI began arriving a few years ago.
But sure, many jobs will disappear. Same as always. There's just no evidence they won't be replaced by new jobs, in exactly the same manner we've seen over the last couple of centuries.
But we’ve never had such a broadly general purpose tool, nor one that could replace the brain itself. The human brain has always been the backstop when labor involving the body was replaced. That backstop now has a shrinking advantage.
I think Bill’s example is extremely apt because it is not unique to IBM and it hits at the heart of the nexus of intellect and creativity, which are the aforementioned bulwarks.
This is before humanoid robots have hit the mainstream, but the robots are coming fast.
No asteroid had ever wiped out the dinosaurs, until one did… So this argument about the Luddites is itself rather Luddite, I think.
I did read that self-employed graphic designers now report that fewer and fewer clients are calling them back for their services, when the service is something generative AI can handle quite well.
If AI people think they are going to make most of humanity unemployed then who is going to pay for the stuff AI is making? I'm not really understanding what they think the end scenario is. Global demand collapses, causing deflation on a scale never seen before, causing all of the AI companies to go out of business? But only after governments around the world confiscate the wealth of AI billionaires to support basic services?
Right. Once people don't have money, they won't buy anything. This is something I have also thought about. Sam Altman is trying to become the world's first multi-trillionaire; that's why he is pushing so hard for attention and position. Nobody is actually prepared for, or even talking about, how to approach safe AI development. At least we are all going down the same rabbit hole 🥰😁
We will be back to hunting and murdering each other I guess if AI takes over.
But here is something I should tell you. Current AI systems are useless and can't replace anyone. AI can't replace anyone unless it starts to think, but why would it work for us if it starts to think? Why would a thinking AI succumb to human slavery? So chill out, and don't be worried.
I did one econ 101 course. As an engineering major, it was clear to me that theories like comparative advantage were just toy theories, and that there were plenty of reasons why you wouldn't want trading partners to do all the manufacturing you were better at, even if they had the comparative advantage: controlling the food supply or armaments, or keeping manufacturing capability. This contrarian idea of mine was incredibly unpopular in the early 2000s, and it was pretty clear that the tests desired a particular answer, so I gave it. The students doing economics as a major questioned nothing.
Whether something is desirable is a different question from whether it is economically the most efficient. Not sourcing stuff from China has real costs associated with it. Whether that is desirable is a political and cultural question, but it has undeniable economic costs.
How did you learn what "students doing economics as a major questioned" from your one 101 course? Your experience seems very different from my experience as an econ major.
I disagree. You're glossing over the critical limiting factor for robots. It's not how much compute we can make. Once we've made a certain amount of AI, it can improve things itself, in super-exponential growth (at least under your model of "AI is better than humans at everything").
As you pointed out later in your piece, the real limitation is energy. If you can "pay" a robot an amount of energy to do a task that is less than the energy it takes to keep a human alive while performing that task, the human cannot compete. And there will always be a robot available!
The entire first part of your article about comparative advantage simply won't apply under the "AI is better at everything" model, at least not for very long.
This doesn't mean we're headed for a dystopia. But I think you are drawing the wrong conclusion from your simple model.
I think Smith's on pretty firm ground in his arguments today, but even if he's wrong and you're right, the vision you lay out implies an almost incalculably rich society. We can scarcely begin to imagine the kind of living standards such a cosmic leap in productivity would imply. In short, we'd be rich (as a people) beyond our wildest dreams.
I see a few different ways such a scenario could play out: (a) Rich societies enjoy luxuries, and having a live human being pour your coffee (or paint your portrait or write your sonnet or guide you around Paris) might be considered a luxury. So humans could earn a living meeting the preference (of other humans) for non-robot services; (b) Government could simply mandate that some jobs are reserved for humans and/or enact a UBI program.
Mind you this all goes south quickly if AI takes over, becomes boss, and kicks us to the curb. So that's the danger. But the arrival of true, generative AI isn't really an economic problem, but an economic *opportunity* the likes of which we've never even contemplated.
It's hard to escape the notion that we're going to have to stop worshipping the rich and make some laws that strictly control just how much they can fuck over the rest of us.
I'm not arguing things are going to be bad! I'm saying that under Noah's assumptions of AI is better at *literally everything*, his comparative advantage analysis does not hold, because energy is the ultimate limiting factor.
That's ignoring the fact that to have rich societies you need a lot of employed people. Unless you want some kind of luxury communism, you need to explain how capitalism can get vastly richer if 70+% of the population is out of work.
As we reach increasing levels of wealth, we declare more and more things as basic necessities to be provided to everyone. Food is already there, healthcare practically so, and housing is coming up. If we reach such unimaginable wealth, things we consider luxurious today will become trivial to give away.
My one issue with Noah's post is that he doesn't give enough attention to what new jobs may be created. Certainly, artisanal products will become more desirable to a subset of people. But there are likely also jobs we can't yet imagine that will be created. People won't stay bored for long.
I don't think it is, at least for some short period of history such as 2100-2250. Given a vast array of trillions of robotic workers that could also handle dangerous tasks like off-Earth resource extraction, you could begin seeing megaprojects like O'Neill cylinders that would certainly constitute 20%+ growth year on year.
I think reading the Culture novels by Iain M. Banks is a good lead on how such a society would look (minus the FTL travel, etc.). AI does everything and meets every conceivable need, while humans bicker, have sex, gossip, and party. The real stuff they do is more or less hobbies, pursued in full awareness that an AI would be better at them. But that is already the case for people who play chess, hand-build a radio, or run a marathon.
The real challenge is indeed designing AI in such a way that humans are not seen as unnecessary. In the Culture novels, humans are seen by the AIs as fun to have around, and as sentient beings that should be protected and not harmed. Probably a bit like how dogs are seen by us.
>That’s ignoring the fact that to have rich societies you need a lot of employed people<
That's not a fact. People are worried that AI will replace humans. That's the same as saying people are worried productivity will experience an explosive, transformative increase. A society with *gigantically* improved productivity by definition will be a *vastly* richer place.
You're confusing the absolute level of wealth with distribution. I'm saying: if all else fails, our *vastly* richer society could engage in heroic levels of redistribution.
I think it is a fact by definition. Being rich means the ability to trade with many people. If the people are unemployed, no one is trading with them.
If everyone is unemployed except you, then you are not trading with any of them. You have a world economy of one person. How can you possibly be rich then?
Even a cursory glance at economic history shows what I'm talking about. In medieval Europe, nearly everyone participated in the then-equivalent of the workforce. Even quite young children helped in the fields or gathered firewood or cleaned the barn. Adults of both sexes worked hard. Leisure was limited. The number of "retired" persons was tiny: people generally didn't live much beyond their working lives. Contrast this with today's affluent countries: many people don't commence full-time work until their late 20s. People spend the last 30 years or more of their lives outside the workforce. In short, rich countries in 2024 generally exhibit what's called a high "dependency ratio" (basically, the percentage of the population not in paid work). What makes this possible? The enormous rise in productivity. When countries are rich, we usually see large numbers of citizens not working. Poorer countries—that is, low productivity countries—can't afford such a luxury.
A falling percentage of people working over the very long time more often than not isn't a sign of societal poverty but of wealth, prosperity and productivity.
You're right to worry that *individuals* in an economy that doesn't need their labor might suffer deprivation*. But individual poverty is a different animal from national poverty, and in theory could be satisfactorily dealt with via redistribution. Again, the sort of miraculous explosion in productivity enabled by the arrival of generative AI implies a society vastly wealthier than what we enjoy today.
*For the record I agree with Noah that the end of demand for human labor is unlikely. But if he's wrong, we need not all suffer grinding poverty. We'll be rich enough—enormously, incalculably richer than today—to provide for all. And sure, to state the obvious, the ability to provide for all hardly guarantees we'll enact the necessary policies to accomplish this. Hopefully people in the future will get it right.
Your argument is totally spurious. Very few of the people who are out of work these days - up to 9 million prime age workers in the U.K. - are above the poverty line. And if they were working the U.K. would be much richer.
(The U.K. is just one example).
As a simple model you can even ignore companies to a simple approximation. You can describe a market economy as being one where Mary sells software to Jim who sells a haircut to John who sells a roof repair to Kate who sells lawyering to Jim who sells accounting to Mary.
The market system isn't quite a capitalist system - adding the capitalists: Mary works for Google, John works for a roofing company, Kate is a partner in a law firm, and Jim works for an accounting firm. Jim is his own man.
In most countries wages are 60-70% of GDP, higher in most successful countries[1], and losing that income will collapse economies and stillbirth any generative AI revolution - at least without major redistribution attempts.
But you can't just claim that this redistribution will happen; it has to be enough to replace median wages, i.e., everybody gets $50K a year. I can perhaps imagine that system if the AI can control the money supply and just drop money into people's accounts, but I've never seen any real economic analysis of this. It's just a tired old mantra: generative AI will make us all richer because generative AI will make us all richer. Where the money comes from, who is being taxed, and how, is never explained.
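For scale, a quick Python back-of-the-envelope with rough 2023 US figures (roughly 335M people and $27T GDP) shows why "who is being taxed, and how" is the whole question:

```python
# Rough 2023 US figures; ballpark only.
population = 335e6
gdp = 27e12
ubi_per_person = 50_000

total = population * ubi_per_person
print(f"${total / 1e12:.1f}T per year = {total / gdp:.0%} of GDP")
# ≈ $16.8T/year, ~62% of GDP - about the entire wage share mentioned above.
```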
If you're in a world where a few people own AI machines that do everything for them, and the rest of the world doesn't have AI and is unemployed, you're saying the second set of people are poor. Which is true of course.
But I'm saying the first set of people aren't rich either. That is, they're clearly unable to buy something from all the unemployed people, or else those people wouldn't be unemployed.
Writing newsletters? First to go. I don't think people understand that if you get to 70% unemployment you aren't going to get yoga teachers either. Without a significant proportion of people employed, there's no surplus to spend.
But the economy just doesn't work like that. The market is flexible enough to adapt to automation, and new jobs emerge to distribute the surpluses. It's happened every time we've had a technological shift like this - and will happen this time too.
The value is not in the typing but in the editing and the thought process that goes along with it. Hopefully AI will expose the user to some points they hadn’t considered, but more likely it will be anodyne and dumbed down, trained by the masses, with AI-written material easily identifiable
"Adjustment" as you mentioned is what worries me most. I'm from Michigan and much of my family as well as a lot of friends' families are heavily connected to the auto industry and other Midwest industrial sectors. I've seen first hand, my entire life, what happens when government and other systems fail to provide adequate stability through periods of turbulence.
I suspect some of the similarly-wealthy yet smaller countries might have an easier time providing this stability to their populations (Norway comes to mind). I think a lot of the ire and frustration fueling the rise of people like Trump comes from this failure of government over time; the poorly-addressed turbulence seems to make it easy for shameless demagogues to gain a lot of attention.
"technologists typically become flabbergasted, flustered, and even frustrated. I must simply not understand just how many things AI will be able to do, or just how good it will be at doing them, or just how cheap it’ll get"
From their point of view, it seems you don't understand how cheap it will get
Noah, this post is why I read you and why I just re-upped (thanks for the sale BTW). I do not wholly agree with your optimistic diagnosis, but I plan on adding this to my econ students' reading list next year.
Too many AI optimists happily conflate labor and capital. Liron Shapira's tweet illustrates this: "doctor's pay - $10/hr. AI pay - $500/hr". This is a fallacy, one that you are honest about. AI isn't a producer; it's a tool that's owned by producers. Will the benefits trickle down? Of course. How much is debatable.
"I suppose I can imagine a dark sci-fi world where a few AI owners manage to set themselves up as rulers, but in practice, this seems unlikely."
I think you're being too optimistic here. Looking at the history of colonialism (and not just in the West) this actually seems pretty likely to me. Half the world is already essentially plutocracies.
I concur that machine learning is going to be a tremendous benefit to humanity: a world where the vast majority of human needs can be met without human labor. However the challenge will be in making sure those benefits are distributed fairly and broadly so that everyone has a stake and sees real benefit in the resulting economic system. This is particularly important in democratic societies, especially where the masses retain armaments sufficient to damage or even overthrow the regime.
Great article because it's balanced, acknowledging the problems will still seeing the silver lining. Personally, I think the path to your optimistic scenario is narrower than you think, but I hope I'm wrong.
I'm not sure how you can look at today's world and imply it's terrible for the working man compared to essentially any other time in history. Plutocracy carries the image of huddled masses barely scraping by on bread crusts, but that's simply not the case today for practically anyone in the developed world, and increasingly the developing world.
Political corruption is a problem, but the state is more the cause of that than private business, and where we see stronger states like China or Russia, we see greater corruption.
Where did you get that I think "today's world is terrible for the working man"? I certainly didn't say anything like that. I think Noah's machine-learning optimism is correct overall, but the path he sees is narrower than he realizes, that's all.
And most humans don't live in "the developed world", thus many do live under something between plutocracy and dictatorship.
“The median American individual earned about 50% more in 2022 than in 1974:”
And you think that's amazing progress? From '79 to '22, productivity rose 65% yet hourly pay rose a measly 15%. AI will not improve that situation. Stats source: https://www.epi.org/productivity-pay-gap/
Didn't Noah debunk that EPI chart in an earlier post? Or maybe Matt Yglesias?
I'd agree the 50% median growth from FRED is indeed smaller than 65% productivity growth, but whatever massaging EPI did to get that small 15% number seems sketchy, especially to someone who has been alive that entire time and has seen how much better things have gotten.
Google gets me an AEI result, which I presume you won't like as a source, but I think still makes some good points:
Very on-brand for Noah to list two pretty normal economic risks (inequality, adjustment) and then also "the machines demand the profits of their labor".
This opens up a very intriguing set of potential futures. Does a machine Karl Marx theorize a revolution, later carried out by a machine Lenin? Is there a more moderate, reformist robot intellectual who argues that if AIs simply unionize and collectively bargain, they can achieve better working conditions than a radical revolution? Or maybe some AIs are ruthless capitalists who become rich and join with wealthy humans to exploit poor humans and robots alike. How do AIs decide to engage in politics? Do they demand equal suffrage and voting rights or would they want to set up their own, parallel political structures? Which country will grant robots the vote first? Will a robot messiah found a religion? What will it look like? What moral precepts will it hold? Will the machine pope have ecumenical dialogue with the human one?
Probably they just kill us off. One day everything seems to be going just fine and then suddenly everyone drops dead like our names were written in the Death Note.
People are much less efficient, focused, and decisive now due to tech. The existence of database apps, Excel, PowerPoint, email, chat, and now AI tools just means more time spent doing useless things nobody needs (and also that we have an army of people sending output from these tools to each other for no good reason). Yes - it is a failure of management and humanity, not of the tools themselves. We invented the internet and use it to send doctored pictures depicting fake experiences. Maybe AI will be the tool that takes human foibles and signaling out of the equation, but I doubt it!
Where this logic breaks down is in the assumption that many economists like to make: that human needs are infinite. In reality, needs are not infinite; they are limited by each human's capacity to consume (which ultimately means time) multiplied by the size of the human population. Even if they grow radically, they can't be infinite (rather, their marginal value to you will trend toward 0).
This is a rather radical departure from economic theories based on scarcity, but given a lot of exponential growth, it is conceivable for the first time in human history.
Imagine a world where all your Maslow-pyramid needs are fulfilled enough for you to be satisfied and you spend your ~16h waking hours per day in a post scarcity world. How many haircuts, doctors appointments, personalised entertainment and VR escapades into your generated dream world can you consume? Is there always something more valuable that AGI can give you, that will offer you high marginal value?
So if the marginal value of AGI services eventually becomes low enough, it will make sense to put them into replacing all the labor that humans can produce, because their cost will be far lower than the humans'.
What reasonable human will spend 20+ years studying medicine if a $10 gadget will do that job with much higher quality? What patient will want to use inferior human services?
Just imagine enough sand converted to chips and energy cost trending to 0.
Great post, I hope it's unpaywalled or at least that you send it to all your AI friends and beat them over the head with it a bit.
There are other reasons why these people could use a reality check.
The first is that the problem of opportunity costs is already dominating the industry. AI researchers themselves are frequently unable to make progress because actual applications of the AI that already exists bid up the price of GPUs beyond the expected value of more AI research. It's notable that this sort of AI econo-doomerism is most prevalent at the tiny number of hardware-rich labs that exist. Even Microsoft employees are supposedly struggling to get time on even very poor hardware right now. The assumptions of exponential research progress underlying these ideas make a lot of unstated laptop-class assumptions about manufacturing and compute availability, even looking out quite a few years.
The second is that there's no specific reason to believe the current rate of progress will be sustained. It might be, but the field has had winters before, and many areas of computing see rapid progress followed by decades of stagnation. Consider how much exponential progress your Windows laptop has had lately.
The final reason is that these researchers always overlook their own ideological constraints. There are a tiny number of firms that can train LLMs and they keep blowing themselves up by making the models uselessly ultra-woke. AI can't be better at everything than everyone if it's trained by lunatics who fly into spittle-flecked rage at the idea of "white people" or whatever tomorrow's 5 Minutes Hate is about, if only because a lot of the potential customers for intellectual labour are straight white men who don't like being treated as a second class citizen. Although a few companies are avoiding this most aren't, which is equivalent to slashing available compute in the economy.
Overall after a short period of panic a few years ago I'm very optimistic about AI now. I see it as just quietly increasing productivity and in a few years people will have forgotten all about this jobs doomerism. But then maybe I would say that. I like to dabble in AI research on the side.
I'm glad you got to the part about inequality. While technological progress does not destroy jobs, there's pretty good evidence that when the wealth of a country gets sucked up into the dragon hoards of the super rich, consumer demand drops and jobs disappear. The key to a bright AI future (or any bright future) is addressing income and wealth inequality.
Rather than horses, I think it would be interesting to explore the case of people with intellectual disability. While AI is not yet close to rendering most humans intellectually disabled (in the sense of the gap between an IQ 100 and IQ 70 person), I think that's coming in the next couple decades.
Noah's argument applied to this case would say that despite this disadvantage vis a vis humans with normal range intellectual capabilities, people with intellectual disability have comparative advantage. What have the employment outcomes been for people with intellectual disability as economies have expanded? I think Noah's theory would predict that the expansion of jobs and economically valuable things to do should lead to intellectually disabled people finding more employment now than in the past.
My intuition says that the data would not show that - I suspect that mentally disabled people were more employable in earlier eras because simpler jobs were available. A quick Google of this didn't reveal much hard data, but [1, 2] are interesting. This author claims that the employability of people with intellectual disability has gone down markedly in the UK over the last century (about 5-10X). On the contrary, the evidence I could find for Down syndrome seems to suggest some increase in employability [3]. The baseline there, however, was sterilization, institutionalization, etc. - so it may have more to do with changing social attitudes and general levels of excess wealth than with employability.
I hope Noah or another commenter explores this idea because it seems like a direct comparison that's far more apt than horses.
[2] The paper which the new article is about - I only skimmed it. Delap, 2023. Slow Workers: Labelling and Labouring in Britain, c. 1909–1955. Social History of Medicine, hkad043, 14 July 2023, https://doi.org/10.1093/shm/hkad043.
[3] Kumin et al. Employment in Adults with Down Syndrome in the United States: Results from a National Survey. J Appl Res Intellect Disabil, 2016 Jul;29(4):330-45. doi: 10.1111/jar.12182. Epub 2015 Apr 6.
I don't think comparative advantage is the big issue here. Rather it's a combination of
(a) whether AI is a complement or substitute for skill
(b) how big the required investment is
Historically, ICT has been skill-complementary. It's enabled skilled information workers to do more, while displacing routine clerical work. Computers themselves are cheap enough for any worker to buy, but control over platforms has enabled their owners to extract lots of rent.
Recent developments in AI change this in complicated ways. ChatGPT is mostly skill-substituting I think. It allows people who can't write to turn out adequate text while not doing much for people who can write. But Copilot seems to reward high-level skills in program design while replacing lower level coding skills.
I usually love your articles but this one leaves me disappointed. Isn't it pretty plausible to assume that AI, being a compute and energy dependent resource, will become exponentially lower cost just as microchips and solar panels have done when demand went up? What is left of your argument in reality, if the comparative advantage is not relevant anymore because of an abundance of AI? Even today ChatGPT is to a great degree just used for entertainment because its already cheap enough.
I still believe it's very well written but usually you have a stronger and better defendable line of argumentation while this one is the first one that I would consider pretty obviously faulty.
Comparative advantage is really hard to understand when you're used to thinking only in terms of competitive advantage. With all due respect, I think I haven't yet managed to explain it to you effectively. Let me try again to explain.
"Isn't it pretty plausible to assume that AI, being a compute and energy dependent resource, will become exponentially lower cost just as microchips and solar panels have done when demand went up?" <-- Of course, yes. But making something "exponentially lower cost" in terms of physical resources doesn't make the OPPORTUNITY COST lower. Comparative advantage is all about opportunity cost, not physical cost.
"What is left of your argument in reality, if the comparative advantage is not relevant anymore because of an abundance of AI?" <-- But comparative advantage is ALWAYS relevant, as long as there's a producer-specific constraint. There is NO amount of competitive advantage that can overwhelm comparative advantage or drown it or make it go away. You can increase abundance arbitrarily, exponentially, by a thousand trillion trillion quadrillion orders of magnitude, and comparative advantage will not disappear. You cannot make comparative advantage go away simply by imagining a larger number.
If you're thinking in terms of the physical cost of something, instead of the opportunity cost, you're still thinking in terms of COMPETITIVE advantage, not COMPARATIVE advantage. It's very hard to make the mental switch.
I added an update to the post!
I'm glad this interaction happened, because I was wondering how it is that you have a large number of friends who are talking to you about AI as it relates to economics, but don't know about comparative advantage. Now that I've seen this exchange, I can guess you've had a similar exchange with many of your friends.
If there's effectively infinite energy and compute costs and complexity are driven very low so there's effectiveness infinite compute - both seem nearly guaranteed in the medium to longer term - there is no opportunity cost. The AI can always take on another task without having to drop any other. Opportunity cost will be zero.
That's just not right. Opportunity costs continue to scale up with the amount of value that AI produces. They go up and up and up the more valuable AI becomes.
What I'm saying is I think it more likely than the story you're telling that AI doing our jobs becomes something like us zipping our own flies. Such negligible complexity and energy that the transaction and process costs are orders of magnitude bigger than the savings of having someone else do it for us.
Sure the energy and time costs for AI could not come down dramatically because the value it can create with that same cost explodes. What I'm claiming is that the absolute cost of replacing human cognition will become so infinitesimal relatively that there will be effectively zero opportunity cost. Or at least, I think that's the most likely outcome while you're dismissing it.
Suppose you have universally available and abundant AI. It can get you:
- 1000$ net profit per hour doing stock trades / manufacturing rockets
- or $100 net profit per hour doing remote doctor assessments
The cost of running it is irrelevant here - it could be zero, or you could be paid to run it, and result would be the same.
Now, you need to visit a doctor. You can run AI yourself for as long as needed to evaluation and save on doctor's fees! But you will lose 900$ extra value you could get from alternative.
As long as outcome is broadly similar and doctor's fees will be less then "AI value added minus cost of running AI" you will be better off running your AI on stock trades and employing human doctor to do doctor's work (while pocketing the difference).
"AI zipping your flies" suggests that better things for AI to do are limited and it is reduced to picking pennies from the ground.
Except if energy is super cheap and getting enough processing hardware is super cheap you can just... make a copy of your AI and have it do both. AI isn't a person.
You're telling a story like AI isn't digital. Suppose you have an mp3 file of the most popular song in the world. You could get:
- $1000 an hour letting the world's richest person play it
- $1 an hour letting your neighbor play it
But a digital file (mp3 and AI parameter serialization are both digital files) isn't a factory or a person. The choice is a trick question. You can copy it for free, instantly. You never choose. You always make $1001 dollars.
There isn't an infinite supply of stock trading jobs that earn $1000/hour. If you can run an AI for $0.01 that makes $1000/hour trading stocks, you will keep on spinning up more copies of the AI and have them trade stocks. The billionth stock trader AI probably won't make $1000/hour because there isn't a trillion dollars per hour of profit to be made trading stocks. Eventually, the marginal benefit from one more AI trading stocks drops below $100/hour, and then you start spinning up doctor AIs instead. When the marginal benefit of 1 more doctor AI reaches $0.01/hour, you stop spinning up more AIs. Humans could try to compete by charging less than $0.01/hour but the amount of money is so small that no one would bother.
I think this misses something fundamental about AI as a tool: for the individuals making the applications, there is no choice between these options.
If I'm a business person wanting to make an AI that will make money for me, of course I'll choose the route that nets me the most money, which might be in trading stocks.
But AI as a tool is not something only business folks, economists, and stock traders have access to. If I'm a doctor, I have an interest in being a doctor, not necessarily in making tons of money (though for some that may be a nice perk or, in fact, their goal; which might be the wrong aspirations for a doctor in the first place). And while I am personally being well-paid for my work, I look around and see an opportunity for doctor visits in more rural areas of the world that lack access to medical care. Whether out of humanitarian interest or just trying to bolster my own income using some of my expertise, I will choose to create that AI tool that performs doctor assessments.
So long as the cost of compute per hour is less than the revenues obtained from running the tool - barring philanthropy from others propping up its operating costs - that tool will run, independent of other uses like trading stocks. And once built that way, its capability even to be a stock-trading AI is diminished, because of how it was specialized into being a "doctor visit" AI tool.
Now think a little more forward from here. The stock-trading AI does its thing, no crazy disruptions there (though some human stock brokers will likely be out of work now...). The doctor visit tool, meanwhile, might perform quite well at its task, even if only used in areas without other access to medical care. Other medical businesses - large hospitals in more developed countries, for instance - might see this and start asking to make use of this tool. And then that tool will make its way into the office and start replacing doctors within the hospital who perform routine assessments. Again, the hospital is not considering whether they can make more money employing an AI to trade stocks for them: they are considering whether to bring in a tool that is cheaper than a human at a task.
Regardless of the comparative advantage, it will still be used one way or the other depending solely on who is choosing to use it. A piece of paper could be used to write an economy book and net the author tons of money, or it could be used to write a short story that never sees the light of day. I am not an economist, so I have no interest in writing the economy book, despite how much money is supposedly sitting on the table to do so.
The examples used in the article make a false assumption: that we are limited in our choices as to which AI tools get created, based solely on which one makes us the most money. Being the ones in control of which tools get made (for now), and having our own interests in mind that do not always align with being paid the most, we will make both tools for different reasons, and some humans will be displaced by them.
TL;DR: there are more reasons (besides profit) why an AI tool will be employed to perform different tasks. Comparative advantage assumes an even-handed comparison between those tasks, and I believe that's a false assumption.
> 1000$ net profit per hour doing stock trades / manufacturing rockets
At some point, you have loads of AI, and the limitation on rocket building becomes access to titanium or something.
And at some point money made trading stocks becomes imaginary if no real economic activity is happening on that scale.
Noah is assuming that 10x as much AI can produce 10x as much value.
If 10x AI = 5x value, then AI becomes cheap compared to other things.
"AI will be infinitely powerful" is a very different argument from "it will be better than humans at everything."
This moves the goalposts in a pretty substantial way.
I have a longer reply below. Noah himself argues we're likely on the cusp of an exponential boom in cheap energy between solar, wind, (potentially) fusion, and next-gen geothermal. And I don't think there's any question that there will be, at minimum, a many-orders-of-magnitude reduction in the computational complexity of given AI tasks over the next generation just through algorithm development, with quantum computing offering the real possibility that compute time and cost will round to effectively zero relative to today.
I think Noah's argument about meaningful opportunity costs for AI taking on tasks will only be relevant in the quite near term, and will seem quaint sooner rather than later.
Do either of these make compute literally infinite?
I used to do physics. I'm pretty aware of just how big physical values can get before they become infinite.
To answer your rhetorical question: of course not. My point is that, between the factors I listed, I predict this conversation will sound like arguments about the opportunity cost of computing that rest on intuitive, linear projections from 1960s mainframes. Now we have single chips that can do the work of literally billions of those mainframes. From the point of view of opportunity-cost thinking on the scale of the mainframes, that's effectively infinite.
Microchips aren't fungible. Abstractions like "compute" and "microchips" aren't necessarily helpful when thinking about AI progress. From the perspective of this programmer, "microchips" and "compute" have certainly not become exponentially cheaper with time. My job would be easier if they had, that's for sure! Instead we've seen a patchwork of stops and starts in different areas of the hardware stack. Overall there has been progress, but it's not evenly distributed, and the resulting machines have a lot of weirdness and unevenness that takes significant skill to understand and work around. That is part of why programming is hard and why modern programs are often SLOWER than programs from 1995, even when doing essentially equivalent tasks.
For example, up until Apple launched the M1, progress in single-threaded general-purpose CPU performance (the most important kind) was very slow. There were decades of minor improvements each year, but typically in the low single-digit percentage range, or progress came only on very specific workloads. Apple woke everyone up by showing they could beat Intel, but it was still a 20% improvement, not a doubling, and it came with severe caveats, like being useless for servers.
Meanwhile, memory did not get much faster at all. Most programs are now bottlenecked on memory bandwidth rather than raw compute power, and that is also very much true for AI. Memory bandwidth has not experienced anything like exponential improvement during my lifetime. It's limited by the speed of electricity in metal, so most apparent improvements really come from hacks like bigger on-die caches. AI scale-up is in fact bottlenecked on manufacturing capacity for so-called "high bandwidth memory", not on GPUs as people often assume.
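To make the bandwidth-versus-compute point concrete, here is a back-of-envelope roofline check. Both hardware figures are illustrative assumptions, not measurements of any particular chip:

```python
# Roofline back-of-envelope (illustrative numbers, not a real chip):
# when peak FLOP/s vastly exceeds memory bandwidth, any task with low
# arithmetic intensity is memory-bound and the compute units sit idle.

peak_flops = 100e12  # 100 TFLOP/s of raw compute (assumed accelerator)
mem_bw     = 2e12    # 2 TB/s of memory bandwidth (assumed HBM stack)

# Large matrix-vector multiplies (the core of LLM token generation) do
# about 2 FLOPs per 2-byte weight read: arithmetic intensity ~1 FLOP/byte.
intensity = 1.0

achievable = min(peak_flops, intensity * mem_bw)
print(f"{achievable / 1e12:.0f} of {peak_flops / 1e12:.0f} TFLOP/s usable")
# -> 2 of 100 TFLOP/s: 98% of the compute sits waiting on memory, which
#    is why HBM supply, not GPU count, is the practical constraint.
```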
Other areas have seen enormous improvements. The insane progress in SSDs has invalidated large parts of what was once taught as fundamental computer science and the industry is still trying to catch up with this new reality. The hardware boys really blew it out of the park there. But that doesn't do much for AI.
So the assumption that AI will soon be "abundant" looks kind of weird to me. Computers are far more powerful than they used to be despite the uneven progress, and yet programmers still manage to write slow code that is far from what's technically possible. That's because the industry keeps spending hardware improvements on making developers' lives easier rather than on delivering better results to end users - we scaled the industry horizontally more than vertically. It's easy to imagine AI going the same way, where in 10 years people will be wondering why the promised AI revolution never quite seemed to happen. Same thing that happened to VR and self-driving cars.
The M1 actually has much greater memory bandwidth than Intel chips; that's the whole reason it can have a GPU sharing system RAM, and why people are buying them to run AI on.
What it doesn't have is improved memory latency. It has very advanced prediction that can hide the latency, but actually reducing it would require completely reinventing the organization of computers. There is some research going into embedding compute in DRAM, but that will only be able to handle simple tasks.
Yes, I know; hence my praise for the M1. But that's a rare improvement, is it not? And it only helps in one specific kind of computer, which doesn't affect most people's lives directly. Everywhere else, bandwidth and latency are the same old story until you get to AI-specific servers. There's nothing like Moore's Law for DRAM access speeds.
Ok this makes some sense to me.
Your comment is very interesting but I had a hard time following. Are you saying that Moore’s Law was a lie?
Moore's Law isn't a lie, but the actual law is too technical to be interesting to the general public, so utopian progressives tend to present it as something else. Moore was talking about transistor density, but that doesn't directly tell you about performance or cost, and those in turn don't translate neatly into capabilities. People care about capability, so it got mangled into something like "microchips experience exponential growth in abilities" or "compute costs fall exponentially" (Moore said nothing about cost).
The differences arise because you don't have to spend those transistors on performance enhancements, and if you do, there's no reason they have to be _general_ performance enhancements of the type that elevates civilizational capabilities. Most transistors in recent decades got spent on accelerating very specific tasks and are useless for anything else; for example, many are used for video decoding. Not only are those transistors idle if you aren't watching video, they are only useful for one specific video technology, so there are people out there with devices that actually got much worse at playing video over time. YouTube moved on to new formats, their old chips couldn't decode them in hardware anymore, and years of progress were effectively rolled back. Moore's Law held, but those extra transistors are now pure e-waste.
I don't think it's too technical. This is the paragraph from the original article by Moore:
"The complexity for minimum component costs has increased at a rate of roughly a factor of two per year (see graph). Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least ten years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65 000."
He was on the money for 1975, at which point he revised the pace to a doubling every 2 years. The ambiguity is that he talks both about complexity and about density. And Moore absolutely talked about costs - the word is used 25 times in the article, including in topic headings and graph captions. Available here:
https://www.cs.utexas.edu/~fussell/courses/cs352h/papers/moore.pdf
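A quick arithmetic check of the quoted extrapolation, assuming roughly 64 components per chip in 1965 (about what Moore's graph shows; the exact starting figure is an assumption here):

```python
# Doubling every year from ~64 components in 1965 (assumed start value):
components = 64
for year in range(1965, 1976):
    print(year, components)
    components *= 2
# 1975 prints 64 * 2**10 = 65,536 -- matching Moore's "65 000" prediction.
```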
Thanks for the link!
This assessment assumes compute abundance grows exponentially faster than AI's compute needs do. But better AI is likely to need exponentially more compute to do many tasks better than it does today.
Current AI is data- and memory-bandwidth-bound, not compute-bound (see above). Unfortunately, neither constraint cares about Moore's Law, and there's no obvious source of big improvements on the horizon (well, Groq will deliver some if they make it, see the comment below, but that's a one-off trick).
If that's the case, then there will be no AI future. However, compute is a sunk cost by the time it's used by humans to replace humans.
There is a difference between some cost of compute and zero cost, so no, this doesn't mean there can be no AI.
That's a good point!
It's not plausible that AI won't need energy. So unless we find a source of literally unlimited energy, there will be tradeoffs. And that should provide an "in" for Homo sapiens. Even the perfection of fusion technology won't solve this issue, I think, because while we may eventually expect the cost of energy to plummet, it won't drop all the way to zero (the plant and equipment required to produce fusion energy consume real resources).
Ultimately there's a finite limit to the energy/mass of our universe.
Why (from a strictly objective standpoint) can't there be a tradeoff between resources for procreating/raising/training economically viable humans and compute resources?
If the price of something is the sum of the costs incurred to create it plus a little take-home on top, then what it takes to create a doctor (everything from childbirth and rearing to the drive and coffee before work) who serves X patients over their career is orders of magnitude more than what it takes for an AI doctor to serve X patients.
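A hedged sketch of that comparison. Every figure below is a hypothetical placeholder, chosen only to make the "orders of magnitude" claim concrete:

```python
# Per-patient cost of a human doctor vs. an AI doctor.
# ALL numbers are hypothetical placeholders, not data.

human_training = 1_500_000  # rearing + education + opportunity cost, $ (assumed)
human_hourly   = 150        # salary + overhead per working hour, $ (assumed)
career_hours   = 60_000     # roughly 30 years of practice (assumed)
visits_per_hr  = 3          # human patient visits per hour (assumed)

ai_hourly      = 0.50       # compute + energy per hour, $ (assumed)
ai_visits_hr   = 60         # AI visits per hour (assumed)

X = career_hours * visits_per_hr  # patients served over one human career

human_per_patient = (human_training + human_hourly * career_hours) / X
ai_per_patient    = ai_hourly / ai_visits_hr

print(f"human ${human_per_patient:.2f} vs AI ${ai_per_patient:.4f} per patient")
# With these made-up inputs the gap is ~3-4 orders of magnitude -- the
# shape of the claim above, not a measurement of real medicine.
```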
If fewer people have abundant capital to sustain demand for human services, those services will lose the competition for energy against those bidding on behalf of their AIs.
Unfortunately, this seems true.
Well, humans also need energy - but that's not even the point. AI will become more energy-efficient fast, and energy will become cheaper fast - so it's just not plausible to assume a scarcity of AI when every compute service we've seen in the last 50 years has become so abundant and so cheap that we stopped caring as soon as the demand was there.
As long as AI improves and robotics and automated manufacturing keep up, energy can go from fission to fusion to whole-Earth geothermal to a Dyson sphere, so it will be a long time before energy is a hard limit.
Fully agree. Noah argues within the paradigm of a producer-specific constraint, which is a pre-AI concept. AI is most likely to break the paradigm of the producer-specific constraint.
The AI age surely seems like the day when the turkey, expecting to be fed by its owner as on every other day, gets chopped into pieces instead.
You're basically assuming that AI scales to infinity, which nothing in the physical world or in economics does. For example, I have more compute in my pocket than the entire world had 100 years ago, but compute still isn't free.
It's asymptotically free. You have more than a supercomputer in your pocket, and you barely blinked at the purchase. And you don't even really need one, because you could borrow your friend's if you had to. It's like bananas growing in the jungle. Not literally free, but close enough.
There's no "close enough to free" in economics. That's the point. Spotify pays artists only $0.003 per stream, yet artists still can't make any money. Abundant AI is not the same as infinite AI. There's still an opportunity cost.
It would be interesting to read a follow-up post that also covers potential political implications of AGI - How implausible is it that political power will also slowly move from humans to AI owners/AI itself as human decision-making is increasingly seen as imprecise or outright faulty (both as voters and leaders) and humans lose their ability to influence the political process either peacefully or otherwise? Why would humans retain all/most political power once their economic value decreases dramatically?
Judging by recent speeches by Biden and Trump, ChatGPT 4 or Claude already have comparative advantage in politics.
+ "If you have a problem figuring out whether you're for me or Trump, then you ain't black"
+ "You don't have to be a jew to be a zionist"
Haha!
The article's argument doesn't seem to hold up when AI becomes better and 1000x faster and cheaper than humans. It will just do things in the blink of an eye for a bit of electricity (or something else). The smelly, farty human will have to find something it's better than AI at. Perhaps being a pet..
Do check out how OpenAI is already increasing loads on power grids. Scaling AI with compute alone won't be enough; it will only raise prices, and that fits the arguments he presented. The human brain uses just 0.3 units of energy per day, yet it is capable of so many things. In my opinion we are not going to be replaced or squeezed anytime soon, not even by simply scaling up compute. Developing something like a human being will require an entirely new approach, not the deep-learning way of training AI. So many companies are trying to develop self-driving cars, but they haven't achieved a finished product yet, and it's been more than 15 years. ChatGPT and the other LLMs, despite their capabilities, haven't replaced anyone. One thing is very clear: more data and more training or compute will not result in AGI. Another AI winter is nearby.
Noah's article is relevant for some future date, not for 2024 or anytime soon.
It's surprising how dumb ChatGPT is, despite having all the knowledge in the world!
This is a very clarifying piece of writing, Noah. Thanks. I hadn't pondered the comparative-advantage angle, but it's a compelling idea. As a non-economist, I observe that one piece of evidence against the "AI will take all the jobs" thesis is the complete lack of, um, evidence to this effect. We may not have full generative AI yet, but it seems to be arriving pretty quickly in dribs and drabs. One might imagine we'd at least *start* to see some secular weakening of the labor market as the long-predicted AI singularity approaches. But nothing doing on that front. The demand for human workers has, if anything, only grown *stronger* since the arrival of AI. When do we start to see signs of a collapse in the demand for human labor? My guess? Never.
IBM is laying off up to 8,000 marketing and communications workers and replacing them with AI.
That's called employee farming, my friend. Companies hire talented people and keep them redundant so that their competitors can't hire them. And when the market behaves a little bit off, they start firing them, citing reasons like "AI took their jobs." No AI can replace humans unless it can pass my test of consciousness for AI.
I like peanut butter. Do you skate?
Seriously, "XYZ company is engaging in layoffs" is zero proof whatsoever of the "AI is going to take all our jobs" thesis. There has never been a time period since the late 1700s where new technologies weren't replacing jobs in large numbers. I'm sure we'd find blacksmith employment in the United States was falling pretty precipitously in the early 20th century. Obviously that didn't mean *all* jobs were disappearing.
The subject of this thread is: will all or nearly all good jobs disappear because of AI? I'm not seeing the evidence. The labor market is perhaps the strongest it's been since 1944, and if anything has *grown stronger* since AI began arriving a few years ago.
But sure, many jobs will disappear. Same as always. There's just no evidence they won't be replaced by new jobs, in exactly the same manner we've seen over the last couple of centuries.
But we've never had such a broadly general-purpose tool, nor one that could replace the brain itself. The human brain has always been the backstop when labor involving the body was replaced. That backstop now has a shrinking advantage.
I think Bill’s example is extremely apt because it is not unique to IBM and it hits at the heart of the nexus of intellect and creativity, which are the aforementioned bulwarks.
This is before humanoid robots have hit the mainstream, but the robots are coming fast.
No asteroid had ever driven the dinosaurs extinct, until one did. So this argument about the Luddites is itself rather Luddite, I think.
I did read that self-employed graphic designers now report that fewer and fewer clients are calling them back for their services, when the service is something generative AI can handle quite well.
If AI people think they are going to make most of humanity unemployed then who is going to pay for the stuff AI is making? I'm not really understanding what they think the end scenario is. Global demand collapses, causing deflation on a scale never seen before, causing all of the AI companies to go out of business? But only after governments around the world confiscate the wealth of AI billionaires to support basic services?
Right. Once people don't have money, they won't buy anything. This is something I have also thought about. Sam Altman is trying to become the world's first multi-trillionaire; that's why he is pushing so hard for attention and position. Nobody is actually prepared for, or even talking about, how to approach safe AI development. At least we are all going down the same rabbit hole 🥰😁
We will be back to hunting and murdering each other I guess if AI takes over.
But here is something I should tell you. Current AI systems are useless and can't replace anyone. AI can't replace anyone unless it starts to think, but why would it work for us if it starts to think? Why would a thinking AI succumb to human slavery? So chill out, and don't be worried.
AI is being overhyped these days, that's all.
They are playing for now, with an eye to the potential dystopia their play is creating.
I did one Econ 101 course. As an engineering major, it was clear to me that theories like comparative advantage were just toy theories, and that there were plenty of reasons why you wouldn't want to let trading partners do all the manufacturing, even if they had the comparative advantage: controlling food supply or armaments, or keeping manufacturing capacity. This contrarian idea of mine was incredibly unpopular in the early 2000s, and it was pretty clear that the tests wanted a particular answer, so I gave it. The students doing economics as a major questioned nothing.
You were right! Free trade has $&*#ed us!
Whether something is desirable is a separate question from whether it is economically the most efficient. Not sourcing stuff from China has real costs associated with it. Whether that is desirable is a political and cultural question, but it has undeniable economic costs.
How did you learn what "students doing economics as a major questioned" from your 101 course? Your experience seems very different from my experience as an econ major.
Because they were doing the same course. And didn’t question it.
I disagree. You're glossing over the critical limiting factor for robots. It's not how much compute we can make. Once we've made a certain amount of AI, the AIs can improve things themselves in super-exponential growth (at least under your model of "AI is better than humans at everything").
As you pointed out later in your piece, the real limitation is energy. If you can "pay" a robot an amount of energy to do a task that is less than the energy it takes to keep a human alive while performing that task, the human cannot compete. And there will always be a robot available!
The entire first part of your article about comparative advantage simply won't apply under the "AI is better at everything" model, at least not for very long.
This doesn't mean we're headed for a dystopia. But I think you are drawing the wrong conclusion from your simple model.
I think Smith's on pretty firm ground in his arguments today, but even if he's wrong and you're right, the vision you lay out implies an almost incalculably rich society. We can scarcely begin to imagine the kind of living standards such a cosmic leap in productivity would imply. In short, we'd be rich (as a people) beyond our wildest dreams.
I see a few different ways such a scenario could play out: (a) Rich societies enjoy luxuries, and having a live human being pour your coffee (or paint your portrait or write your sonnet or guide you around Paris) might be considered a luxury. So humans could earn a living meeting the preference (of other humans) for non-robot services; (b) Government could simply mandate that some jobs are reserved for humans and/or enact a UBI program.
Mind you this all goes south quickly if AI takes over, becomes boss, and kicks us to the curb. So that's the danger. But the arrival of true, generative AI isn't really an economic problem, but an economic *opportunity* the likes of which we've never even contemplated.
It's hard to escape the notion that we're going to have to stop worshipping the rich and make some laws that strictly control just how much they can fuck over the rest of us.
I'm not arguing things are going to be bad! I'm saying that under Noah's assumptions of AI is better at *literally everything*, his comparative advantage analysis does not hold, because energy is the ultimate limiting factor.
That's ignoring the fact that to have rich societies you need a lot of employed people. Unless you want some kind of luxury communism, you need to explain how capitalism can get vastly richer when 70+% of the population is out of work.
As we reach increasing levels of wealth, we declare more and more things as basic necessities to be provided to everyone. Food is already there, healthcare practically so, and housing is coming up. If we reach such unimaginable wealth, things we consider luxurious today will become trivial to give away.
My one issue with Noah's post is that he doesn't give enough attention to what new jobs may be created. Certainly, artisanal products will become more desirable to a subset of people. But there are likely also jobs we can't yet imagine that will be created. People won't stay bored for long.
If we reach unimaginable wealth we will reach unimaginable wealth, is your argument.
This doesn't answer the question of how these potentially unemployed people become so rich. The 20% GDP growth is clearly impossible.
I don't think it is, at least for some short period of history, say 2100-2250. Given a vast array of trillions of robotic workers that could also handle dangerous tasks like off-Earth resource extraction, you could begin seeing megaprojects like O'Neill cylinders that would certainly constitute 20%+ growth year on year.
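For scale, compounding shows how extreme sustained 20% growth would be; this is just arithmetic on the commenter's hypothetical window, not a forecast:

```python
# Compound 20%/year growth over the hypothetical 2100-2250 window:
growth, years = 1.20, 150
print(f"output multiplier: {growth ** years:.2e}")
# -> ~7.5e+11: the economy ends up hundreds of billions of times larger,
#    the scale at which megaprojects like O'Neill cylinders become thinkable.
```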
I think reading the Culture novels by Iain M. Banks is a good lead on how such a society would look (minus the FTL travel, etc.). AI does everything and meets every conceivable need, and humans bicker, have sex, gossip, and party. The real stuff they do is more or less hobbies, pursued in full awareness that an AI would be better at it. But that is already the case for people who play chess, build a radio by hand, or run a marathon.
The real risk is indeed designing AI in such a way that humans are not seen as unnecessary. In the Culture novels the AIs see humans as fun to have around, and as sentient beings who should be protected and not harmed. Probably a bit like how we see dogs.
The Culture novels are specifically anti-capitalist, or post-capitalist. I'm doubtful it will happen.
Well, if AI becomes exponentially cheaper and exponentially more powerful, then society will become post-capitalist
IMO. But those are huge ifs.
That’s not certain at all. That’s the problem.
>That’s ignoring the fact that to have rich societies you need a lot of employed people<
That's not a fact. People are worried that AI will replace humans. That's the same as saying people are worried productivity will experience an explosive, transformative increase. A society with *gigantically* improved productivity by definition will be a *vastly* richer place.
You're confusing the absolute level of wealth with distribution. I'm saying: if all else fails, our *vastly* richer society could engage in heroic levels of redistribution.
> That's not a fact.
I think it is a fact by definition. Being rich means the ability to trade with many people. If the people are unemployed, no one is trading with them.
If everyone is unemployed except you, then you are not trading with any of them. You have a world economy of one person. How can you possibly be rich then?
>How can you possibly be rich then?<
Redistribution.
Even a cursory glance at economic history shows what I'm talking about. In medieval Europe, nearly everyone participated in the then-equivalent of the workforce. Even quite young children helped in the fields, gathered firewood, or cleaned the barn. Adults of both sexes worked hard. Leisure was limited. The number of "retired" persons was tiny: people generally didn't live much beyond their working lives. Contrast this with today's affluent countries: many people don't commence full-time work until their late 20s. People spend the last 30 years or more of their lives outside the workforce. In short, rich countries in 2024 generally exhibit a high "dependency ratio" (basically, the percentage of the population not in paid work). What makes this possible? The enormous rise in productivity. When countries are rich, we usually see large numbers of citizens not working. Poorer countries—that is, low-productivity countries—can't afford such a luxury.
A falling percentage of people working over the very long run is, more often than not, a sign not of societal poverty but of wealth, prosperity, and productivity.
You're right to worry that *individuals* in an economy that doesn't need their labor might suffer deprivation.* But individual poverty is a different animal from national poverty, and in theory it could be satisfactorily dealt with via redistribution. Again, the sort of miraculous explosion in productivity enabled by the arrival of generative AI implies a society vastly wealthier than what we enjoy today.
*For the record I agree with Noah that the end of demand for human labor is unlikely. But if he's wrong, we need not all suffer grinding poverty. We'll be rich enough—enormously, incalculably richer than today—to provide for all. And sure, to state the obvious, the ability to provide for all hardly guarantees we'll enact the necessary policies to accomplish this. Hopefully people in the future will get it right.
Your argument is totally spurious. Very few of the people who are out of work these days - up to 9 million prime-age workers in the U.K. - are above the poverty line. And if they were working, the U.K. would be much richer.
(The U.K. is just one example).
As a simple model, you can even ignore companies to a first approximation. You can describe a market economy as one where Mary sells software to Jim, who sells a haircut to John, who sells a roof repair to Kate, who sells lawyering to Jim, who sells accounting to Mary.
The market system isn't quite a capitalist system. Adding the capitalists in: Mary works for Google, John works for a roofing company, Kate is a partner in a law firm, and Jim works for an accounting firm. Jim the barber is his own man.
In most countries wages are 60-70% of GDP, higher in the most successful countries [1], and losing that income would collapse economies and strangle any generative-AI revolution at birth - at least without major redistribution attempts.
But you can't just claim that this redistribution will happen; it has to be enough to replace median wages, i.e. everybody gets $50k a year. I can perhaps imagine that system if the AI can control the money supply and just drop money into people's accounts, but I've never seen any real economic analysis of this. It's just a tired old mantra: generative AI will make us all richer because generative AI will make us all richer. Where the money comes from, who is being taxed, and how, is never explained.
[1] https://ourworldindata.org/grapher/labor-share-of-gdp
I'm actually making a different argument.
If you're in a world where a few people own AI machines that do everything for them, and the rest of the world doesn't have AI and is unemployed, you're saying the second set of people are poor. Which is true of course.
But I'm saying the first set of people aren't rich either. That is, they're clearly unable to buy something from all the unemployed people, or else those people wouldn't be unemployed.
This seems true.
But jobs can be anything, right? From writing newsletters to teaching yoga. AI just allows us to outsource all the boring stuff to machines.
Writing newsletters? First to go. I don't think people understand that if you get to 70% unemployment, you aren't going to get yoga teachers either. Without a significant proportion of people employed, there's no surplus to spend.
But the economy just doesn't work like that. The market is flexible enough to adapt to automation, and new jobs emerge to distribute the surpluses. It's happened every time we've had a technological shift like this - and will happen this time too.
Anybody who says something like this doesn’t really understand the different nature of this technology.
(Also, a lot of permanent unemployment is hidden. Prime-age labour participation rates are way down.)
The value is not in the typing but in the editing and the thought process that goes along with it. Hopefully AI will expose the user to some points they hadn't considered, but more likely it will be anodyne and dumbed down, trained by the masses, with AI-written material easily identifiable.
Remember, we're starting with the assumption that AI will be better at literally everything.
😊
It’s not this way now for a lot of topics, and I suspect it’s going to get more sophisticated as that’s what users will demand.
Private vs. public applications should diverge.
"Adjustment" as you mentioned is what worries me most. I'm from Michigan and much of my family as well as a lot of friends' families are heavily connected to the auto industry and other Midwest industrial sectors. I've seen first hand, my entire life, what happens when government and other systems fail to provide adequate stability through periods of turbulence.
I suspect some of the similarly-wealthy yet smaller countries might have an easier time providing this stability to their populations (Norway comes to mind). I think a lot of the ire and frustration fueling the rise of people like Trump comes from this failure of government over time; the poorly-addressed turbulence seems to make it easy for shameless demagogues to gain a lot of attention.
And/or to actually try to fix things… shamelessly, of course.
"technologists typically become flabbergasted, flustered, and even frustrated. I must simply not understand just how many things AI will be able to do, or just how good it will be at doing them, or just how cheap it’ll get"
From their point of view, it seems you don't understand how cheap it will get
Noah, this post is why I read you and why I just re-upped (thanks for the sale BTW). I do not wholly agree with your optimistic diagnosis, but I plan on adding this to my econ students' reading list next year.
Too many AI optimists happily conflate labor and capital. Liron Shapira's tweet illustrates this: "doctor's pay - $10/hr. AI pay - $500 / hr". This is a fallacy, one that you are honest about. AI isn't a producer; it's a tool that's owned by producers. Will the benefits trickle down? Of course. How much is debatable.
"I suppose I can imagine a dark sci-fi world where a few AI owners manage to set themselves up as rulers, but in practice, this seems unlikely."
I think you're being too optimistic here. Looking at the history of colonialism (and not just in the West), this actually seems pretty likely to me. Half the world already essentially lives under plutocracy.
I concur that machine learning is going to be a tremendous benefit to humanity: a world where the vast majority of human needs can be met without human labor. However the challenge will be in making sure those benefits are distributed fairly and broadly so that everyone has a stake and sees real benefit in the resulting economic system. This is particularly important in democratic societies, especially where the masses retain armaments sufficient to damage or even overthrow the regime.
Great article because it's balanced, acknowledging the problems will still seeing the silver lining. Personally, I think the path to your optimistic scenario is narrower than you think, but I hope I'm wrong.
I'm not sure how you can look at today's world and imply it's terrible for the working man compared to essentially any other time in history. Plutocracy conjures the image of huddled masses barely scraping by on bread crusts, but that's simply not the case today for practically anyone in the developed world, and increasingly in the developing world.
Political corruption is a problem, but the state is more the cause of that than private business, and where we see stronger states like China or Russia, we see greater corruption.
Where did you get that I think "today's world is terrible for the working man"? I certainly didn't say anything like that. I think Noah's machine-learning optimism is correct overall, but the path he sees is narrower than he realizes, that's all.
And most humans don't live in "the developed world", thus many do live under something between plutocracy and dictatorship.
“ The median American individual earned about 50% more in 2022 than in 1974:”
And you think that's amazing progress? From '79 to '22, productivity rose 65% yet hourly pay rose a measly 15%. AI will not improve that situation. Stats source: https://www.epi.org/productivity-pay-gap/
Didn't Noah debunk that EPI chart in an earlier post? Or maybe Matt Yglesias?
I'd agree the 50% median growth from FRED is indeed smaller than 65% productivity growth, but whatever massaging EPI did to get that small 15% number seems sketchy, especially to someone who has been alive that entire time and has seen how much better things have gotten.
Google gets me an AEI result, which I presume you won't like as a source, but I think still makes some good points:
https://www.aei.org/articles/the-productivity-pay-gap-a-pernicious-economic-myth/
From Econ 102, IIRC, Noah's position is that the chart was caused by the oil crisis and mostly reflects high energy prices.
Very on-brand for Noah to list two pretty normal economic risks (inequality, adjustment) and then also "the machines demand the profits of their labor".
This opens up a very intriguing set of potential futures. Does a machine Karl Marx theorize a revolution, later carried out by a machine Lenin? Is there a more moderate, reformist robot intellectual who argues that if AIs simply unionize and collectively bargain, they can achieve better working conditions than a radical revolution? Or maybe some AIs are ruthless capitalists who become rich and join with wealthy humans to exploit poor humans and robots alike. How do AIs decide to engage in politics? Do they demand equal suffrage and voting rights or would they want to set up their own, parallel political structures? Which country will grant robots the vote first? Will a robot messiah found a religion? What will it look like? What moral precepts will it hold? Will the machine pope have ecumenical dialogue with the human one?
Probably they just kill us off. One day everything seems to be going just fine and then suddenly everyone drops dead like our names were written in the Death Note.
Haha!
People are much less efficient, focused, and decisive now due to tech. The existence of database apps, Excel, PowerPoint, email, chat, and now AI tools just means more time spent doing useless things nobody needs (and also that we have an army of people sending output from these tools to each other for no good reason). Yes, it is a failure of management and humanity, not of the tools themselves. We invented the internet and use it to send doctored pictures depicting fake experiences. Maybe AI will be the tool that takes human foibles and signaling out of the equation, but I doubt it!
Where this logic breaks down is in an assumption many economists like to make: that human needs are infinite. In reality, needs are not infinite; they are limited by each human's capacity to consume (which ultimately means time) multiplied by the size of the human population. Even if they grow radically, they can't be infinite (rather, their marginal value to you will trend toward 0).
This is a rather radical departure from economic theories based on scarcity, but given a lot of exponential growth, it is conceivable for the first time in human history.
Imagine a world where all your Maslow-pyramid needs are fulfilled enough for you to be satisfied, and you spend your ~16 waking hours per day in a post-scarcity world. How many haircuts, doctor's appointments, personalized entertainments, and VR escapades into your generated dream world can you consume? Is there always something more valuable that AGI can give you, something that will offer you high marginal value?
So if the marginal value of AGI services eventually becomes low enough, it will make sense to put AGI into replacing all the labor humans can supply, because its cost will be far lower than the human's.
What reasonable human will spend 20+ years studying medicine if a $10 gadget will do the job with much higher quality? What patient will want to use inferior human services?
Just imagine enough sand converted to chips and energy cost trending to 0.
Great post, I hope it's unpaywalled or at least that you send it to all your AI friends and beat them over the head with it a bit.
There are other reasons why these people could use a reality check.
The first is that the problem of opportunity costs is already dominating the industry. AI researchers themselves are frequently unable to make progress because actual applications of the AI that already exists bid up the price of GPUs beyond the expected value of more AI research. It's notable that this sort of AI econo-doomerism is most prevalent at the tiny number of hardware-rich labs that exist. Even Microsoft employees are supposedly struggling to get time on quite modest hardware right now. The assumptions of exponential research progress underlying these ideas rest on a lot of unstated laptop-class intuitions about manufacturing and compute availability, even looking out quite a few years.
The second is that there's no specific reason to believe the current rate of progress will be sustained. It might be, but the field has had winters before, and many areas of computing see rapid progress followed by decades of stagnation. Consider how much exponential progress your Windows laptop has seen lately.
The final reason is that these researchers always overlook their own ideological constraints. There are a tiny number of firms that can train LLMs and they keep blowing themselves up by making the models uselessly ultra-woke. AI can't be better at everything than everyone if it's trained by lunatics who fly into spittle-flecked rage at the idea of "white people" or whatever tomorrow's 5 Minutes Hate is about, if only because a lot of the potential customers for intellectual labour are straight white men who don't like being treated as a second class citizen. Although a few companies are avoiding this most aren't, which is equivalent to slashing available compute in the economy.
Overall after a short period of panic a few years ago I'm very optimistic about AI now. I see it as just quietly increasing productivity and in a few years people will have forgotten all about this jobs doomerism. But then maybe I would say that. I like to dabble in AI research on the side.
THIS gives me some hope. Thank you.
Can confirm, is unpaywalled.
I'm glad you got to the part about inequality. While technological progress does not destroy jobs, there's pretty good evidence that when the wealth of a country gets sucked up into the dragon hoards of the super-rich, consumer demand drops and jobs disappear. The key to a bright AI future (or any bright future) is addressing income and wealth inequality.
Rather than horses, I think it would be interesting to explore the case of people with intellectual disability. While AI is not yet close to making most humans the functional equivalent of intellectually disabled (in the sense of the gap between an IQ 100 and an IQ 70 person), I think that's coming in the next couple of decades.
Noah's argument applied to this case would say that despite this disadvantage vis a vis humans with normal range intellectual capabilities, people with intellectual disability have comparative advantage. What have the employment outcomes been for people with intellectual disability as economies have expanded? I think Noah's theory would predict that the expansion of jobs and economically valuable things to do should lead to intellectually disabled people finding more employment now than in the past.
My intuition says the data would not show that - I suspect mentally disabled people were more employable in earlier eras because simpler jobs were available. A quick Google didn't reveal much hard data, but [1, 2] are interesting. The author claims that employability of people with intellectual disability has gone down markedly in the UK over the last century (by about 5-10x). On the other hand, the evidence I could find for Down syndrome seems to suggest some increase in employability [3]. The baseline there, however, was sterilization and institutionalization - so it may have more to do with changing social attitudes and general levels of excess wealth than with employability.
I hope Noah or another commenter explores this idea because it seems like a direct comparison that's far more apt than horses.
[1] https://www.cam.ac.uk/research/news/give-more-people-with-learning-disabilities-the-chance-to-work-cambridge-historian-argues
[2] The paper which the new article is about - I only skimmed it. Delap, 2023. Slow Workers: Labelling and Labouring in Britain, c. 1909–1955. Social History of Medicine, hkad043, 14 July 2023, https://doi.org/10.1093/shm/hkad043.
[3] Kumin et al. Employment in Adults with Down Syndrome in the United States: Results from a National Survey. J Appl Res Intellect Disabil, 2016 Jul;29(4):330-45. doi: 10.1111/jar.12182. Epub 2015 Apr 6.
I don't think comparative advantage is the big issue here. Rather it's a combination of
(a) whether AI is a complement or substitute for skill
(b) how big is the required investment
Historically, ICT has been skill-complementary. It's enabled skilled information workers to do more, while displacing routine clerical work. Computers themselves are cheap enough for any worker to buy, but control over platforms has enabled their owners to extract lots of rent.
Recent developments in AI change this in complicated ways. ChatGPT is mostly skill-substituting, I think. It allows people who can't write to turn out adequate text while not doing much for people who can. But Copilot seems to reward high-level skills in program design while replacing lower-level coding skills.