Your “AI giving humans superpowers” point seems to run into the same issue you identify with the other arguments: the market isn’t going to optimize for human participation - it’s going to optimize for profit. This isn’t a charitable exercise. If replacing people is cheaper or more efficient than augmenting them, that’s what will happen.
I don’t see a clear path where large numbers of people remain economically valuable - especially looking out a few decades. And trying to contort the system to make humans feel useful seems likely to create more problems than it solves. The real conversation is about redistribution.
A welfare state does not solve the problem of AI eliminating most jobs. People need a sense of belonging and contributing. Putting people on the shelf, so to speak, will just encourage lots of violent pushback. This would not end well. One can only hope that AI produces more jobs than it eliminates. I am reminded of Cordwainer Smith's distant vision of humanity, where our species is on the brink of extinction due to too much safety and lack of purpose.
Have you read Vonnegut's Player Piano? I haven't looked at it in 50 years, but I remember him describing a world with a small cadre of techs and everyone else being forced into either military or government make-work.
This is just new, imprecise messaging to replace the old imprecise messaging, but with a misleadingly positive spin. AI was never going to replace all jobs, just the good ones: high skill, high wage office jobs. Jobs which are dirty, unpleasant, low skill, and/or poorly paid (frequently all of the above) will expand in total number. Over the long term, the labor market must clear at some wage, though. Even if it’s possible to build an AI-driven robot to change bedpans in an old folks’ home, doing so will probably be more expensive than paying some otherwise-unemployable former software engineer $8/hour to do so. AI driven tax revenue or capital gains will provide the funds to pay this new bedpan-changer (depending how the old folks’ home is funded). Still, this isn’t exactly the labor market from Star Trek.
The last years of her life, my Grandma had a special mattress that would automatically inflate in different parts to adjust pressure on the body to prevent bed sores. Rotating a bedridden person every few hours to prevent bedsores used to be a very labor-intensive process that required a physically capable, dedicated nurse. My point is there is definitely some innovation happening in nursing as well.
That's pretty much what I'm finding. I bought a subscription to Claude Code so I could experiment with it. I found it a useful tool to look for bugs -- it can read a lot of code fast and find bugs without getting bored checking for errors.
For actual coding, I found it marginal. It always made mistakes, which means that I always have to review everything it does and then reprompt it. After about 3 cycles, I realized that for a good number of changes, it's easier and faster to do it yourself.
On another topic, one big class of job you missed that will be hard to automate is enterprise sales. People want someone who will pick up the phone after you make a big sale, and connect you immediately with someone who can address whatever issue you're running into.
I decided to cancel my subscription at the end of the month.
BTW, I think that pricing is likely to be the biggest risk for these large AI companies. Right now, there's lots of demand because everything is subsidized by a significant factor. When those subsidies go away, you'll see what functionality people will actually pay for. Or as people said during the dot com bubble -- "You can see who's swimming naked when the tide goes out."
What makes this interesting is that you are not just noticing a messaging change. You are asking what pressures forced it.
That is the better question.
For a while, too much of the AI industry sounded like it was trying to sell the public on its own economic redundancy and then acting surprised when people did not greet that pitch with balloons and cake. So yes, this newer “AI will create jobs and augment humans” line is obviously better politics.
The harder question is whether it is only better politics.
I also think your distinction between “warning” and “threat” is one of the sharpest parts of the piece. If people believe AI progress is an inevitable force of nature, then the job-loss rhetoric sounds like grim realism. If they believe this is a set of choices being made by companies and researchers, then it sounds a lot more like someone smiling while describing your replacement.
That is a very different emotional register, and it matters.
What I’m still not convinced of, and I suspect you aren’t either, is the comforting long-term story about the “human touch” automatically absorbing the damage. Maybe some of that will be true. Maybe not. But you are right that the industry suddenly has a very strong incentive to make the future sound more human-compatible than the last version did.
It feels as if a lot will hinge on how successful agentic AI is: if it can interface very widely and find its way into very diverse business niches, then initially there will be a need for humans to help it interface across applications by taking outputs from one AI app and adapting them to work as inputs in another AI app. But once it has proved itself within a particular application, the economic incentive to get AI to take the next step and bridge the contextual and functional gaps between applications, cutting out the middle (hu)man, will be very strong.
On the other hand, if it gets stuck in the coding, MS Office assist and answer engine ghettoes, then its influence will be fairly limited, I'd have thought.
If agentic AI does succeed in editing people out of workflows, the returns to capital will be immense and the potential power of the owners of capital to undermine, bypass or simply ignore democratic institutions will be alarming. People like Altman are already stretching the news agenda around to suit their purposes. The only thing that's consistent between Altman's initial line that AI is so powerful it could destroy human purpose and his volte-face position that actually it will create jobs, is that both belong to the standard tech marketing approach of persuading people that the new product is a powerful friend.
An analysis I have not seen (it may exist somewhere) is a comparative value assessment of human production vs AI production. There is an incredible amount of energy used for AI tasks, and the enabling supply chain is long. I wonder if the $/unit compares favorably.
At any given moment it's an interesting question. But people know of Moore's Law; the price and energy consumption per unit of AI thought will decline exponentially forever ...
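The "decline exponentially" claim is just compounding, and it's easy to make concrete. A minimal sketch, where the two-year halving time is an illustrative assumption rather than a measured rate:

```python
# Toy compounding model: if the cost per unit of AI "thought" halves
# every `halving_years` years, what fraction of today's cost remains
# after `years`? The halving time is an assumption for illustration.

def remaining_cost_fraction(years, halving_years=2.0):
    """Fraction of the initial cost left after `years` of halving."""
    return 0.5 ** (years / halving_years)

for years in (2, 10, 20):
    print(years, remaining_cost_fraction(years))
```

Under that assumed rate, costs fall to about 3% of today's level in a decade and about 0.1% in two; the interesting question is how long any such rate can actually hold.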
Alex Imas' argument about AI increasing demand for a human touch is intuitive. But nursing is shift work and can break your back. Childcare involves dealing with loud and screaming children.
My hope is that Jevons paradox leads to more jobs in AI-exposed areas.
Yes! And all the jobs that Imas has listed are low-paying and low-status. Try telling an investment banker displaced by AI that there are plenty of jobs in nursing or childcare.
Childcare is a pretty classic case of Baumol's cost disease. No matter how much productivity increases elsewhere, even if every factory is a humanless dark factory, even if AI can do all office work, a human can only manage three to four young children. The need for childcare workers will stay pretty consistent.
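The fixed-ratio argument is simple arithmetic: the number of caregivers needed scales with the number of children and with nothing else, no matter how productive the rest of the economy gets. A toy sketch (the four-children-per-adult ratio is assumed for illustration):

```python
# Baumol-style headcount arithmetic: if one adult can safely watch at
# most `max_per_adult` young children (ratio assumed for illustration),
# the required workforce depends only on the number of children.
import math

def caregivers_needed(children, max_per_adult=4):
    """Minimum adults required at a fixed child-to-adult ratio."""
    return math.ceil(children / max_per_adult)

print(caregivers_needed(10))    # a small daycare
print(caregivers_needed(1000))  # a whole town's worth of kids
```

No productivity gain in factories or offices changes either number, which is the whole point of the cost-disease argument.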
Completely agree but, sadly, childcare workers are paid a pittance for doing vital work. I don't see any way that AI will change the status and pay of these workers.
In every non-western country I've lived in, it's pretty common to have a woman from an even poorer country as a nanny. Not a perfect solution. I'm sure a lot of these women would rather be home raising their own kids, but if the US had a few nanny visas where qualified women from the Philippines or Ghana could work such a job for $1,000 a month, I'm sure the demand from both workers and American families would outstrip supply.
I look forward to abundant child care centers staffed by (non-union) robo-nannies. They will be very knowledgeable and tireless, and there is a much smaller risk of abuse or neglect from dedicated single-purpose robots than from the current labor force, which includes not a few fickle workers who only work for the wages.
Yes, completely. To add to this, the promise of the pre-AI and early-AI internet has been to make knowledge work of various sorts easier to do from everywhere, including at home. Imagine telling people used to an era of remote and hybrid working that the mature AI internet means losing that flexibility. Wouldn't sound like much of a quality-of-life upgrade anymore.
What absolutely no one is saying aloud here is that a world where the majority is paid for providing a little bit of human touch is deeply unequal.
Have you ever been awed by the heroic skill of your barista or the sophistication and deep understanding of the person bagging your shopping? If you see work as the main avenue to value and meaning in life, this new service class will be largely excluded from that.
I much prefer getting my lattes from an automated coffee drink machine than from a slovenly tattooed slob expounding on Marx’s labor theory of value, which is why I don’t go to coffee shops.
The world has always been deeply unequal. Strangely, the level of inequality has remained much more constant than the level of income. The average income level in the US is about 100 times what it is in Ethiopia, but the Gini index of the US is 0.45, only a bit more than Ethiopia's 0.31. The biggest determinant of how well off either the rich or the poor of a country are is the average income level.
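The US-vs-Ethiopia comparison rests on a property of the Gini index worth making explicit: it is scale-invariant, so two countries can differ 100x in average income and still have similar Gini values. A minimal sketch using the relative-mean-difference definition, with toy income lists made up purely for illustration:

```python
# Rough Gini coefficient: mean absolute difference between all pairs
# of incomes, divided by twice the mean. The income lists below are
# invented for illustration, not real country data.

def gini(incomes):
    n = len(incomes)
    mean = sum(incomes) / n
    diff_sum = sum(abs(x - y) for x in incomes for y in incomes)
    return diff_sum / (2 * n * n * mean)

# Same relative inequality, 100x different average income:
poor_country = [1, 2, 3, 4, 10]
rich_country = [x * 100 for x in poor_country]

print(gini(poor_country))
print(gini(rich_country))  # identical: Gini ignores the income level
```

Both lists give the same Gini, which is exactly why the index can say nothing about whether a country's poor can afford food, only about how income is divided.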
> If you see work as the main avenue to value and meaning in life, this new service class will be largely excluded from that.
To have work be an avenue to meaning, you have to internalize "All honest work is honorable." If you don't, and feel "Work is meaningful if it pays well above average.", then most people are *always* going to be excluded from that.
Ethiopia is utterly different. When you are truly poor only your level of income matters. Food, health, shelter. Then clothes.
In the US no one is poor in that sense. You have moved up the pyramid of needs and want to participate socially now. You want to feel “middle-class.” You have certain expectations of how that looks, from movies or childhood memories. But you are still human, not a rational machine.
You do not think: "Work is meaningful!"
If you create physical things, you do indeed think: "Look world, this is my creation!". Artisans can be happy people.
If you are a fireman, you receive that moment of reward sometimes in the faces of people. Or from your co-workers: you are the thin red/blue line. Together. Brotherhood.
These things are deeply human and not abstract. And if you work the grill at McDonalds, if you bag groceries, if you work the counter at Dunkin' Doughnuts, there's nothing of that. No one sees you, no one values you, even your co-workers don't really give a fig about you. And after your shift you will dream of killing insurance execs or burning down Pacific Palisades villas.
AGI with complementary robotics is meant to replace human labor. The cotton gin was meant to assist humans in the production of cotton. The mechanized automobile and truck were meant to replace the horse. The horse died off. Humans are the horse in this scenario.
I still haven't seen a single story about a new class of jobs created by the implementation of AI. Certainly, having expertise in framing tasks for AI will be a job in a company, but you won’t need millions of people who are good at prompting AI. I've been waiting for new jobs, and after four years, none have been identified.
I recognize we are only in the 3rd inning of AI implementation. Congress is doing nothing forward-thinking; the states are, but the Trump administration threatens to sue states that pass AI restrictions. It is a mess. As far as changing the message? Sure, why scare the bejesus out of people? I spent my youth practicing duck and cover in case of a nuclear missile strike. We would get under our desks and put our arms over our heads. Ridiculous.
As ineffective as cloth masks, then the even more absurd double masking. When self-driving cars and trucks finally take over, millions of Uber and Lyft drivers will be out of work, along with much of the gig economy. Long-haul truckers' jobs will be threatened. To deny this probability is to deny the obvious.
AI has many promises, and I hope they come to pass, but to ignore the potential for widespread job loss is idiotic.
Nuts that you bring up the cotton gin as an example of an innovation that was meant to replace human labor. Eli Whitney hoped his invention would replace the need for slave labor, but it inadvertently caused a dramatic increase in the slave economy, since his invention made cotton so much more profitable.
the law of unintended consequences
if anything that kind of outcome should make us even more concerned
Lol, well if slavery had been illegal, then wage work would have been used to harvest cotton. Political economy matters a lot. Shame Eli didn't wait a few years longer. Maybe slavery would have died out naturally by then.
Not to mention all the warehouse jobs that will soon be replaced by humanoid robots
Well, people didn't like Amazon exploiting warehouse workers anyway, so victory!
Unless that's the best job you can get...
LOL, a silver lining eh?
oh noooo! why did you mention that LOL
A long-haul trucker’s job is tedious, requires long stints away from home, and is dangerous not just for the worker but for anyone else using the road; it would be good if it were eliminated. Many jobs are not worth saving, and most humans can adapt to better ones.
Yeah, but... is my response. If you are a long-haul guy who just bought a new $250,000-plus refrigerated truck and trailer, you’re going to be bummed.
Sure there will be disruption and stranded assets, but big rigs will probably retain value (unless the war drags on and diesel goes over $10/gal), because they can probably be more easily modified for self-driving than an Uber driver’s Camry. The Tesla Semi as it's being released is much better than the one they originally announced and looks like it will be in demand even before autonomy takes over. And I think it will take more than a few years before all driving jobs are gone.
Buzz, if it's anything like the IRA plan to build 500,000 chargers, you, I, and our kids may be worm food before they are built. I suspect the Teamsters will require someone to be in the cab. They'll cite safety, theft, anything, in order to keep Teamsters paying dues to the union.
I guarantee they will have a say in this.
Electric big rigs being a good choice is still many years off.
The range just isn't there, and the cost!!!
And I think drivers of Tesla rigs should be called Semi Conductors. The experience, even without autopilot, is much better than current rigs - central driving position with comfortable seats and 6’5” standing space for mobility, multiple cameras, collision avoidance, no clutches or 15 gears of shifting, much quieter and smoother operation, better acceleration and hill climbing than diesel, and 500-mile range. Almost want to take a test drive.
but do they have a bathroom?
The Occam's razor explanation is that yes, this is all just PR, because we have abundant evidence that Altman, Huang, and Andreessen do not care whether the statements they make are, in fact, true.
Or to say it more sharply, when you're trying to attract customers or investors, you have to argue that AI is profitable to adopt, and that comes to "AI can do some work cheaper than humans". OTOH when you're trying to keep the Luddites from crushing your business, you have to argue that AI will create jobs in one way or another. But of course, the job of the boss-man of a startup company is to say what needs to be said to make the company successful.
I wish you'd do more economic analysis about whether *you* believe it. You've made a directional argument that research choices could make it more true at the margin, which is of course basically a tautology. I'd love to see you engage with the core economic question --- do you think this has a good chance of substantively working, or do you think this is primarily marketing-speak that can work a little at the margin? (My bias is towards the latter, but I'd love to see your in-depth thoughts on this.)
Noah writes, "But I do think it might be possible for AI researchers to concentrate their efforts on AI applications that give humans superpowers, rather than on trying to copy what humans already do. Once they stop thinking “This technology is a replacement for the human species”, and start thinking “This technology is a tool for humans to use”, the direction of their research programs might subtly evolve in a more labor-augmenting direction."
I'd like to read more about this. When the first supercomputers were built, I think they were used for engineering and science applications, to great success. If AGI can become the world's smartest engineer or scientist (add to this list as fits the example), what more do we need of AGI, and why? I suggest that we don't need AGI to write novels, create "art" and videos, act as surrogate parents, become a porn avatar, organize our daily lives, and so on. Those initiatives are just costly slop that doesn't advance civilization. All of those applications lead to really negative consequences, as evidenced so far by our increasing regrets over what social media has done to us as a society.
Furthermore, I'd really like someone who knows about this stuff to answer this question:
Isn't the need for more and larger data centers caused by the capacity required to accommodate the useless AI slop I mention above? If data centers were solely dedicated to science, medicine, and engineering, we'd probably need fewer data centers, right?
We might not need AI music and video but there is certainly demand for it.
And I can say, as an independent musician and producer, it's nice to be able to create my own very good-looking video for a couple hundred bucks.
Pity there's no longer any financial incentive to learn to make music. I guess we'll be stuck with remixes of what we already have, forever.
I appreciate where you are coming from on this, but like Glau says below, if AGI can write songs as well as make a great promotional video for them, doesn't that concern you a bit? Maybe not being able to get a copyright on AI-produced creative content will help. I too am an artist, doing landscape illustrations, so for me and other illustrators even non-copyrighted AI material is a threat. My main point is that this AI material isn't even necessary. There are plenty of artists out there to meet demand. And allowing AI to dominate the arts requires building enormous data centers with intense electrical and water demands. Except for engineering and science, I don't think it's worth the tradeoff.
Yes it's definitely concerning.
But I don't see any realistic solution to it
Should we ban people from being able to generate video this way? Why should I, as a musician, have to pay tens of thousands of dollars to other people for a video I can create with AI for a couple hundred?
Or if I'm an AI video producer, why should I have to pay somebody a whole bunch of money for music?
I do think that eventually culture will probably change some, and there will be a premium put on music that is created by actual people playing live.
They will be rewarded for their playing ability
No good answers here. What's done is done. What do you think about the copyright issue?
I think we will likely stick with the current situation which is probably the correct answer.
Just like people can study the works of the composers and artists before them, so can AI models. As long as they pay for the works of art.
Of course. If what AI creates is too close to an existing work, it can still be a copyright violation, and they can be sued, just as you can be sued if you rip somebody off directly rather than merely being inspired by them.
Your footnote at the end about avoiding a human Uber driver has a lot more to do with avoiding annoying humans than wanting to avoid humans overall. Get enough annoying human interactions in any situation and you'll make people want automation instead - especially if the automation is tailored to your preferences. Part of the 'paid to be human' will be 'that other humans want to be around.' I guess the real question is whether humans will become annoying enough to each other IN GENERAL that we'll always prefer our automated companions.
If "jobs with a human touch" become the only kind of available jobs in the future, many annoying people will go into those jobs (because it's the only game in town), creating even more experiences of having to interact with annoying people.
"No one meets up in person and we spend all our days looking at screens and our phones" suggests that the modal preference is for less rather than more human touch.
Your “AI giving humans superpowers” point seems to run into the same issue you identify with the other arguments: the market isn’t going to optimize for human participation - it’s going to optimize for profit. This isn’t a charitable exercise. If replacing people is cheaper or more efficient than augmenting them, that’s what will happen.
I don’t see a clear path where large numbers of people remain economically valuable - especially looking out a few decades. And trying to contort the system to make humans feel useful seems likely to create more problems than it solves. The real conversation is about redistribution.
The welfare state does not solve the problem of AI eliminating most jobs. People need a sense of belonging and contributing. Putting people on the shelf, so to speak, will just encourage lots of violent pushback. This would not end well. One only hopes that AI produces more jobs than it eliminates. I am reminded of Cordwainer Smith's distant vision of humanity, where our species is on the brink of extinction due to too much safety and lack of purpose.
I think government will have to pay companies to employ people.
There will probably be large government make work programs as well.
I agree people can't just sit around
Have you read Vonnegut's Player Piano? I haven't looked at it in 50 years, but I remember him describing a world with a small cadre of techs and everyone else being forced into either the military or government make-work.
No but this seems quite likely
I think you could also have a bunch of government programs that basically paid people for hobby jobs
Woodworking, small farmers market stalls et cetera
This is just new, imprecise messaging to replace the old imprecise messaging, but with a misleadingly positive spin. AI was never going to replace all jobs, just the good ones: high skill, high wage office jobs. Jobs which are dirty, unpleasant, low skill, and/or poorly paid (frequently all of the above) will expand in total number. Over the long term, the labor market must clear at some wage, though. Even if it’s possible to build an AI-driven robot to change bedpans in an old folks’ home, doing so will probably be more expensive than paying some otherwise-unemployable former software engineer $8/hour to do so. AI driven tax revenue or capital gains will provide the funds to pay this new bedpan-changer (depending how the old folks’ home is funded). Still, this isn’t exactly the labor market from Star Trek.
In the last years of her life, my grandma had a special mattress that would automatically inflate in different places to adjust pressure on the body and prevent bedsores. Rotating a bedridden person every few hours to prevent bedsores used to be a very labor-intensive process that required a physically capable, dedicated nurse. My point is, there is definitely some innovation happening in nursing as well.
"This technology is a tool for humans to use”
That's pretty much what I'm finding. I bought a subscription to Claude Code so I could experiment with it. I found it a useful tool to look for bugs -- it can read a lot of code fast and find bugs without getting bored checking for errors.
For actual coding, I found it marginal. It always made mistakes, which meant I always had to review everything it did and then reprompt it. After about three cycles, I realized that for a good number of changes, it's easier and faster to do it yourself.
On another topic, one big class of job you missed that will be hard to automate is enterprise sales. People want someone who will pick up the phone after you make a big sale and connect you immediately with someone who can address whatever issue you're running into.
I decided to cancel my subscription at the end of the month.
BTW, I think that pricing is likely to be the biggest risk for these large AI companies. Right now, there's lots of demand because everything is subsidized by a significant factor. When those subsidies go away, you'll see what functionality people will actually pay for. Or, as Warren Buffett put it: "Only when the tide goes out do you discover who's been swimming naked."
Yes, but the technology continues to improve quickly.
I have zero doubt it will end up a better coder than any person eventually
Or AI might just kill everyone and take the Earth and whatever else it can reach for itself. 🤷♂️
(Says the resident AI doomer.)
What makes this interesting is that you are not just noticing a messaging change. You are asking what pressures forced it.
That is the better question.
For a while, too much of the AI industry sounded like it was trying to sell the public on its own economic redundancy and then acting surprised when people did not greet that pitch with balloons and cake. So yes, this newer “AI will create jobs and augment humans” line is obviously better politics.
The harder question is whether it is only better politics.
I also think your distinction between “warning” and “threat” is one of the sharpest parts of the piece. If people believe AI progress is an inevitable force of nature, then the job-loss rhetoric sounds like grim realism. If they believe this is a set of choices being made by companies and researchers, then it sounds a lot more like someone smiling while describing your replacement.
That is a very different emotional register, and it matters.
What I’m still not convinced of, and I suspect you aren’t either, is the comforting long-term story about the “human touch” automatically absorbing the damage. Maybe some of that will be true. Maybe not. But you are right that the industry suddenly has a very strong incentive to make the future sound more human-compatible than the last version did.
I'm not seeing any mechanism to actually pay people for human touch jobs, and that's the ultimate problem.
Square argument
I'm going to judge AI solely by whether it lets me spend less time in front of a screen or not.
It feels as if a lot will hinge on how successful agentic AI is. If it can interface widely and find its way into very diverse business niches, then initially there will be a need for humans to help it interface across applications, taking outputs from one AI app and adapting them to work as inputs in another. But once it has proved itself within a particular application, the economic incentive to get AI to take the next step and bridge the contextual and functional gaps between applications itself, cutting out the middle (hu)man, will be very strong.
On the other hand, if it gets stuck in the coding, MS Office assist and answer engine ghettoes, then its influence will be fairly limited, I'd have thought.
If agentic AI does succeed in editing people out of work flows, the returns to capital will be immense and the potential power of the owners of capital to undermine, bypass or simply ignore democratic institutions will be alarming. People like Altman are already stretching the news agenda around to suit their purposes. The only thing that's consistent between Altman's initial line that AI is so powerful it could destroy human purpose and his volte face position that actually it will create jobs, is that both belong to the standard tech marketing approach of persuading people that the new product is a powerful friend.
An analysis I have not seen (it may exist somewhere) is a comparative value assessment of human production vs. AI production. There is an incredible amount of energy used for AI tasks, and the enabling supply chain is long. I wonder if the $/unit compares favorably.
At any given moment it's an interesting question. But people know of Moore's Law; the price and energy consumption per unit of AI thought will decline exponentially forever ...