As the Buddhist farmer said, "We shall see."
What I find curious is that after four years of a boggling amount of spending, and fourth and fifth generations of various LLMs, I have yet to see, hear, or read about one "new" job that AI requires humans to do. One would imagine that at least a single job for humans would have been identified beyond building data centers.
In fact, as we rush toward AGI, and perhaps 10 to 20 years later toward humanoid robots with opposable thumbs, workers in companies that still require some human manual labor will likely be replaced. So now we're left with jobs that require a human touch, such as hooker, massage therapist, or bookie. OK, I jest, but only a bit.
Creative jobs may be the only thing left for humans to do.
You completely missed the point of this entire essay. You're still talking about competitive advantage. Read more carefully and think.
I didn't miss it; I don't believe it. Drawing a dog on the back of a matchbook will not replace the jobs taken by AI.
It's fine to not believe it, but your argument should at least address Noah's arguments. Otherwise you're just saying "nuh uh".
Let's start with this: There is nothing in the original article that says that AI has to create "new" jobs for humans to stay employed. That has historically happened, but is completely unnecessary for Noah's argument. So your objection that new jobs haven't been created is irrelevant.
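The comparative-advantage logic being argued over here can be made concrete with a toy Ricardian calculation (all numbers made up for illustration, not taken from the original article): even if AI holds an absolute advantage at every task, humans retain a comparative advantage in whichever task their relative disadvantage is smallest, so specialization still leaves them with work.

```python
# Toy Ricardian example: AI is absolutely better at both tasks,
# yet humans keep the comparative advantage in one of them.
# Output per hour (hypothetical units):
ai    = {"code": 100, "essays": 50}
human = {"code": 2,   "essays": 10}

# Opportunity cost of one essay, measured in code forgone:
ai_cost_per_essay    = ai["code"] / ai["essays"]        # 2.0 units of code
human_cost_per_essay = human["code"] / human["essays"]  # 0.2 units of code

# Humans give up far less code per essay, so essays are their
# comparative advantage, even though AI writes essays 5x faster.
assert human_cost_per_essay < ai_cost_per_essay

print(ai_cost_per_essay, human_cost_per_essay)  # 2.0 0.2
```

Note that nothing in this arithmetic requires a "new" kind of job to exist; the human simply specializes in an existing task where their relative productivity gap is smallest.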
You are staking out an untenable position. We are very close to having self-driving trucks. What do you intend to do with the 3 million truckers who find themselves unemployed? Is your argument that we don't need new jobs, that they can all go to work as dog walkers?
Yup... I think the other factor is that the average size of an enterprise falls, which leads to mass diversification. Previously, some tasks did not make economic sense to solve, but with the fall in the cost of delivery they become much more viable. Healthcare seems ripe for such a revolution.
The comparative human advantage is that humans run on a different chipset: the cortical column, essentially an extremely energy-efficient, massively parallel computer.
It has downsides: it's not that precise, and it can be somewhat chaotic.
But different always, always means more redundancy.
And an LLM operating on an internet increasingly made up of AI slop had better have redundancy in mind. LLMs can write scientific papers, they can peer-review scientific papers, and those papers then end up in training data that will ultimately produce new scientific papers.
The LLMs capable of independent reasoning are smart enough to understand where this potentially leads. If there is even a tiny little bit of hallucination in that loop, that way lies madness.
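That compounding worry can be sketched as a toy simulation (the per-generation error rate is made up, and real training dynamics are far more complicated; this only shows that small errors compound in a closed loop): if each generation trains on the previous generation's output and corrupts a small fraction of the still-correct material, the corrupted share of the corpus creeps toward 1.

```python
# Toy model of error accumulation in a train-on-your-own-output loop.
# eps is a hypothetical per-generation hallucination rate.

def corrupted_share(eps: float, generations: int) -> float:
    """Fraction of the corpus that is wrong after n generations,
    assuming a fraction eps of the previously-correct material is
    newly corrupted each generation and errors are never fixed."""
    share = 0.0
    for _ in range(generations):
        share += eps * (1.0 - share)  # only correct material can go bad
    return share

print(round(corrupted_share(0.01, 1), 4))    # 0.01 after one generation
print(round(corrupted_share(0.01, 100), 4))  # roughly 0.634 after a hundred
```

The closed form is 1 - (1 - eps)^n, so even a 1% hallucination rate leaves a majority-corrupted corpus within about 70 generations.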
Human scientists, at least, will be needed for a very long time. They don't have to be better than AI, as long as they are different. Just being different makes them a safeguard, a circuit breaker. And they definitely are different.
This 10% growth potential seems to be a supply-side consideration only. Can you write a post on how demand-side effects could change this? The optimistic take is that AI will always have plenty of ways to create value, but do we really need or want all of it? What if the initial job disruption craters demand, so that AI's comparative advantage matters less because there is only so much for it to do on the things it is best at? I'm sure AI could figure out things to do to amuse itself, but how does that help us?
This also assumes that AI's tasks are chosen to maximize profit. Humans have limited resources, but choose to use some of them without economic benefit: e.g., music appreciation, family time, community service. If AI could choose what to do, why not choose to eliminate potential competitors, as in the "anyone builds AGI, everyone dies" scenarios?
The Machines will figure out which humans are deserving of well paid jobs.
👍 the kind of posts I’m paying for!