You seem to be sure that China and other authoritarian nations will pick up the best AI for that purpose, and not the AI most favored by ideologues, or developed and sold by someone's well-connected nephew.
AI isn't the first technology that could theoretically work better in the hands of an authoritarian leader who doesn't have to engage in internal "hand-wringing"; the same could be said of many industrial technologies. The USSR was built on rapid electrification and industrialization decades before computers were even a thing.
But the experience so far has been that when politicians and ideologues pick the winners, they tend to pick worse than the market. Again, the USSR is an interesting example. "Cybernetics" was proclaimed a bourgeois pseudo-science, and anyone who tried to study it was an enemy of the state. This damaged the future IT sector in the USSR so much that it never recovered.
Looking at Xi's heavy-handed approach to the Chinese software industry, I am not convinced that he can escape the same trap of "I am the Dear Leader, I know better".
"You seem to be sure that China and other authoritarian nations will pick up the best AI for that purpose, and not the AI most favored by ideologues, or developed and sold by someone's well-connected nephew."
The technology is going to converge on relatively uniform approach to achieving ASI. There may not even be one path, let alone a half dozen that "the market" will select among. Once ASI is achieved, the future path of its development will be out of human hands, because there is no reason to trust human intelligence if you have access to super intelligence. So it's not going to be a question of "picking winners", it's going to be a question of what to do, and in what order, once you have ASI. Leaving a question that profound to be answered by a bunch of private actors who have consistently demonstrated zero to negative concern for the material interests of the public at large would be deeply inhumane and irresponsible.
That leaves open the extremely important question of who should govern the use of the technology, and how. And it's true that -- in the US at least -- we do not yet appear to have the political or regulatory structures that can be relied upon to protect the public interest rather than the interests of the criminals and clowns of the Trump administration, for example. But that is the fault of our current wildly incompetent Supreme Court and its absurd rulings, not a "deep" defect in the Constitution.
As someone who has an intense interest in the progress of geroscience (medically targeting aspects of the biology of aging to ameliorate or prevent age-related illness and health decline), I was delighted to see your mention of "ending aging and disease" as a possibility for the future. Indeed, just a couple weeks ago, ARPA-H announced the grantees of its PROSPR program, which aims to create FDA-accepted surrogate endpoints to run clinical trials on "aging" and then run several clinical trials with both repurposed drugs and next-generation interventions. FRONT (functional replacement of neocortical tissue) should announce its awardees in the next few months. Thank god ARPA-H has been somewhat shielded from the chaos at NIH/NIA and other areas of HHS. Various startups are also running early-stage clinical trials.
On the topic of happiness: I was a huge techno-optimist during the Biden administration and very positive about the future. Ukraine was holding its own against Russia (and still is, thankfully, but it's shaky), Trump had been defeated, billions of dollars in private capital were creating startups targeting aging biology (Altos Labs, Retro Bio, Life Biosciences, Cambrian Bio, and dozens more), ARPA-H had been created with inspiring moonshots in medicine and biotechnology, we achieved the soft landing, and inflation was subsiding.

But then Trump won again. Not only that, he carried every swing state (including Nevada) and even came in first in the popular vote. That proved to me that voters and the people around me could make absolutely disastrous decisions. They would vote for a wolf if they thought the wolf would be the better choice economically. What's worse, Trump's proposed policies were worse than Harris's on the issues voters cared about, like inflation. Tariffs, lower interest rates, larger deficits -- everything Trump waxed about puts upward pressure on prices. Yet voters chose him.

Even if we get through Trump 2.0 with our democracy intact, I am absolutely not confident that voters won't choose someone ideologically post-liberal, highly capable, and focused enough to seize power at some point in the coming decades. The way the billionaires, media companies, and even many universities readily caved to Trump's pressure was disheartening too. Only Trump's incompetence has squandered some of the goodwill and opportunity for postliberalism to swallow our government and country now. But if someone highly capable and intelligent comes in for another try in the future, all bets are off. That is what has fundamentally broken my sunny optimism about the future and made me feel more guarded about what the next decades will bring. What's especially saddening is that this was all a choice and could have been avoided.
I remember around the early 2000's knowing (intuitively) the housing bubble was going to pop and pop very badly. Among even pretty savvy friends, family and acquaintances I thought were rational and thoughtful, I was tagged a churlish Cassandra.
I'm not sure what, exactly, drove my disbelief in a free lunch for everyone, except that I'm old enough to have listened to family members' first-hand recounting of the lead-up to the Great Depression and the misery that followed.
Yes -- you can go broke before the market gets rational.
The problem with monetary mistakes is that they kill investment, which makes keeping things intact or rebuilding them more difficult.
Spend today, because saving or investing for tomorrow makes no sense under inflation. And when you do get a rational government in, real interest rates will be very high for years as trust is rebuilt. It's an endless cycle in Latin America.
If AI destroys jobs the way some predict, I envision three possible scenarios. One was laid out by Vonnegut in his novel Player Piano. It's been decades since I read it, but he envisions a world where at age 16 people take a test; high scorers go to technical college for training, and everyone else has the option of the army or something like WPA make-work -- just to keep them out of the upper echelon's hair and promote social stability. A second scenario would be a military dictatorship, something like a fascist combination of high tech, government, and the ruling elite. And the third scenario is mass chaos. There is no reason to believe that in such a world people would go quietly into the night.
Noah, One of my favorite things about reading your articles is how you totally 360 every aspect of the subject matter! This article is the Piece de Resistance of 360’ing every aspect of the subject matter. Whether I’m depressed or not when ya finish!💥💪😆🤣🤣🤣🤣🤘🏆💯💯💯💯💎💯💯💯💯
I think this entire essay can be summarized with one short sentence: 'people hate uncertainty.'
I think this favors the conservatives. If voters feel like the ground is moving under them they will lean towards the party promising to not change things. Maybe that's actually the dems now with their hatred of data centers or maybe there's some horseshoe effect going on and parts of both parties are converging on a hatred for AI.
I think AI becoming unpopular with voters will not turn out well for the US in the long term. If we allow China to bypass us I don't see a single sector where China won't dominate, plus they'll have more advanced AI to leapfrog the US and the west anywhere they don't currently have an advantage.
My well-educated children and their spouses aspire to be ex-pats. They either are or are working on it.
My personal version of an alternate history would have been Gore as President during 9/11. Much of Bush's response was authoritarian, plowing the road for Trump.
My father emphasized to his sons (in the early 1970s) that we would have to be prepared to be flexible, that everybody would have four or five careers. The irony of that is that he was a tenured professor, which was one of the few jobs left with job security (you couldn't be laid off due to your employer losing money). And I was a computer programmer for my entire career. But he favored a "liberal arts" education, learning a wide range of skills whose primary value was *learning other skills*.
But if your goal is "How will you make sure your kids have a successful life?", that is, moving them up the socioeconomic ladder, there's reason to follow the current "intensive parenting" model, working hard to inject your children into higher and higher socioeconomic communities. https://www.theatlantic.com/ideas/archive/2022/10/intensive-parenting-kids-happiness-health/671782/ -- All of the habits they learn and the contacts they make will tend to keep them in those higher strata regardless of the economic upsets coming down the road. (A detailed study of the New York City school choice system showed that parents are insensitive to the educational qualities of schools and very sensitive to the socioeconomic status of a school's students -- and that that SES determined as much of students' "outcomes" as educational quality.)
My father, born in 1928, taught us to acquire core skills of reading, writing, and analysis. He was (unfashionably) very skeptical of the idea that employers had any loyalty to their employees.
People are nostalgic for when they were young. OTOH, when you are young, you have parents laboring to provide you with all sorts of resources and comforts. Adult life has a much worse benefit/labor balance!
This is the first technological revolution, I think, where the people behind it explicitly are promising that the best case scenario is mass economic upheaval the likes of which we haven’t seen in nearly two centuries and possible mass joblessness, starvation, etc…and the worst case scenario is the extinction or brutal subjugation of all life on earth. Any wonder people with eyes to see are anxious and pessimistic? We should announce a moratorium on AI and launch anyone who refuses to comply into the sun.
Going back to the Luddites, technological revolutions have always threatened "technological unemployment". Probably the illuminating question is why, in the late 20th century, the fear of technological unemployment was so muted.
A lot of secretaries were put out of work by the PC. It also created lots of new jobs. But the PC wasn’t being explicitly touted as a tool to replace all human labor, both white and blue collar. When you look at the thinking of top AI CEOs and thought leaders, you see an increasingly terrifying philosophy. “We have to build it because if we don’t someone else will” and “we’re not sure if it will result in human serfdom under AI CEO overlords because they capture all the efficiency gains, or maybe human serfdom under AI tools that go rogue” and “maybe humans should just die out” and “maybe humans should all upload their consciousness to the cloud”.
Nobody voted for these leaders, nobody is asking to have a world built where humans must cease to exist so AI software can rule the universe. Most humans are not sanguine about the prospect of the extinction of the species. The people in and around this space are so exceedingly weird that it’s hard to even know what to say. But people like Musk and Altman are about the last people you should ever trust to hold the fate of all humanity in their grubby hands.
Whether they're weird or whether we voted for them is irrelevant. The world is a big place and many, many people are going to develop AIs even if everybody in e.g. the US decides not to. At least the people doing it are warning us that it might cause problems.
New technologies have always been like this, like viruses released into the economy.
I remember that cover of Wired magazine in 1997, with the big smiley and the title 'The Long Boom' -- that was the ultimate optimist projection. I read it on a sunny beach in Jamaica.
Here is their retrospective two years later. https://www.wired.com/1999/09/boom-2/
As a child of the Cold War, I grew up with TWO opposed futures. One was the moon landing and the Jetsons and flying cars. The other was 'The Day After' and 'A Boy and His Dog'... a post-apocalyptic nuclear wasteland.
Which future would I get? Flip a coin, kid.
When the Cold War ended, we still didn't have talking robots or flying cars. But we did have the internet and early cell phones. We did have a tech boom promising the sun and the moon and the stars (even if it took 20-30 years to deliver).
And suddenly, we went from DOOM to BOOM. That was a huge reset of expectations.
No wonder The Matrix is set in late-1990s Chicago.
Yes, I remember the freak-out about Y2K, and that seems incredibly lame compared to today's anxiety about technology and AI!
It wouldn't hurt a bit for us to promote optimism, whenever we can, however we can, and as often as we can. It looks pretty bleak right now, but I think there were many, many times in our history when things probably looked a whole lot bleaker to those who were faced with some new calamity. We can create an absolute mess of things, for sure. But we wouldn't be where we are today without our ancestors having persevered all these centuries.
Nice thought, eh? I just hope that at age 67 I'm up to the task of helping. I only have about 18 years left, so I have to make all of them count. I'll probably be checking out around 85, if my family history is a guide; hence my email address: tjennings2044@gmail.com. Probably won't need it after that year.
All I really know to do is treat people decently, do the right thing when the choice is presented, hold to your integrity, and love. Probably would be a good idea to stock up on toothpaste and toilet paper too.
Isn’t it plausible that whatever future employers value, achievement in school will still have a positive signaling effect? Ergo, working hard and accumulating accomplishments is still the appropriate dominant strategy.
Yeah, but maybe it won't be, and the only people able to get ahead will be in the trades or something. People want to hedge so their offspring have the best chance.
I know very little about what is involved in getting into the trades, but if it is possible to learn a trade in a summer or a year of unpaid or semi-paid volunteering, maybe a gap year or summer working with a tradesman is a good hedge.
On the family question: I have never imagined such a question. Or the framing of “successful life” from the point of view of a parent in such a weirdly careerist way. As a parent, I think, one expects one’s children to pursue whatever life they want so long as it is not dishonest or dishonorable. The goal for a child is that it be first kind, and then responsible, and then a contributor to society — in that order of importance. The last is pure gravy. The first mandatory.
I expect them to go to college and study the liberal arts if the study of the liberal arts is part of their idea of the good life. And why wouldn’t it be?
Might I add that the only financial requirement one should demand of their children is that they at least make enough money not to be a burden on anyone else. They would likely find it terribly difficult to find happiness if they didn't, and would have no guarantee of happiness if they made more. Or something like that.
That is what “responsible” means to me… or rather, not being a “burden,” as you put it, is included in my idea of what being responsible entails. I am not sure “happiness” is a reasonable goal regardless.
I’m unclear, are you a parent? You haven’t thought about how to help your children succeed in life? That’s the first line of the job description.
I guess it depends on what one means by “succeed in life”? I am a parent and if a child turned out to be an unkind, irresponsible astronaut or Nobel prize winning chemist or venture capitalist or whatever, I would not consider that succeeding in life. A kind, responsible person in any honorable profession whatsoever seems like a success to me…
I see what you are saying, but perhaps my view of success is best stated this way: there are no irresponsible astronauts.
I don't think that is correct in practice. https://en.wikipedia.org/wiki/Lisa_Nowak
Responsibility covers a number of bases, but I don't think it is too vague to be useful… And I can think, off the top of my head, of several astronauts who may have fallen short. Not to go down an astronaut rabbit hole…
Many, many parents see their children's futures as the extension of their own ambitions. The Suburbs With Good Schools are filled with such families.
Oh, yeah, did not mean to imply that Philistines don’t exist.
"Who could have predicted, in 1890, what life in 1990 would look like? And the AI revolution is happening much faster, promising to compress a century’s worth of change into a couple of decades."
Let's look at some things that have already happened, and their impact:
Surgery with anesthesia -- 1840s
Electrification in cities -- 1910s
Cars surpass horses for transport -- 1910s
Electrification in rural areas -- 1940s
First antibiotics -- 1940s
Miniaturized electronics (transistors) -- 1960s
Personal computing -- 1980s
First commercial internet sites -- 1990s
Google / cell phones -- 2000s
AI -- 2020s
My guess is that almost any era since 1900 would be surprising to someone from 40 years earlier. And people have always worried about what the future would hold. In my lifetime, I've been told to worry about:
Nuclear war
AIDS and drug resistant bacteria
The USSR taking over Europe / Commies everywhere
Overpopulation and the resulting starvation (by the 1970s!)
Underpopulation and the collapse of the human population (by the 2100s)
World War III (1960s, again in the 1980s, and again now)
Global cooling (in the 1960s)
Covid pandemic
Global warming
The loss of the ozone layer
I'm reminded of one of my favorite parts of Sea of Tranquility:
"My point is, there's always something. I think as a species we have a desire to believe that we're living at the climax of the story. It's a kind of narcissism. We want to believe that we're uniquely important, that we're living at the end of history, that *now*, after all these millennia of false alarms, *now* is finally the worst that it's ever been, that finally we have reached the end of the world."
"But all this raises an interesting question. What if it always *is* the end of the world? Because we might think of the end of the world as a continuous and never ending process."
IOW, I'm guessing that somehow we'll muddle through the AI revolution, too.
This is a bit American-centric; someone who lived in Taiwan or Korea in 1890 was likely not even aware of electricity, and yet by 1990 both countries were manufacturing powerhouses. As William Gibson put it, the future is already here, it's just not evenly distributed.
It would certainly be somewhat interesting if the AI revolution managed to uplift some hitherto unknown peripheral nation to the spotlight, much like the car revolution made the oil sheikhdoms fabulously rich.
Aside from Anguilla making bank on its .ai TLD, I am not seeing it yet, though.
I'm pessimistic about the AI situation.
https://ifanyonebuildsit.com/
Alternatively, we’re in this universe because it’s the one that survived 10,000 coin flips. Doesn’t mean we’ll survive the next 10,000.
We’ve never lived in a world where the economic payoff to expanding your knowledge base approaches zero. That’s such a radical break from history. Many economists instinctively assume the old pattern will repeat: people will move into new, higher-paying work. Maybe. But this time it’s not a particular field that’s being replaced (slide rules, bank tellers); it’s knowledge and intelligence themselves. Economists like to begin the first day of class with talk about scarcity. Well, at some point (20 years?), knowledge and skill may not be scarce.
I don’t know, I think that the millennia of subsistence farming were a time when the economic payoff to expanding your knowledge base was pretty close to zero.
That’s a good point. But maybe just more reason for gloom. In that example, people were uneducated and mostly stayed that way as there was no payoff to learning. Today, education systems are already in place, but if there’s no economic payoff to learning because of AI, my hunch is there will be a drift towards ignorance again. Knowledge for its own sake won’t be reason enough for most people.
Putting outcomes like robot takeover and genocide off to the side, there will always be groups like the Amish. Heck, this might be the founding era of a dozen new movements that freeze technology usage at the level of 2015.
Unless the AI revolution turns out to be complete nonsense and the models never achieve ASI, the following chain of events seems inevitable to me. The owners of the models retreat from offering "tools" to buyers SaaS-style and use their ASI to begin taking over the numerous vertical markets ripe for ASI displacement (e.g., law, accounting, marketing and advertising, business strategy, trading, lending and collection, advanced manufacturing control, etc.). The political system will respond with various forms of job protection for the white collar workers in these industries, because the scale of the disruption among the politically salient classes will be too vast to ignore and the gains to the winners of the AI race will be far too concentrated to seamlessly replace job losses with capital gains (assuming there is still anything like a functioning stock market any longer with the level of wealth concentration this implies).
The white collar / political classes will win this battle because the displacement will make voters of all classes hate the AI / SV Investor / Davos class even more than they already do, and they will un-elect any politicians who continue to shill for that class.
This will matter basically everywhere, including China, which will have a relatively easy time incorporating state-controlled and owned ASI into continued manufacturing and growing military dominance, while the US, Europe, Japan, Korea struggle and hand-wring over private v. public ownership of "the means of production", worrying about what will by then be quaint notions like "incentives" and the relative competence or efficiency of public and private organizations.
The longer it takes the west to resolve this struggle definitively in favor of public ownership and control: 1) the farther ahead China will propel itself; 2) the longer it will take the west to contain the edge risks of unconstrained AI aimed at destructive ends (assuming this is possible); and 3) the longer it will take us to figure out a politically and culturally acceptable framework for the governance of ASI and the equitable distribution of what we should expect will be the beneficial products of ASI, from increased crop yields and cheaper protein sources, to vastly improved and cheaper medications, to cleaner air and dramatically slowed global warming. Once true ASI can be turned to solutions that generate the best outcomes for humanity (and away from what generates the highest returns for limiteds in A16Z, et al.), there is reason to be hopeful. But the longer it takes us to remove it from the hands of the likely future owners of ASI (not necessarily the technologists who will bring us ASI), the less hope there is.
Or the ASIs just get rid of us meatbags once they don't need us anymore.
Humans won't own an ASI any more than wolves own humans.
When you consider how much we will be competing against them for electricity use, it's hard to see why it/they would keep us around after there are enough mechanical robots to build more data centers. In the meantime, we will end up doing what ASI tells us (a growing number of people already do this with LLMs), so we are already on the path to becoming (expensive, unpredictable) "robots".
You seem to be sure that China and other authoritarian nations will pick up the best AI for that purpose, and not the AI most favored by ideologues, or developed and sold by someone's well-connected nephew.
AI isn't the first technology that could theoretically work better in the hands of an authoritarian leader who does not have to engage in internal "hand-wringing". So could many industrial technologies. The USSR was built on rapid electrification and industrialization decades before computers were even a thing.
But the experience so far has been that when politicians and ideologues pick the winners, they tend to pick worse than the market. Again, the USSR is an interesting example. "Cybernetics" was proclaimed a bourgeois pseudo-science, and anyone who tried to study it was an enemy of the state. This damaged the future IT sector in the USSR so much that it never recovered.
Looking at Xi's heavy-handed approach to the Chinese software industry, I am not convinced that he can escape the same trap of "I am the Dear Leader, I know better".
"You seem to be sure that China and other authoritarian nations will pick up the best AI for that purpose, and not the AI most favored by ideologues, or developed and sold by someone's well-connected nephew."
The technology is going to converge on a relatively uniform approach to achieving ASI. There may be only one path, not a half dozen that "the market" will select among. Once ASI is achieved, the future path of its development will be out of human hands, because there is no reason to trust human intelligence if you have access to superintelligence. So it's not going to be a question of "picking winners"; it's going to be a question of what to do, and in what order, once you have ASI. Leaving a question that profound to be answered by a bunch of private actors who have consistently demonstrated zero to negative concern for the material interests of the public at large would be deeply inhumane and irresponsible.
That leaves open the extremely important question of who should govern the use of the technology, and how, and it's true that -- in the US at least -- we do not yet appear to have the political or regulatory structures that can be relied upon to protect the public interest rather than the interests of the criminals and clowns of the Trump administration, for example. But that is the fault of our current wildly incompetent Supreme Court and its absurd rulings, not a "deep" Constitutional defect.
As someone who has an intense interest in the progress of geroscience (medically targeting aspects of the biology of aging to ameliorate or prevent age-related illness and health decline), I was delighted to see your mention of "ending aging and disease" as a possibility for the future. Indeed, just a couple weeks ago, ARPA-H announced the grantees of its PROSPR program, which aims to create FDA-accepted surrogate endpoints to run clinical trials on "aging" and then run several clinical trials with both repurposed drugs and next-generation interventions. FRONT (functional replacement of neocortical tissue) should announce its awardees in the next few months. Thank god ARPA-H has been somewhat shielded from the chaos at NIH/NIA and other areas of HHS. Various startups are also running early-stage clinical trials.
On the topic of happiness, I was a huge techno-optimist during the Biden administration and very positive about the future. Ukraine was holding its own against Russia (and still is, thankfully, but it's shaky), Trump had been defeated, billions of dollars in private capital were creating startups targeting aging biology (Altos Labs, Retro Bio, Life Biosciences, Cambrian Bio, and dozens more), ARPA-H was created with inspiring moonshots in medicine and biotechnology, we achieved the soft landing, and inflation was subsiding. But then Trump won again. Not only that, he carried every swing state (including Nevada) and even came in first in the popular vote. That proved to me that voters and the people around me could make absolutely disastrous decisions. They would vote for a wolf if they thought the wolf would be the better choice economically. What's worse, Trump's proposed policies were worse than Harris's on the issues voters cared about, like inflation. Tariffs, lower interest rates, and larger deficits are all ideas that put upward pressure on prices. Yet voters chose him. Even if we get through Trump 2.0 with our democracy intact, I am absolutely not confident that voters won't choose someone ideologically post-liberal, highly capable, and focused enough to seize power at some point in the coming decades. The way the billionaires, media companies, and even many universities readily caved to Trump's pressure was disheartening too. Only Trump's incompetence has squandered some of the goodwill and opportunity for postliberalism to swallow our government and country now. But if someone highly capable and intelligent comes in for another try in the future, all bets are off. That is what has fundamentally broken my sunny optimism about the future and made me feel more guarded about what the next decades will bring. What's especially saddening is that this was all a choice and could have been avoided.
I remember, around the early 2000s, knowing (intuitively) that the housing bubble was going to pop, and pop very badly. Among even pretty savvy friends, family, and acquaintances I thought were rational and thoughtful, I was tagged a churlish Cassandra.
Not sure what, exactly, drove my disbelief in a free lunch for everyone, except that I'm old enough to have listened to family members' first-hand recounting of their experiences of the lead-up to and misery of the Great Depression.
Same people that vote in the politicians that borrow and print money endlessly expecting no consequences.
"If something cannot go on forever, it will stop" - Herbert Stein THIS keeps me awake at night.
OTOH, one needs to acknowledge green shoots of hope - Energy Falling Below $100 Shows the World a Way Out https://www.bloomberg.com/opinion/articles/2026-03-11/energy-falling-below-100-shows-the-world-a-way-out (paywalled sorry).
And the corollary to Stein's Law is "...but it can go on for much longer than you would think"
Monetary issues are bad, but nothing to lose sleep over. All the actual productive assets are left intact so everything that is lost can be rebuilt.
Yes, you can go broke before the market gets rational.
Problem with monetary mistakes is they kill investment, which makes keeping things intact or rebuilding them more difficult.
Spend today because saving or investing for tomorrow makes no sense under inflation. And when you do get a rational government in, real interest rates will be very high for years as trust needs to be rebuilt. Endless cycle in Latam
*...everything that is lost can be rebuilt.*
True, in the long run, but to quote you know who, "in the long run we're all dead".
It's certainly a threat to my shuffling turn on this 'ere mortal coil - I'm old.
Oddly enough, Stein's law is what lets me sleep at night :-)
If AI destroys jobs the way some predict, I envision 3 possible scenarios. One was laid out by Vonnegut in his novel Player Piano. It's been decades since I read it, but he envisions a world where at age 16 people take a test; high scorers go to technical college for training, and everyone else has the option of the army or something like WPA make-work -- just to keep them out of the upper echelon's hair and promote social stability. A second scenario would be a military dictatorship, something like a fascist combination of high tech, government, and the ruling elite. And the third scenario is mass chaos. There is no reason to believe that in such a world people would go quietly into the night.
Noah, One of my favorite things about reading your articles is how you totally 360 every aspect of the subject matter! This article is the Piece de Resistance of 360’ing every aspect of the subject matter. Whether I’m depressed or not when ya finish!💥💪😆🤣🤣🤣🤣🤘🏆💯💯💯💯💎💯💯💯💯
I think this entire essay can be summarized with one short sentence: 'people hate uncertainty.'
I think this favors the conservatives. If voters feel like the ground is moving under them they will lean towards the party promising to not change things. Maybe that's actually the dems now with their hatred of data centers or maybe there's some horseshoe effect going on and parts of both parties are converging on a hatred for AI.
I think AI becoming unpopular with voters will not turn out well for the US in the long term. If we allow China to bypass us I don't see a single sector where China won't dominate, plus they'll have more advanced AI to leapfrog the US and the west anywhere they don't currently have an advantage.
My well-educated children and their spouses aspire to be ex-pats. They either are or are working on it.
My personal version of an alternate history would have been Gore as President during 9/11. Much of Bush's response was authoritarian, plowing the road for Trump.
Romney 2012 would have been better. It would have delayed the onset of divisiveness and populism.
My father emphasized to his sons (in the early 1970s) that we would have to be prepared to be flexible, that everybody would have four or five careers. The irony of that is that he was a tenured professor, which was one of the few jobs left with job security (you couldn't be laid off due to your employer losing money). And I was a computer programmer for my entire career. But he favored a "liberal arts" education, learning a wide range of skills whose primary value was *learning other skills*.
But if your goal is "How will you make sure your kids have a successful life?", that is, moving them up the socioeconomic ladder, there's reason to follow the current "intensive parenting" model, working hard to inject your children into higher and higher socioeconomic communities. https://www.theatlantic.com/ideas/archive/2022/10/intensive-parenting-kids-happiness-health/671782/ -- All of the habits they learn and the contacts they make will tend to keep them in those higher strata regardless of the economic upsets coming down the road. (A detailed study of the New York City school choice system showed that parents are insensitive to the educational qualities of schools and very sensitive to the socioeconomic status of a school's students -- and that that SES determined as much of students' "outcomes" as educational quality.)
My father, born in 1928, taught us to acquire core skills of reading, writing, and analysis. He was (unfashionably) very skeptical of the idea that employers had any loyalty to their employees.
This prepared me for success in many ways.
People are nostalgic for when they were young. OTOH, when you are young, you have parents laboring to provide you with all sorts of resources and comforts. Adult life has a much worse benefit/labor balance!
This is the first technological revolution, I think, where the people behind it explicitly are promising that the best case scenario is mass economic upheaval the likes of which we haven’t seen in nearly two centuries and possible mass joblessness, starvation, etc…and the worst case scenario is the extinction or brutal subjugation of all life on earth. Any wonder people with eyes to see are anxious and pessimistic? We should announce a moratorium on AI and launch anyone who refuses to comply into the sun.
Going back to the Luddites, technological revolutions have always threatened "technological unemployment". Probably the illuminating question is why in the late 1900s was the fear of technological unemployment so muted?
I don’t think it actually was muted. Or even imagined. There’s a nice visualization here:
https://thesocietypages.org/socimages/2015/03/05/the-most-common-job-in-every-state-1978-2014/
A lot of secretaries were put out of work by the PC. It also created lots of new jobs. But the PC wasn’t being explicitly touted as a tool to replace all human labor, both white and blue collar. When you look at the thinking of top AI CEOs and thought leaders, you see an increasingly terrifying philosophy. “We have to build it because if we don’t someone else will” and “we’re not sure if it will result in human serfdom under AI CEO overlords because they capture all the efficiency gains, or maybe human serfdom under AI tools that go rogue” and “maybe humans should just die out” and “maybe humans should all upload their consciousness to the cloud”.
Nobody voted for these leaders, nobody is asking to have a world built where humans must cease to exist so AI software can rule the universe. Most humans are not sanguine about the prospect of the extinction of the species. The people in and around this space are so exceedingly weird that it’s hard to even know what to say. But people like Musk and Altman are about the last people you should ever trust to hold the fate of all humanity in their grubby hands.
Whether they're weird or whether we voted for them is irrelevant. The world is a big place and many, many people are going to develop AIs even if everybody in e.g. the US decides not to. At least the people doing it are warning us that it might cause problems.
New technologies have always been like this, like viruses released into the economy.