162 Comments

I think there are two very different definitions of techno-optimism you and Marc are using. You're talking about the need to invest in new technologies and that when humanity has problems, the only way forward is new technology and innovation, and that all of our society should be focused on creation and exploration, not stagnation. Okay, I'm on board with that. Just tell me what to Venmo you for the ticket for that passage. You definitely have a much healthier, more inclusive vision.

However, that's not at all the techno-optimism of Silicon Valley which is being ridiculed by those of us in the technical field, or by the tech press familiar with people like Andreessen and Musk. In their minds, a perfect world is run by technology they own and maintained by their indentured servants.

For example, on the one hand, Musk advocated settling Mars. It would be a very difficult but interesting and, in the long term, beneficial project. But he callously and casually says "yeah, a lot of people will die" and wants to offer loans for the flight to the red planet on his BFRs, loans which you'd pay off working for him. His version of techno-optimism is that technology will make life better. For him and his friends. If it just so happens to benefit the peasants too, cool beans. Meanwhile he's off to promote neo-Nazi blue checks and tweet right wing outrage bait and conspiracy theories.

Likewise, Andreessen's worship of AI is rooted in Singularitarianism and Longtermism; he just hides it behind the "e/acc" banner in the same way Scientology hides Xenu behind offers to help with time management skills and annoying tics. He demands absolutely no controls, safeguards, or even critical discussions of AI because he thinks that AI emulates how humans think (it does not), can be infinitely advanced (it cannot), and will at some magical point reach parity with humans in every single possible dimension (for stupid reasons involving a guy named Nick Bostrom), and that he will then be able to upload his mind to a computer and live forever as a digital oligarch (also for stupid reasons, these ones involving a guy named Ray Kurzweil).

If we say "wait, hold on, how can we train AI to benefit the world better," or "let's figure out IP laws around training vast AI models," he has a conniption because according to e/acc tenets, we are violating the march of progress because any interference with technology today might mean that a critical new technology or model isn't invented in the future, like the evil Butterfly Effect. That's the crux of e/acc in the end. The future matters. So much so that it's okay to sacrifice today and rely on humanity pulling itself from the brink an apocalyptic crisis even though it's stupid, expensive, and will cost many lives.

Finally, I feel like I have to point out that exponentially growing GDPs are great, but vast amounts of that wealth have ended up in very few hands. Unless people are going to be able to fully participate in the future and benefit from technology that makes their lives legitimately easier, instead of working two jobs and a side hustle to maybe barely afford rent while buying a home is a deranged fantasy they don't even allow themselves to entertain anymore... Well, I've seen those movies. They don't end well.

But Marc is insisting that GDP growth can be infinite and so are the benefits, so we'd better get on his path of infinite trickle-down or he'll keep raging that the poors are spoiling his dream future of digital godhood. Singularitarians believe that technological advancement is the only thing that matters because it's exponential, and now they also think that exponential development curve is tied to GDP. It's those broken, vicious, and inhuman ideas, along with his entitled, self-righteous tone, that are being picked apart and ridiculed. If he were genuinely interested in uplifting humanity instead of having a tantrum after a crypto blowout, we would take him more seriously.

Meanwhile, I read Andreessen's screed and imagine us following his path down into the cramped, sweaty, natalist fleshpit of The World Within.


> that's not at all the techno-optimism of Silicon Valley which is being ridiculed by those of us in the technical field

Who are you to speak for the entire "technical field"? I'm in the "technical field" and thought his manifesto was great, as did many other people I know. The tech giants of today were originally created by libertarian-minded techno optimists, so this is hardly a surprise.

> [Musk's] version of techno-optimism is that technology will make life better. For him and his friends.

No, I don't think it is. His motivations for going to Mars seem to have changed over time. Part of it is clearly just needing big, audacious goals, but originally he talked about wanting to die of old age on Mars himself. He really seemed to mean it, too; there's no specific commercial reason to invest so much in Mars-related tech otherwise. But age changes people. Now he realizes that too many people depend on him here on Earth, and he keeps racking up more and more children. Going to Mars would have seemed romantic and bold when he was young; now it seems irresponsible, no different to the way huge risks are usually taken by the young who have nothing to lose.

BTW you also lost me at "neo-Nazi blue checks". Nobody who uses that sort of language is thinking for themselves, just repeating memes in the hope of staying in-tribe. The "ridicule" you claim Marc is getting is all just leftists arguing with straw men and doesn't land, because it's so hard to dispute what he's saying via good faith debate.


You lost me with the "neo-Nazi blue checks".


Why is that? It’s a documented fact. Out and proud Nazis got blue checks on Twitter after Musk took over. https://www.vice.com/en/article/5d3495/twitter-blue-checks-neo-nazis


The verifications were removed, so yes, you still lost me on the "neo-Nazi blue checks".


Yes, they were removed after stories were run about it. They were then purchased by an account which refers to dead Hamas terrorists as “martyrs,” and which was highly recommended by Musk. https://www.telegraph.co.uk/business/2023/10/09/elon-musk-anti-semitism-twitter-israel-hamas-war/

There is also a QAnon account which posted horrific child porn with no real consequences and was paid five figures by Musk. https://www.forbes.com/sites/conormurray/2023/07/27/twitter-suspends-then-unsuspends-popular-right-wing-user-who-tweeted-image-of-child-sexual-abuse/

Are all these well documented problems just too uncomfortable for you to acknowledge so you’re going to complain about me pointing them out when talking about how technology is currently being misused?


Nope. All I see here is an irrational hatred of Elon Musk. You still lost me at the "neo-Nazi blue checks".


Irrational hatred would mean that I couldn't point to a single bad thing he did, just argue about vibes. I'm literally throwing out examples of bad things he did and enabled which makes me not like him, and you're very deliberately and obviously trying to ignore them or sweep them under the rug. One of us is being irrational, and it's not me.


Greg is posting links to multiple examples of his point, and you're just ignoring them. Obviously, you are the one who irrationally loves Musk for whatever reason.

For me, Musk's anti-Ukraine/pro-Russia stance is probably the best reason to dislike him. Not only does he promote Russian propaganda, but he even interfered with a Ukrainian operation, just because he somehow got the idea that if Ukraine attacked Crimea, Putin would respond with nukes. Of course, since then, Ukraine has attacked Crimea several times.


X (formerly known as Twitter) is going to be a fascinating business school case 10-20 years from now. Twitter was a very cool concept that became massively used but struggled to generate significant profits. There was a lot of effort to make it reasonably accurate and relevant. I think the stock market gave it much too high a valuation because its formula is very difficult to monetize into the massive revenue needed to justify that valuation. I always regarded it as more like Wikipedia than Facebook.

Musk bought it and largely dismantled its efforts to verify and validate content and posters. I think it is now on borrowed time, relying on its past glory and the inertia of people using it. That will fade as it becomes an unmoderated chat room that serious people stay away from. Libertarian content moderation becomes a form of Gresham's Law where the trolls take over. I would be surprised if it is worth even a couple of billion a couple of years from now given its current trajectory. That would mean Musk would need to buy out the lenders in order to keep the lights on. He would probably need to sell Tesla shares to make that happen.


The structure of your rhetoric is curious to me. Your Substack makes it clear that you aren't particularly opposed to the march towards a cyborg civilization. Your beef with Andreessen comes across as political infighting among the engineers of posthumanity - his gung-ho technolibertarianism versus your cynical technoprogressivism. The ideology of a boss, versus the ideology of a worker, perhaps?

In any case, what interests me is that, in drawing a line between yourself and Andreessen, you feel a need to deny that AI is anywhere near surpassing humanity, while claiming that he wants to ride his virtuous cycle of technology, optimism, markets, and classical liberalism, all the way to uploaded digital godhood. It's curious to me that you can write about global warming rendering half the earth uninhabitable by the end of the century, or about retiring to live on Mars as a centenarian cyborg, but you think AI is going to just hover for decades at the level of ChatGPT.


> Your Substack makes it clear that you aren't particularly opposed to the march towards a cyborg civilization.

Not at all. I don't think it will be a smooth transition, I don't think it will be a utopia when it's all said and done, and I do think it will take a very long time and will need to be done with a great deal of, yes, trust and safety. But I do think it's more or less inevitable given what we ultimately want to accomplish, and that the road to it will significantly help the average, typical human do something more meaningful.

> The ideology of a boss, versus the ideology of a worker, perhaps?

No, the ideology of an oligarch who doesn't care what happens to the little people vs. the concern of someone who doesn't want to make tools that I know for a fact will cause a lot more problems than they solve, or that may not even be possible to create just because said oligarch commanded it to be so.

Consider that in practice, Marc's bloviating ends up in the form of products people don't need or want, and which don't help them solve any real day-to-day concerns. Just look at the dude's track record. Bored Ape NFTs and places for angry people to yell at each other for clout aren't exactly the next penicillin or gene-editing treatment. And not to brag, but by contrast, my work has been in logistics, telecom systems, financial systems, and fraud investigation.

> you feel a need to deny that AI is anywhere near surpassing humanity

Because it's not. I work with it. It's really freaking dumb under the hood, and it takes literally millions of hours of work from poorly paid freelancers all over the world to keep what we have going by fixing its training sets between iterations. It's only good at the exact, specific problem space for which you train it and literally anything different requires a profoundly different architecture. At best, it's a handy assistant for those specific tasks you need. Trust me, I would love for it to be better and smarter. It would have saved me two days of headaches last week alone.

> while claiming that he wants to ride his virtuous cycle of technology, optimism, markets, and classical liberalism, all the way to uploaded digital godhood

Yes, because he is delusional, as I clearly indicated. He has that dream, but the odds of his achieving it are on par with both of us winning the Powerball jackpot twice in the same week.

> but you think AI is going to just hover for decades at the level of ChatGPT

It won't. It will absolutely improve. But that means it will be a much better assistant that requires far less retraining and hands-on human time to stay accurate. It won't mean that it will be smarter than humans, because the leap from an n-dimensional probability matrix of words and their sequencing in a sentence to human intelligence is like expecting your oven to also do your taxes and represent you in court.
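To make that concrete, here's a deliberately tiny sketch of what "a probability matrix of words and their sequencing" amounts to; the three-word vocabulary and the hand-written probabilities are made up for illustration, not taken from any real model:

```python
import random

# A language model, stripped to the bone: given the tokens so far,
# pick the next token from a learned probability distribution.
# The table below is hand-written purely for illustration.
NEXT_TOKEN_PROBS = {
    ("the",): {"oven": 0.4, "cat": 0.4, "tax": 0.2},
    ("the", "oven"): {"bakes": 0.7, "argues": 0.2, "files": 0.1},
}

def sample_next(context):
    """Sample the next token from the distribution for this context."""
    dist = NEXT_TOKEN_PROBS.get(tuple(context), {"<unk>": 1.0})
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

print(sample_next(["the", "oven"]))  # usually "bakes": fluent, not understanding
```

Scale that table up by a few billion parameters and you get fluency, but the core operation never stops being "what word is likely to come next."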

You would need countless models for countless tasks working together, and it's far from a settled question that this is possible in a flexible, efficient way, or even desirable past a certain context. I have actually worked on parts of this problem firsthand.

> The structure of your rhetoric is curious to me.

Just coming back to this to try to clear up the possible confusion. The stereotype of a techie is that we think everything is possible and "MOAR CODE!!!" is the answer to just about everything. But having been around the block a few times after spending two-thirds of my life working with technology, I've learned that this approach hits the point of diminishing returns pretty quickly. I'm not a "cynical technoprogressive" but a skeptical technologist.

I think technology is the ultimate solution, but it has to be genuinely helpful, and when we create it, our expectations and goals should be ambitious but realistic and genuinely human-centric. Having grown up in the Soviet Union and the oligarch-fueled chaos that followed, I've seen exactly how unsustainable and dangerous it is to ignore the needs of millions for the enrichment and aggrandizement of a few powerful people with a God complex.


The discussion here moved on long ago, but I have been wanting to come back and make a few comments. I am going to put aside the debate about the relative merits and demerits of your philosophy and Andreessen's philosophy, just to focus on what I believe to be the most important thing: the time to superhuman AI. I think you, and maybe Andreessen too, are radically underestimating the kind of advance that large language models represent. Through blind luck, the human race has hit upon a low-effort formula for creating general-purpose AI; we're energetically tinkering with it to make it more powerful, like a bunch of apes from "2001"; and the logical extrapolation is that a few years from now, we won't be in charge any more. I don't know what kind of engineering or problem-solving you were trying to do last month, but I just don't see what kind of human abilities are still out of reach. I can already talk to Bing about anything from Derrida to Palestinian politics to civil engineering, and it is an intelligent interlocutor. Make it just a little more intelligent, and it won't need me at all.


I’m going to stop you right there. LLMs are not general-purpose machines and never will be. They’re specifically designed and customized for handling language, with context cues and tokenizers that generate text outputs. They’re like a much more advanced talking toy. Try to use an LLM architecture for math and it’ll crash and burn right away, because ANNs are a much better fit for mathematics. It’s already been tried. https://spectrum.ieee.org/large-language-models-math


That link is a year old. The biggest LLMs are now better at math than most human beings. Empower them with APIs and the sky is the limit: they can have memory, reflect on their own output, modify themselves. Maybe some other architecture (Gemini?) will overtake them, but either way, time looks short.

I actually don't understand what you're waiting for. What capability in an AI would make you sit up and go, hey, things are much further along than I realized?


> The biggest LLMs are now better at math than most human beings

You just made that up out of thin air. Mathematics literally forbids this because probability matrices are only useful for specific statistical problems requiring probability matrices, and for outputting solutions, tokenizers are actually a hindrance. We already have amazing math AIs based on an architecture that is specific and optimal for them, just like we have models for object detection based on a third, very different architecture.

> they can have memory

So do pocket calculators.

> reflect on their own output

They treat questions about their own output as a prompt, yes. They're not sitting there going "why do I think that?" but more like "hmm, I need to try and elaborate on this because that's what my activation prompt says." Which is fine. Just don't confuse it with motivated introspection.

> modify themselves

Yes, we told them to do that by creating recursive training loops that incorporate user input and reactions in further training.

> What capability in an AI would make you sit up and go, hey, things are much further along than I realized?

I'm glad you're thrilled with the equivalents of advertising brochures, but you're not the one who needs to train, debug, retrain, and update these models day in, day out. It's like the skit about a man whose overwhelmed wife complains that she comes home after work and spends all night picking up after him, and he tells her he's solved the problem of cleaning the house because "everything I leave on this coffee table just vanishes overnight, like magic!"

You don't see the millions of lines of code, the thousands of coders, the millions of overseas contractors constantly correcting and re-training AIs, or the experts struggling to overcome technical plateaus. You play with the experience they specifically trained, streamlined, and debugged for naive consumers, spending tens of millions to keep it running smoothly, and you go "Wow! This thing is amazing!"

And I also don't understand your worship of LLMs. Just because they can "talk" to you, you've crowned them The Way and The Light of all AI, which is just nonsense. AI comes in many forms and approaches for many different tasks. We have RNNs for prompt analysis and time-series predictions. We have ANNs for math, CNNs for image recognition, perceptrons for statistics, GANs for art and image generation, and yes, LLMs for language.

The trick is to combine all their strengths into something greater than the sum of its parts. That's how your brain works. If it's good enough for your brain to have specialized cortices that talk to each other, why is that not good enough for AI? Why do you have to insist that a model you really like has achieved magical breakthroughs in a year, based on the very authoritative source of... you said so? If you had taken the tack of asking how to combine these different models into something new -- something I've actually experimented with and researched -- then we'd be having a different conversation.
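If it helps, the kind of composition I'm talking about looks roughly like this; the three stand-in "specialists" and the keyword router are hypothetical placeholders, not any real product's architecture:

```python
# Crude sketch of specialized models behind a dispatcher, analogous to
# cortices talking to each other. Every name here is a placeholder.
def math_engine(task: str) -> str:
    return f"[math model] handling: {task}"

def vision_model(task: str) -> str:
    return f"[vision model] handling: {task}"

def language_model(task: str) -> str:
    return f"[language model] handling: {task}"

ROUTES = {"integrate": math_engine, "photo": vision_model}

def dispatch(task: str) -> str:
    """Route each task to the specialist it matches; default to language."""
    for keyword, specialist in ROUTES.items():
        if keyword in task.lower():
            return specialist(task)
    return language_model(task)

print(dispatch("integrate x^2 from 0 to 3"))
print(dispatch("is there a dog in this photo?"))
print(dispatch("draft a polite reply to this email"))
```

The hard, unsolved part is the routing and the glue, not any one model, which is exactly why "one chatbot to rule them all" is the wrong frame.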

Why do you need AI to be this magical black box that will overtake humanity so badly that you refuse to listen to people who actually work with it when they say that's not the intent and probably not possible, and ignore all their points, reasons, and actual code they've written? Do you really think you know more than comp sci experts because you played with ChatGPT a few times and thought "boy, this thing sure is nifty!"?

Comment removed

Aren't they? The third (I think?) plot shows how GDP per capita has increased worldwide. Other parts of the world aren't nearly as well off as us, but they're a lot better off than they were fifty or a hundred or two hundred years ago.

Even the proverbial "starving children in Africa" barely exist any more.


That’s linearizing, which is not a great measure for near-chaotic systems. See Scale by Geoffrey West.


Probably because the choices weren't really for us; they were thrown at us, ignoring the Paradox of Choice. Give us seven or ten options? We can be confidently happy about the decisions we make. Give us 500 options? We'll always second-guess and struggle to make an informed choice, especially when many of the options are very similar.


Of course they are. Go to any poor country and marvel at the prevalence of smartphones or satellite TVs.


I'm totally on board with techno-optimism. Capitalism and technological progress have been the two biggest drivers of a better standard of life for humanity (although I have to note that capitalism is pretty bad at distributing the gains).

As a society, it seems like we're still pretty immature in our thinking about the risks and externalities of new technology. In my opinion, too many people are either going full booster like Andreessen and ignoring any downsides, or going full naysayer and claiming that a modest amount of misinformation on social media is one of society's major ills. People seem to have the hardest time weighing costs against benefits. We also seem to have lost sight of material progress in particular. The most important thing we can do for people is to make sure they have food, housing, healthcare, and so on. But we keep taking our eye off the ball and worrying about things like relative status.


“Marc lists various “enemies” — not people, but ideas and institutions that he thinks restrict the growth of technology.”

Andreessen is just another whiny tech-bro. Apparently, he forgets that taxpayers subsidized most of his education at a state land-grant university. And that’s when he’s not baiting-and-switching naive investors with shitcoins. He’s far from a candidate for canonization. In fact, there are people much smarter and more thoughtful than Andreessen who have worked for institutions such as DARPA and whose names we will never know. These are people who easily could have gone into private enterprise and earned much more. Personally, I’m weary of the worship of tech-bro culture. I, too, am a techno-optimist, but I see Andreessen, Musk, Dorsey, et alia as self-referential and giving themselves far more credit than they deserve. Being a tech billionaire doesn’t give one license to expound as an expert on everything under the sun. The world and universe are much bigger and more important than the insulated billionaires of Silicon Valley.


Whatever taxpayer subsidies his education supposedly used have been paid back a million times over by the taxes he and his companies have paid. That’s more than can be said for all the whiny ex-students who want Biden to bail them out of their unpaid loans after squandering their education and not even making enough to pay off their own loans, let alone income taxes.


I had more fun looking at the clip art you chose for today's post than I had trying to read even one paragraph of Andreessen's screed. I fall into the Ed Zitron camp (see this week's Substack where he skewers Andreessen) that bloviating posts are just not worth my time these days. Clearly anyone can write on the Internet and even maintain a website (I've done both over the years), but they have to be judged by the content, and in this case there is nothing there other than some Randian platitudes and general complaints. Folks should really pay more attention to Molly White if they want to know where the future of Web3 and assorted other scams is heading.

author

I don't know, I love bloviating, so... 😅


Emi Kusano and Marc Andreessen are both great. I agree with Molly White that Web 3.0 and NFTs are bullshit, but Ed Zitron is a PR hack for trash rag Business Insider who delivers zero value add.

Oct 21, 2023 · edited Oct 21, 2023 · Liked by Noah Smith

So happy you inserted that paragraph about animals. Until I reached it I was getting worried that my brain was going to get stuck in an endless loop: “What about animals? What about animals? What about animals…”

Also, why are we so cheap and restrictive on medical research? There are a lot of medical issues to improve and resolve. Like only now has the US government started to get serious about osteoarthritis https://www.businesswire.com/news/home/20230626928821/en/AngryArthritis-Founder-and-Osteoarthritis-Patient-Steve-O%E2%80%99Keeffe-Applauds-ARPA-H-Moonshot-to-Find-a-Cure-as-He-Works-to-Eliminate-Joint-Replacements

There should be an international treaty on medical research that pushes governments to spend more. The mRNA tech shows what can be done even in a short period of time when you have money and focus.


There should be an international treaty that allows drugs and procedures approved in any OECD nation to be used in all the others; this would eliminate many lengthy and expensive redundant trials and make way for trials of more new drugs.


This could be a disaster or a great advancement. How many medications or procedures are no longer used because they cause more harm than good? Country-by-country approval delays improvements in life, but it also sometimes delays harm and wasted money.

Oct 21, 2023 · Liked by Noah Smith

The uterine replicator seems just around the corner, technologically. Birth rates are gonna get real weird when rich people in developed countries can decant platoons of babies.

Oct 21, 2023 · edited Oct 21, 2023

I don't think artificial uteruses are the limiting factor in rich-person reproduction, at all. Upper-class and upper-middle-class people do not put social value or status on having lots of kids; if anything having a lot of kids is low status. Then there's the cost of having all those kids, which can be prohibitive when the parents are trying to give each one a maximal shot at the upper class. EDIT: I only know about US culture and can't speak for elsewhere.


This correlation doesn't seem to be holding any longer. High status, high income families are starting to show higher fertility than middle income families in the US. Here's one reference, although I'm not sure it's the best one. I think Noah or maybe Matt Yglesias has written about this before. https://www.imf.org/en/Publications/fandd/issues/Series/Analytical-Series/new-economics-of-fertility-doepke-hannusch-kindermann-tertilt


there's also no point in decanting platoons of babies if you have no platoons of parents to love and care for each one of them.


Platoons of childless lower class adults forced by economic circumstance into being full-time hired caregivers?


Oh there's an idea. Can't buy love though.


Just around the corner as in 20-30 years from now (which we know sometimes means much longer)? Since it's a medical technology (meaning FDA approval), high stakes, and borderline taboo, I don't see how this would get adopted fast. What do you think? Am I thinking about it wrong?


More like 200-500 years. This technology would be so hard to build, and currently has so little demand, that we shouldn't expect it for a very long time.


It's not surprising that there's no demand for a product that doesn't exist. Since having a child involves a lot of pain, inconvenience, and even a risk of death, I think there would be a lot of demand.


You definitely might be right that it would be popular if it existed. It often works out that way.

But lots of technologies were in huge demand before they existed. We've sought cures for most diseases for as long as we've known they existed, and the drugs, vaccines, etc that we now use against them were often invented hundreds of years later.

So that tangible current demand is what I'm saying is missing.

In addition to that I don't believe the tech is even nearing feasibility.

In general, if a medical tech is just around the corner, there is a company with at least $20M in funding and at least $200M of biobucks working toward clinical trials.


I appreciate the "sustainability is technology" call-out, and I think it gets at a simple but much deeper disagreement between Marc and, perhaps, most of the people pooh-poohing the manifesto. Everyone has a different definition of technology.

Let's take another "enemy" Marc calls out -- trust and safety. That's technology! In the social media context, it's a solution to "how do you scale conversation?" It sometimes has the imprimatur of government, but it's largely a free-market response. Meta has paid moderators. Reddit has volunteers and more granular communities. X is doing a free-for-all, but you have to pay money. Bluesky and the Fediverse are trying more distributed approaches. Some will fail. Some will find product-market fit. Some are more organizational or business methods than hard tech. Some are clever techniques like shadow banning. And some are actual hard tech using machine learning.

Marc doesn't think that's real tech because he doesn't think trust and safety solve real problems. But if you buy that things like disinfo, polarization, harassment, spam, and social media induced depression are both real and bad, then attempts to solve this are as real a "technology" as attempts to cure disease or male pattern baldness or whatever else bothers people these days.

Oct 21, 2023 · Liked by Noah Smith

This.

Effective acceleration of technology. Particularly AI.

Don't be a decel.

Oct 20, 2023 · Liked by Noah Smith

What metric would you propose for judging whether innovation is accelerating or decelerating?

In other words, how can I even tell whether the optimists or the pessimists are correct about the rate at which fruit (low-hanging or not) is being picked?

author

Well, TFP is one metric, though it's also affected by other things besides technology.

Oct 21, 2023 · Liked by Noah Smith

Yeah, and it seems like it would be noisy, since it's obtained, I think, by subtracting other data. Also the only TFP data I've seen that goes back to the 19th century is in Gordon's book, and his 50-year bins make me suspicious—even though I find his qualitative narrative pretty convincing. Your narrative sounds convincing too, but I can't tell whether you actually disagree with Gordon on any testable claim. Do you?
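For what it's worth, that is roughly how it's computed: in standard growth accounting, TFP is backed out as a residual. A toy sketch below uses a Cobb-Douglas form with made-up numbers, just to show where the residual comes from:

```python
# Solow-style growth accounting: TFP is what's left of output after
# capital and labor are accounted for. All numbers are made up.
def tfp(output: float, capital: float, labor: float, alpha: float = 0.3) -> float:
    """Back out A from Y = A * K**alpha * L**(1 - alpha)."""
    return output / (capital ** alpha * labor ** (1 - alpha))

print(tfp(output=100.0, capital=300.0, labor=50.0))
# Anything that moves Y but isn't measured K or L (technology, but also
# utilization, composition, and measurement error) lands in A, which is
# part of why the series is noisy.
```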


But TFP is the best objective measure I know of. Life expectancy seems like another good one. Gordon's idea is to pick out some technologies and decide he's more impressed by the toilet than Facebook or whatever. But those things aren't objectively comparable, nor can he even begin to make a complete list.


Well, to give Gordon's argument a real-world example: indoor plumbing and water treatment stopped cholera and typhoid fever from killing people, and increased life expectancy. I don't think Facebook is comparable in that way.


Life expectancy is definitely a good measure of progress, which is why I mentioned it above. A measure of overall burden of disease is also good. Same with labor saved, median $ consumed, or GDP.

If Gordon mentioned things like the toilet merely as examples of these phenomena, I'd be with him. His book predates the episode where covid vaccines saved millions of lives, but now we have it as an example too.

But Gordon seems also to be saying that modern technologies are less impressive than older ones. Now, to me, covid vaccines are more impressive than the toilet, and I say that partly because such immense modern skill was needed to create them. But I don't think the two can really be compared in their impressiveness, because they are unlike each other, despite both having health benefits.

Nor do I even think we have a good definition of what we think makes technologies impressive.

Certainly fun to read about each one in detail, though.


I always thought the argument was that technologies like vaccines and inoculation (which go all the way back to Edward Jenner's experiments with cowpox, resulted in the world's first vaccine, and eventually led to the elimination of smallpox) are less important to humanity than social media or email. Now, the question is what defines importance? Here, it seems it'd be defined as things that would be an inconvenience to live without, versus things that would be fatal to live without. People could live without cellphones or email, but they'd die without radiation therapy, antibiotics, or clean drinking water.


I think it needs to be much more fundamental, i.e., energy production, etc.

I think the following are good metrics because there’s a component of time, so you can measure whether you’re succeeding or not (a quick CAGR sanity check is sketched after the list). My top contenders are:

1. Henry Adams Curve or 2% CAGR

2. Kurzweil curve

3. Birth rate >2.5

4. Lifespan, 1-2% CAGR

5. Human freedom index (or something like it)
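And the CAGR arithmetic mentioned above, with figures invented purely for illustration:

```python
# Compound annual growth rate -- the "component of time" that makes
# these targets checkable. All figures below are made up.
def cagr(start: float, end: float, years: float) -> float:
    """Annualized growth rate between two measurements."""
    return (end / start) ** (1.0 / years) - 1.0

# Did a series roughly hold the ~2% pace of the Henry Adams curve?
print(f"{cagr(start=100.0, end=148.6, years=20):.2%}")  # about 2.00% per year
```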


Good article overall - BUT the key thing missing in my opinion is any discussion of the negative externalities of techno-optimism. Yes, with the "right policies", that is taken care of - but realistically in our world, that simply doesn't happen. Some technologies make only minimal sense given the externalities.

Noah's seed example is a good one - the externality of depleting the ability to grow more trees is critical. In this case, his example is a small group where that effect is felt by the people who make the technology change. But in the real world, the externality is often an effect on others (pollution near industrial facilities, etc., and of course warming). So, the feedback loop is missing.

The hard problem isn't whether technology can advance or even be generally useful, but how to craft a system where those externalities fit into the equation of what is useful, feasible, etc.... Technology is likely to keep advancing no matter what (positive), but "normative" only happens to the overall population if the externalities really are dealt with.


Externalities are rarely ignored. Even in the silly seed example, I can’t imagine that not a single person would object to the plan of eating all the seed corn. Civilization would have ended long ago if everyone were that blindly foolish. Pollution is way down from 50 years ago (go check out LA), and even more down from 150 years ago. All the solar, wind, and EVs are in response to rising CO2 levels - or do you think it’s simply not happening?


Actually, externalities are still ignored frequently. Lots of folks object to their externality not being paid attention to - and it can take a lot of people together to actually make something happen (Noah's "right policies"). Pollution is down because a ton of people demanded it eventually. But there are still bad sources of pollution being built all the time (especially in poorer neighborhoods where there is less power to affect it).

So, yes, when enough people object, the externality can be dealt with - but it isn't inherent in society or the system. Which is my point. Normative "techno-optimism" in its naive form assumes that technology causes an overall society benefit - but that benefit is often at the cost to portions of the society that have less political say. And many technologists ignore that part of the equation since it doesn't affect them directly.


The free market provides incentives to privatize gains and socialize losses, or in this case to drive “positive internalities” at the expense of negative externalities. Without regulation this is an unstable system, leading to excessive swings and ultimately violent uprising and revolution. The trick is to find the right balance of regulation, limiting the pollution without stifling further innovation and development. To argue for the technolibertarian extreme of no regulation is not a sustainable path and seems a bit naive.

While largely agreeing with Andreessen’s manifesto, the strict rejection of regulation and of UBI seems strange to me, given that his main argument is that technology exists for the betterment of all humans.

Oct 20, 2023 · Liked by Noah Smith

Techno-optimism is thus much more than an argument about the institutions of today or the resource allocations of today. It’s a faith in humanity — and all sentient beings — propelling ourselves forward into the infinite tomorrow.

Wow. A Vision for Humans.


Nice article for the weekend


Good luck. What's missing is ecology and the interdependence of species. You've left out the rest of life on the planet and the conditions it creates that allow us to survive. Humans are not smart enough to get a techno world right. We are, in fact, messing up the future with technologies that damage ecosystems. In other words, I feel you are thinking about it wrong, leaving out the rest of creation.


The planet and the rest of creation do not create conditions that allow us to survive; they don’t care about us or anything. Without technology we would be barely struggling to survive at a very low population level, as we were thousands of years ago. And there are many more varieties and numbers of plants and animals in the world since humans started domesticating and breeding them.


"And there are many more varieties and numbers of plants and animals in the world since humans started domesticating and breeding them."

I cannot believe I am responding to somebody wrong on the internet, but holy fuck bud is this bit as insane as it is ignorant.

No, absolutely there are not "many more varieties and numbers of plants and animals in the world since humans started domesticating and breeding". This is innumerate and cannot be supported by any data. For anybody who stumbles upon this comment, we are experiencing mass extinction, a massive, overwhelming reduction in the number of species that will take millions of years to recover, not an expansion: https://www.pnas.org/doi/10.1073/pnas.2306987120

Our current use of technology is anti-life, even if it is "pro-human" in a narrow, short-term sense. The absolute biomass on our planet is going DOWN, not up: https://www.nature.com/articles/s41586-020-3010-5

Again, sustainability itself is a technology. Promoting life is a technology. Choices are tech. And currently, our tech is not being used optimally. Quite literally, the planet and the rest of creation created the conditions that allowed us to begin this technological journey, they are the substrate that we build upon. And they are finite. Really worrying that this basic fact could be missed as a result of "techno-optimism".

Oct 21, 2023 · Liked by Noah Smith

Noah, you've just made the case for teaching Human Ecology K-14!!!! Yea!!! Without first instilling the rationale for Active/Normative, the risks will be high. Human Ecology first gels the concept of a healthy society and then develops it one on one so it can be experienced by choice in each student's life. Learning it early is key; so is making it experiential -- it then lasts a lifetime. When you can cook a meal responsibly and enjoy sharing it, you get the lesson. Human Ecology sets the table for a really big meal, everyone invited! If it were delivered via our public schools, both humans and animals, and their ecosystems, might survive.


Related:

https://www.sciencealert.com/we-may-be-witnessing-the-death-of-nature-expert-warns

I think that no matter how optimistic one is about what "could" happen, we need to recognize that there are competing processes in our world.

If the destructive processes are actually occurring at a faster pace than the positive innovations... what role for optimism?


Feels like Stuart Kauffman covered a lot of this 20 years ago - and more elegantly than any of these constructs - with his theory of the adjacent possible, and now you even have Joseph Henrich effectively plugging Kauffman tightly into the most comprehensive and well-researched picture of western cultural evolution with WEIRD people... Is it that people haven't read Kauffman or Henrich? Because if technology is a function of a cultural evolutionary process that grows in complexity and richness as agents and entities multiply and accumulate, and which can't be predicted beyond the inevitability of this accumulation, then the idea of having an opinion on technology is very much like having an opinion on evolution... you can have your opinion, but there is a fundamental process at work which is independent of anyone's opinions... or something like that. You can seek to understand the process - but the whole idea of evaluating and manipulating it and having opinions about it is logically like thinking one can command the wind.


Right. Even if we no longer believe in a creationist deity, we still tend to ‘see’ teleonomy in nature, talking about technology as if it is a causal entity creating this anthropocentric notion of ‘progress.’ Of course, this mental top-downism is itself a basin of attraction on the human complexity landscape, so who knows?, maybe technology is what the ancients were misinterpreting as God.

On the pessimistic side, in this evolutionary view the recent elaboration of human technology is, just as the over-expanded human population itself, a phenomenon of the dissipation of a one-off steep fossil energy gradient. Noah mentions solar and nuclear fusion in one sentence, as if technology will cross the energy problem bridge when it gets to it. Some degrowthers call this energy blindness; technological advancement disappears instantly when there is no robust energy gradient driving entropy into the adjacent possible.


careful with entropy... it seems to help but is never quite what it seems. Remember, Schrödinger had to make the caveat that for some reason entropy seemed to be locally violated by living systems... that's one that always gets me... that and spin glasses. But one needs to be careful with some of these laws. Also -- if you dig into what petrochemicals are, their prevalence in any catch near the surface of the earth's crust suggests that strangely enough they are a biological by-product of the biome in the earth's crust -- an area much larger than our surface biome and fed by geothermal energy.

when we dig into things and ask why, nothing is ever quite as it seemed at first. it ends up being surprising and strange.

so the problem with "fossil fuels" as a scientific matter isn't that we would ever run out -- the crust is huge and the biome is fed by a lot of heat and energy just like the sun feeds us a lot of heat and energy. But the strategy of taking massive amounts of carbon from the biome in the crust and releasing them into the biome on the surface of the planet isn't sustainable for the smaller biome on the surface and will cause problems if we don't figure out ways of getting the carbon back where it started.... so note the construction -- if we drill down on the science we can get to the engineering issue with simple empirical observations... but that path isn't available if one is trapped in bad assumptions and inaccurate social/political constructions of the problem as a depleting resource. Depletion isn't the problem - the transfer of carbon is... and then the thing is the only way (and a lot of the folks working on it know this) the only way to get the carbon back in the crust or at least out of the atmosphere is to collapse the price of energy and the only way to do that is with forms of nuclear.

Now, with nuclear there is a lot that's interesting -- but if we just observe that for systems experiencing exponential growth or improvement, one can dismiss worrying about the existing state of a technology and focus on the technologies with the highest potential growth curve -- because the faster growth will swamp everything else given a few years... (Jim Keller calls this Norm Jouppi's law) and this is where nuclear fusion is really interesting, since it appears to be establishing a more friendly and lightweight regulatory structure than fission ever had. So one has to make a leap and assume that the basic science work that the projects have done is right - that their reactors are technically possible, but they have engineering work to do... I think Avalanche Energy is a fascinating one to check out -- playing with form factor.

Anyway -- be careful with entropy and the idea of fossil fuels and a lot of things that seem to be clear or true but when you look more closely are quite complex and surprising, literally. Metaphorically, I guess a scientific principle can be assigned whatever social significance is useful for a variety of types of signalling... but then that's about signalling which is about coordinating group behavior within cooperative groups...

Overall, it's fun if you get the pieces of the frameworks I am playing with, and actually quite liberating.


the oil as a biological by-product is from Gold's Deep Hot Biosphere...

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC49434/

Also, since ideas come through language and culture -- the idea that any idea we have is actually our own is an illusion... when we know what we are thinking, we can footnote the sources for each piece of our arguments. The argument may be new - a new recombination -- but your "new" idea will always be a recombination of ideas circulating in your culture... so everyone exposed to the ideas you are recombining could be (and inevitably often is) simultaneously recombining the same ideas. So that's how convergent discoveries happen... and why even the new idea that you thought was your own is actually probably just an obvious and inevitable recombination of the ideas floating around your cultural context at this time....

so I like to source things... none of my ideas are actually my own, and the recombinations which will be the most useful will be the ones that make sense to the most people (occurred to, or almost occurred to, them too)... so your best new idea will be your least unique new idea... and your new idea which is actually the best and actually new -- well, no one else will understand it, so it will probably be forgotten. That's just how it works... even Dunning-Kruger catches on to this one... the problem at one end of Dunning-Kruger is that you are wrong but lack the skill to notice your errors, and on the other side, you develop exceptional skill but no one else has the expertise to appreciate it. Dave Dunning actually talks about the inevitability that this means the actual geniuses most often walk among us and are never understood... sobering thought...

this is a great Dunning talk: https://www.youtube.com/watch?v=ljkCyUXXlGE&t=2523s


Wow. That Gold article is super interesting. I’ve never seen that before.

RE: Dunning Kruger. I totally relate to inverse Dunning Kruger, where nobody understands my genius. (Just kidding!!)

As illustrated in his talk, Dunning is coming out of the academic landscape of abnormal psychology research. So, you have this huge academic literature concluding that human cognition is flawed:

“Experiment after experiment has convinced psychologists and philosophers that people make egregious mistakes in reasoning. And it is not just that people reason poorly, it is that they are systematically biased. The wheels of reason are off balance.” (The Enigma of Reason by Mercier and Sperber.)

Of course, from the evolutionary perspective it doesn’t make sense to describe the cognition of a highly successful cooperative species as flawed. Mercier and Sperber take the ‘interactionist’ approach over this ‘intellectualist’ approach. In this view, reasoning is first and foremost a social competence aimed at solidifying our reputation in our group. Now we find ourselves wondering how cognitive biases are adaptive?! This leads to speculation about emergent group-level intelligence, ‘convergent discoveries’ -- the human superorganism, if you like.


I think Kauffman's argument would be that we are creative beings in a creative universe... which is arguably a much more accurate and socially constructive assumption to begin with than most people use... and Gödel did point out that we have to start with an assumption somewhere. But technology isn't a separate causal thing -- if you get it, you see technology comes with culture, which comes with human evolution, so technology is a complex emergent process which is a sub-process of a sub-process of evolution... my core point is that to pretend to analyze technology without the context which comes from understanding that it is a cultural evolutionary process seems to miss everything that is interesting and amazing about technology, and about us as humans and our superpower of cooperating and collaborating at scale.

Here's a link to a recent Kauffman TED Talk on the Adjacent Possible for anyone curious: https://www.youtube.com/watch?v=nEtATZePGmg

And it is also worth mentioning that Steven Johnson thought enough of the theory to name his substack newsletter after it:

https://adjacentpossible.substack.com/about

So there are different threads and we haven't even gotten to Henrich who really nails things down with some very impressive and detailed academic work.

But, to get back to Gödel and the Incompleteness Theorem, which really just reminds us that to use ideas to get anywhere we must start with an assumption... I just think we have to be very careful what assumptions we make, since they end up logically defining the solution space we see. So in an effort to be cautious with my assumptions, I anchor on someone whom I think has among the best understandings of complexity and self-organizing systems and evolution. He also, at the end of the day, sees something amazing, kind of like the way Feynman sees something amazing in a flower:

https://app.reduct.video/o/4fcfbbf91c/p/3cc26f9322/share/1a349afda4194c27e441/e/e28b931ddc6c/

... or Kauffman's view of diffusion driving recombination and seeing diffusion as an innately creative force.

Note -- Noah implicitly gets it, but then doesn't -- so things like "society is technology's UX" get it, but don't... because the actual analytical construction could be that culture is the UX, and evolution is the OS, and then you have the hardware layer, the physical things below that. There is a "software (tech-culture) - OS/kernel (evolution) - hardware (biology-chemistry-physics)" stack... or something like that. The concept of humanism is redundant once you see the stack -- the stack only applies to humans.

So my simple assumption -- that one should read Kauffman and take him and Henrich seriously -- leads to a completely different view of the world. Don't know... I like to think of all of us as creative beings in a creative universe, who are just waking up to how amazing this is.

So I don't sweat techbro power moves or battles over the shape of progress and social impacts. Positive and negative externalities collapse into good and bad Darwinian pre-adaptations for the next iteration of recombination and competition/evolution. It just all looks similar to, but different from, what I am seeing described... and I just wanted to share a doorway into this point of view.


Thanks for the mention of Kauffman's work - a very underappreciated resource


if we think of ourselves as creative beings in a structurally creative universe... which is Kauffman's ultimate construction in A World Beyond Physics, then tech is inevitable, and I want to say I'm a techno-optimist or something like that, but maybe being a techno-optimist is actually just being a realist who listened to or read and understood Kauffman at some point in the last 30 years... and maybe rather than debating all this, people should just pause a beat and go back and study their Kauffman. It is complex, but that's complexity and that's life and that's our universe...;) 🙏


Speaking of Kauffman, I remember a reviewer of Kauffman’s Origins of Order remarking that “you'll be so excited you'll want to rush and explain it to someone else” but you’ll simply have to “face the fact that you are now relatively alone on a higher plane.”


Hi Matthew. Thanks for referencing Henrich's "The Weirdest People in the World". A great book and everyone should rush out and read it.
