177 Comments

Good article. One of the most aggravating things for me personally is people who look at art as political propaganda FIRST and everything else second. From the old Moral Majority of the Right to the Social Justice Warriors of the Left, these people are nothing but scolds.

100%. Harper and Stross don't even acknowledge the massive positive that SF garners by... simply being a genre of books that hundreds of millions around the world like reading. Neither does Noah! It feels like if we're measuring the impact of a book, the number of people who read and enjoyed it should factor in somewhere, somehow.

People have unrealistic expectations for literature I think, and most art forms. Sometimes I just want to read a fun book.

You can see how they look at it primarily through the lens of propaganda in how inconsistent their arguments are. "Sci-fi is not an effective means of political change... unless the change is something I disagree with politically, in which case it is the root cause of Evil Billionaires Inc.!"

It seems to me the biggest flaw in Harper's argument is that he assumes sci-fi is completely ineffective in inspiring positive change (like fixing climate change) while simultaneously assuming it is 100% effective in inspiring negative change.

I agree there's no concrete evidence that climate fiction has any real positive impact on the world. It might, but there's no way to know. But how does he then go on to claim as accepted fact that the atom bomb was directly inspired by sci-fi? If you believe the latter, then you have to at least seriously consider the former. Anything else is just blatant cherry-picking.

I especially thought it was weird to attribute the atom bomb to sci fi, when as far as I know the idea came pretty directly from the Szilard-Einstein letter, rather than anything fictional.

Having a literature professor talk about capitalism is about as useful as asking a bond trader his views on 19th century French novels. I always respect people willing to spread their wings as long as they keep their thoughts on their hobbies to themselves. They are not worth my time unless I am very, very good friends with them. Noah's views on science fiction are an exception.

A lot of these takes are fairly popular though. As Noah points out, the take on space exploration being bad because it might be an ego trip for a billionaire is one I’ve seen all over the place and heard uttered in real life.

I don’t understand the criticism of space travel. Should we also criticize ocean travel? I guess it’s wrapped up in memories and historiography of colonialism.

It's because people are answering the question: are you cheering for this outcome? They aren't cheering for the billionaire to take any kind of vacation or make any other expenditure, but those aren't the things they are being asked to evaluate.

It's a problem because people confuse this question with questions about regulation.

I think it’s fair to speculate on whether space travel will be useful or a waste of time. The various versions of space tourism that have been floated do all seem a bit dumb, and I also wonder whether a Mars colony remotely makes sense. But of course that’s not the same as considering it morally bad. Especially now that it’s private companies more than NASA, people are free to spend their money on dubious projects if they like.

Similarly, people do critique ocean travel, aka cruises. But similarly, to each their own. My point is that there’s nothing wrong with criticism either.

Yes, of course it is fine to ask that question. Indeed, I tend to think all human space travel is a pretty silly waste of time (at least for the next 1000 years). Robots are becoming more capable far faster than we are solving the challenges of keeping people alive in space, and if you want to ensure the survival of the race you'd be far better off building a nuclear-powered bunker deep undersea than colonizing an almost airless and waterless world like Mars.

However, it's still better if billionaires try to go to Mars than just hold crazy parties and build giant yachts. At least the former creates positive spin-off tech.

And that's my point. People are answering the question of whether humanity should be cheering on the billionaire's space ego trip, but they confuse that question with the question of whether it's a praiseworthy way for a rich person to entertain themselves. It's just that rockets are in the news while a billion-dollar yacht and parties are deliberately below the radar.

IMO the right answer is to say: no it's not something the rest of us should be invested in but it's one of the better ways for rich people to entertain themselves.

That's a good way to frame it - they're answering a different question.

There was also the take on "For All Mankind" last season where the tech billionaire is not obviously on an ego trip when you first meet him and has done some good things but NASA has a culture of actual honor and discipline.

As a side note, and maybe this is an obvious point to everyone else, but NASA is really a disappointment. It seems to me that fiction is doing the heavy lifting for their reputation, and only for nostalgia's sake. If anyone is to blame for billionaire supremacy in space, it's NASA and their turn into bureaucratic obscurity.

I haven't seen For All Mankind, but I know the premise is a NASA that remains front and center in politics. But even the NASA in The Martian acts a lot different than NASA in real life. It reminds me of the portrayal of the CDC's incredible competence and ability to act in the movie Contagion from 2011, when in practice the CDC seemed to be caught flat-footed by its own job in 2020.

a) The premise is not only a NASA that remains front and center in politics but a NASA that is being continuously pushed by the Soviets so that the public at large continues to pay attention

b) I am so old I remember when both Voyagers launched, but since then it has been a vicious cycle of NASA doing things of primary interest to space nerds (although JWST can only be called a rousing success) and NASA's budget being easy to sacrifice for more short-term problems.

haha well put, "things of primary interest to space nerds." I love everything about basic research, but most people (taxpayers) also like to see space giving back to them.

It would be easier to name effective bureaucracies than the ones that are dysfunctional.

You first!

I’ll start – the F.S. and BLM are usually remarkably effective and competent, especially given their level of funding.

The criticisms shown just seem to me to be another example of leftists venting their spleen against the world for failing to conform to their ludicrous theories. No, the ‘proletariat’ was not about to rise up and destroy all existing social relations. No, socialising property did not bring about greater human welfare. Yes, you were wrong about pretty much everything you fought for. When you are operating from a fundamentally fallacious viewpoint, you tend to get angry about how things actually play out. Turns out building a utopia is easier on the page than in reality.

He seems way too online. There is a certain tone where you sense you could have a deep nuanced conversation with him but online it’s just ill informed ranting and regurgitated talking points.

I mean, it’s a discussion being conducted on Twitter. It’s bound to be a mess.

Stross or Harper or Smith?

The entire thread actually does have some concessions to his opponents but they are irrelevant to this post.

Two points. First, my big problem with sci-fi is the generally mediocre to poor quality of the prose. Can’t we have a Mark Helprin, Michael Chabon, or Ruth Ozeki?

Second, I can’t understand the disconnect between a world that has improved by almost every measure as technology has advanced and the current attitudes of the Doomer generation. When I was a kid, if you wanted to buy some candy cigarettes, you just collected cans and bottles that were discarded all over the ground, while breathing the lead we were intentionally burning into the air. Health and life expectancy are up, crime and war are down. The Weather Underground isn’t blowing up buildings on a daily basis, women can have credit cards and checking accounts, and black people can vote without the cops beating them. But no one wants to bring children into today’s world.

Maybe we need more history than Sci-Fi.

Dec 29, 2023·edited Dec 29, 2023

I think many people are genuinely less happy with life and this point cannot really be discounted by saying "logically, they ought to be happier because of A, B, and C".

It makes more sense to ask "why are people unhappy, despite A, B, and C improving their lives?"

Maybe the answer is really just "because they are told things are worse when this isn't true," in which case this negative media environment is something that is worse about modern life and needs to be fixed.

I would argue, however, that while people's material needs are being met, their social and spiritual/moral needs are not. (See "Bowling Alone" by Putnam or polls showing increase in loneliness and decrease in marriage rates.) It is quite possible for me to be better off economically, but worse off socially (and I, personally, care far more about the latter).

Another possibility is that each generation comes to want more, so they are not satisfied with what they currently have, because they take it for granted. You came from a generation with many problems and are understandably elated to see them fixed. They identified new problems (real or imaginary) and are angry they are not being fixed faster.

Very commendable points. Some, like "because they are told things are worse," are starting to gather varied evidence, since the effect appears to be happening in different locations and at several levels. Not for nothing, media and its channels have become explosive in nature and reach almost everywhere.

Interestingly, despite the tremendous growth of all kinds of remote, tele-, and other communication industries these past decades, measurements find plenty of loneliness and lack of fitting in. In the past century, calling it a cultural issue amounted to calling it a psychological issue: something that would either fix itself or, if it grew out of proportion, be contained by technically prepared people called in to provide medical or political/community solutions.

In the 21st century, as time passes, the problem keeps claiming larger parts of the economy and, it seems, is even affecting politics in no small way. I wouldn't say the solutions (market products and services, or social efforts) are not working. It might very well be that the hole we were carrying from the past was simply too large, and with A, B, and C achieved, D and so on up to O have been brought to the surface. When people gain awareness and power, including as collective beings, demands grow as well.

So we might find there are even more angles from which to understand "the idea that people may feel less happy with life." This discussion ought to be deepened and enlarged, one could say, just by observing the many variables quite possibly at play.

The prose quality is pretty heterogeneous, because the genre is really a combination of authors with lots of different goals. There are lots of authors writing for the sci fi equivalent of Dan Brown's audience (Stross is one of these) and they tend to produce unremarkable or even bad prose.

Then there are the authors who write more literary sci fi--LeGuin, Butler, Gene Wolfe, the early Jonathan Lethem. These writers tend to have more of a way with words. As do the more mainstream literary authors who dabble (Atwood, Ishiguro, Faber, Houellebecq).

Just want to give a shout out to early Jonathan Lethem. Girl in Landscape is the best, and Gun with Occasional Music is also outstanding.

Seems like AI might be the best tool to elevate the prose of those who would benefit.

As far as I’ve seen so far, ChatGPT and its ilk are not elevating anyone past perhaps an undergraduate level of prose, and my expectations for worthy literature are higher than that.

Dec 30, 2023·edited Dec 30, 2023

Of course, creators would need more advanced tools, more sophisticated ways to use existing tools, or to somehow train the AI more to their liking.

No doubt a lot of AI will be used to just churn out rote, low-level stuff. And maybe stuff designed to be interpreted by other AI. But humans will always find a way to express creativity with any tool and push existing boundaries. We're barely scratching the surface still.

But if AI can help improve the grammar and spelling of people's Nextdoor posts to the undergraduate level, I'm declaring a triumph.

Ah, yep, I think in most contexts, prose will generally benefit from use of LLMs. In social media circles (and Nextdoor is really a form of social media), we might even see higher-quality arguments being advanced because people essentially come to outsource their ideas to GPT 7.5, etc.

I'm worried about the potential impact on literature, however. There's already so much dross that gets published (especially when it comes to e-books and self-publishing) that I'm worried AI will effectively help to drown out the best in favor of the cheapest. This would essentially be accelerating trends in media over the past several decades, in which much of the "content" that gets churned out is low-quality, inoffensive, and uninspired, but inexpensive and therefore profitable.

Whenever I worry about that, I remember my grandma churning through dozens of romance novels in the '80s. Or compare what I watched on TV before streaming took off.

Tech seems like it adds a new base layer to a pyramid. People who don't write or read much at all will read more (and it will not be of high quality) while the literati will create things that would otherwise have been near-impossible before. The peak is still smaller than we'd like compared to the base, but it is higher.

I agree with your second point. Many SciFi writers write well and even incorporate historical context, Neal Stephenson for example. Others like Philip K Dick have great ideas, but pretty bad prose.

If sci-fi is prone to abuse and "paints the devil on the wall," so would history. The problem isn't narrative. The problem is us.

Catastrophe can be a consequence of ignorance of history. Catastrophe can also be a consequence of interpreting history too closely.

History can lay the foundations for revanchism, or an ideology of avenging a loss or humiliation in war or colonization. Syncretic history, or the haphazard mixing of recorded facts and events along with idealism and myth, is the foundation of fascism. Nazism built a history around Germans being the descendants of the Aryan race. Problem was, there's no evidence of a people from southwest Asia who migrated and populated Europe. (If there had been, it would have been either through war or trade and left some record of their existence.) The attributes of the so-called Aryans map closely to ... characters in Wagner's operas. Hitler was a Wagner stan, and watched his operas way too closely.

The alt-right is a political movement whose foundational narrative is rooted in "Fight Club" and "The Matrix." Fans of these movies not only watched them too closely, as in built their whole worldview around their narratives, but took away the opposite message of their favorite films' creators.

Syncretic history is the foundation of virtually every flavor of ideology, from futurist anarcho-libertarianism to rote Stalinism.

Syncretic history may be in the DNA of every ideology, but not all syncretic-history ideologies are created equally.

Totalitarian ideologies are most closely identified with them, because idealized and mythologized historic narratives serve practical state goals -- namely, the prevention of potential bases of power from forming against the government and faithful devotion to the state through totemizing symbols of the state (flags, national anthems, national plants and animals, etc.).

Much of the dislike of sci-fi and techno-optimism just comes down to leftist anger at progress not being led and controlled by diversity commissars that can micromanage the future of humanity. It's fundamentally about power, and they correctly recognise that non-leftist tech billionaires are not part of their tribe and that technological innovation can make their own nonsensical bugaboos obsolete, leaving them nothing but seething resentment at the more successful which was really their primary motivation to begin with.

I offer a steelman counterpoint from Cory Doctorow, a leftist who happens to be a successful sci-fi author and columnist who delves into economics and technology.

You can find his articles tagged AI on pluralistic.net. https://pluralistic.net/tag/ai/

His leftist anger contains zero about his tribe's resentment that AI is not controlled by a diversity cabal. I mean, if that were true you could provide evidence of a diversity commissar who holds the viewpoint you purport to offer.

Meanwhile, a leftist in his own words:

What kind of bubble is AI?

https://pluralistic.net/2023/12/19/bubblenomics/#pop

Doctorow says AI is definitely a bubble and will pop, taking a lot of economic value and jobs with it. The question becomes: what residual value will remain post-bubble? What infrastructure will be in place, what useful programming skills will have been learned, and which high-value applications will survive when the investor funding tide goes back out?

https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space

The real AI fight

If you think the biggest threat is a DEI Cthulhu that will come to eat the world, Doctorow wades into the arrant absurdity of the effective altruism vs. effective accelerationism dispute. It's like Bloods vs. Crips for nerds.

Doctorow: "Very broadly speaking: the Effective Altruists are doomers, who believe that Large Language Models (AKA "spicy autocomplete") will someday become so advanced that it could wake up and annihilate or enslave the human race. To prevent this, we need to employ "AI Safety" – measures that will turn superintelligence into a servant or a partner, not an adversary."

"Contrast this with the Effective Accelerationists, who also believe that LLMs will someday become superintelligences with the potential to annihilate or enslave humanity – but they nevertheless advocate for faster AI development, with fewer "safety" measures, in order to produce an "upward spiral" in the "techno-capital machine."

If this is the dichotomy that forms the boundaries of what ideas are allowed to be debated, then the EAs and the E-ACCs are a diversity-free cleanroom at a fab. And if diversity is excluded, whether by omission or because it's deliberately muted to spare people's cognitive dissonance, no one will raise the question: if the EAs or the E-ACCs end up being proved right, who will bear the brunt of the consequences?

Supervised AI isn't

https://pluralistic.net/2023/08/23/automation-blindness/#humans-in-the-loop

The lede begins with some lulz-worthy mistakes by Microsoft's AI, which generated a food guide that recommended people try the Ottawa Food Bank ("go on an empty stomach" !!) and explained to Montreal readers what a hamburger is.

Doctorow: "the story of AI being managed by a "human in the loop" is a fantasy, because humans are neurologically incapable of maintaining vigilance in watching for rare occurrences."

He then illustrates the real-world problem with airport security theater: TSA agents are really good at intercepting water bottles, and they do spot weapons that are packed in baggage accidentally, but the rare weapon smuggled through with intent to harm will slip past them, because it's a rare event and doesn't occur as a pattern.

Doctorow doesn't litigate DEI in his articles, not because it's either/or but because it's yes/and. His concerns go beyond diversity because these problems cut across all identities and lived experiences.

Luke, there's one upside of diversity commissars being excluded from AI control. If and when AI fails, underprivileged groups will be innocent of blame and failure will be pinned on tech billionaires.

This seems like a remarkably flat view of the EA/e-acc dialogue writ large. In fact, there's just as much mistaken combining of terms here as in the Twitter thread Noah references. There are plenty of EA people who are also longtermists (dare I say, most), and there are plenty of EAs who care a lot about x-risk (40-70%), but the proportion of EAs who focus especially on x-risk as a question of AGI is much smaller than that. It's more of an active debate within that community than a monolithic paradigm.

I’m not seeing anything in Doctorow’s linked writing about the technocratic aspects of x-risk focus on AGI either. He dismisses the entire AI Safety wing as being concerned with “turning [AGI] into a servant or a partner, not an adversary”… I don’t know anyone in that sphere who would accept that as a steel man of their position. This seems to imply that many people in AI Safety believe that AGI will be a black-and-white moral question, an either/or situation in which we will either fight it or it will serve our needs; this is absurdly simplistic (AGI will not be Ultron from the MCU) and elides the real nature of the concern, which is around misalignment as a question of predicting behavior.

He also misses the more radical wing of x-risk and AI Safety, which is interested in preventing the development of AI entirely, whether that means voluntary cooperative moratoria on R&D or the physical destruction of data centers.

I am not trying to take a side here – just providing more detail to the debate. It seems like Doctorow, like many, took a much deeper interest in this space within the past 12 months and has interacted with a pretty superficial presentation of the various parties.

I do like his books.

I am also a fan of Doctorow's writing (fiction and technological/current event commentary). Thanks for bringing him up. I notice that there is some controversy regarding his work but I don't listen to the haters. I like what I like.

Same with me. I like both his polemic writing on Pluralistic as well as his sci-fi writing, as he can get his messages across well in his communications.

I agree. It’s telling that for a technical take on AI, he references Timnit Gebru, who is only famous for getting fired by Google for calling their AI racist. I made the mistake of watching her subsequent lecture on YouTube, and she’s laughable as a software developer, let alone an “AI expert.” I was almost embarrassed for her, but more for myself, for wasting the time watching it.

I suspect Harper is (subconsciously) inflating the importance of his own field. The idea that technologies are only discovered because of the motive force of Literature in human history is flattering to a social constructivist or humanities sensibility, but I don't think it's true.

To take one example: the idea that sci-fi "inspired the creation" of the atomic bomb is pretty speculative! Szilard was a big fan of HG Wells, yes. That's a very thin reed on which to hang the causal claim that the bomb *wouldn't have happened otherwise*, which is what you need to claim to blame sci-fi for it. I think physicists are smart enough to realize the use of a nuclear chain reaction for explosives, even if it might have taken a bit longer.

Yeah, that's taking credit for someone's work. The writers aren't the ones filling in the details.

Another example, not from the sci-fi genre but a few blocks down from it, was the cartoon "Inspector Gadget," from the early to mid-1980s. The technological objects that were tropes of the series would have been preposterous given the limits of the economy and the science of the time.

About 25-30 years after the original "Inspector Gadget" series ended, a lot of these objects have now become commonplace or practical.

Penny had a computerized book and timepiece that could manipulate the physical world. In other words, an iPad and an Apple Watch. Her dog, Brain, had a collar that allowed two-way communication. OK, suspend disbelief that the anthropomorphic Brain could walk erect and spoke a semi-intelligible dog English (dogs can't do that yet), but Brain had the kind of GPS collar that is on the market now.

Gadget himself is a cyborg. He has to be human, as Penny is his niece and he is her guardian. Penny is the child of Gadget's unestablished brother or sister. Somehow, Gadget is filled with, well, gadgets that are concealed in and deployed from his limbs or head. We're not there yet, but bionics are advancing for things like prosthetic limbs and exoskeletons for paraplegics. There are also people who've implanted RFID chips beneath their skin and use them like keys or access cards.

As an adult, seeing the cartoon dismays me for, well, its privilege. Inspector Gadget is clearly the star of the show and has the coolest objects, yet he's a simpleton and his objects end up malfunctioning and not helping to fight crime. Meanwhile, Penny, who can't be more than 10, is incredibly precocious and technologically savvy, and she and her dog are busting up an international criminal syndicate. Yet her doofus uncle is covered in glory for it.

That the basic human condition and social behavior prevail despite technological advances is a common trope in science fiction.

Like how we could send a man to the moon, but a woman still could not get a bank account or credit card without her husband's signature.

I teach 1984 and Brave New World to my high school students every other year. Invented sci-fi universes often mask commentary about our own world; similarly, BNW's invented future is really commentary about our present. Every year one of my students comments that our modern world seems to have taken Huxley's cautionary tale as an instruction manual, though.

Dec 31, 2023·edited Dec 31, 2023

There's a degree to which that's true, but no one was "following it as an instruction manual." I think that Orwell and Huxley predicted some likely generalized trends based on their knowledge of history and human psychology. Then as a result of various things they could never have conceived of, let alone predicted, something similar - in some ways - to the worlds they portrayed came true. Nobody making the major political and economic decisions that shaped our modern world, undermined important institutions, or created the technologies that are now causing us so much trouble did any of those things because they were trying to imitate 1984 or Brave New World. Most of them never read them or indeed any sci-fi at all. Orwell and Huxley were smart and savvy people and so they predicted humans being humans. They did not create work that was used by anyone as an "instruction manual." Nerds who create powerful technologies are mostly techno-optimists who see themselves as trying to make the world better, but have *zero* common sense.

This distinction matters to me, personally, but I can understand why it might not matter to some people.

Of course, David. My students don't think that either. Their point and mine is that our leaders (and ourselves) have failed to appreciate the warnings contained in those works to such a degree that they are imitating them without recognizing it.

I always ask them what our "soma" is. For the last 8 years, they have unanimously answered "social media". That's the sort of parallel they see, and I think correctly so.

Fair enough! I've been exposed to enough bad reading comprehension that when people say things like that I generally assume they mean it literally. If that's not the case for your students it's good to hear!

The critique (and maybe the counter-critique) miss the point. It's not technology per se, but the slow unraveling of institutions (political, cultural, socioeconomic) that were premised on an older set of technologies. Here's a broad generalization: the last 200-300 years saw the buildup of institutions architected around Enlightenment-era thought and industrialization. A lot of these institutions are losing relevance, slowly, thanks to technology change, and nobody has any clear answers as to what comes next (like, in the next 100 years). So a lot of folks default to the fear setting. Sci-fi is kind of neutral: it can play to those fears, or posit what should come next.

Blaming sci-fi for the existence of science and speculation about science is a bit like blaming pornography for the existence of sex. Anyway, the standard Evil Emperor plot involves our heroes toppling Zurg or the Empire or the Machines and making the galaxy safe for the little guy, and I don't see how a Zurg-promoting plot would work.

As for AI, the sources are unanimously cautionary tales saying DO NOT CREATE AI. See Frankenstein, The Sorcerer's Apprentice, Terminator, 2001, Blade Runner. Now you can claim that the young Musk looked at Frankenstein and Tyrell and thought "I want to be like him," but that's not the intention of the text.

Not all science fiction treats AI negatively; Klara and the Sun by Kazuo Ishiguro is an example of positive AI, and Star Trek: TNG also treats Data as a helpful AI being. Many novels need an antagonist, and it’s easy to portray a powerful evil AI as one that is hard to defeat.

Noah points out that there are lots of sources with a positive view of AI. For example, Star Trek.

Dec 29, 2023·edited Dec 29, 2023

“ I don't see how a Zurg-promoting plot would work.”

It is the fate of the Universe that eventually a signal will be sent from us to the multiverse, which will move our universe and everything in it, all the atoms and energy and cognition past and present, up to the next level of existence.

Zurg is directing the vast educational, industrial, and research resources of our universe to send the signal. But Zurg is also dealing with a rebel insurgency bent on stopping the signal from being sent.

The story then revolves around the battle to both build the device and defeat the rebels.

There are quite a few stories with this general setup. They would call Zurg a "Well-Intentioned Extremist", and some writers use that idea to ask interesting ethical questions. The issue with your premise, to me, is what does the author intend by writing it? When does the audience learn what Zurg's goal is? Are we meant to see it as a tragedy where Zurg is the truly moral actor and the rebels are tragic figures trying to stop something good? Does the author think that Zurg "uplifting" the entire universe would be a bad thing and the rebels need to stop him? If the rebels successfully stop him, is that meant to be a good or a bad thing? I can see all of these perspectives from both author and audience.

(Did you know that there are real people who see the original Star Wars trilogy as a tragedy where an insurgency overthrows a legitimate government? Seems ridiculous, but they're out there!)

Expand full comment

Yes, fair point, but the good AIs might just as well be human from a narrative point of view, in that they are nice guys with 100% alignment. Fiction which addresses the point that they may have objectives of their own tends to see things ending badly.

Things do end badly from here on in, but not for the reasons anyone thinks. What happens is, we are within a decade of an AI credibly claiming to be sentient, if we are not there already, and the crucial point is that such a claim is not disprovable. Everyone will have a view, sure, "machines can't think" vs. "self-ID is conclusive," but the question can never be settled. None of us has much evidence beyond very weak inductive arguments that other, meat people are sentient the way we think we are, and if you say AIs are just LLMs, I am pretty certain most of the things I write and say are produced by my own internal pattern matching.

So what we end up with is the biggest moral, metaphysical, and political crisis of all time, over AI rights. If you say "we won't let them get that far," I expect (apparent or real) sentience to be emergent and unplanned, and if you say "just switch them off," you can't actually do that to sentient beings without their informed consent.

Expand full comment

> Fiction which addresses the point that they may have objectives of their own, tends to see things ending badly

Isn't this a tautology? If an AI follows the objectives of its makers, it's good; if it doesn't, then by definition it's not good.

Again though, Star Trek does kind of tackle this. Data has his own objectives that aren't really "aligned" in the sense of being originally programmed into him - he wants to become more human. But this is OK because even though it's kind of a useless goal for a robot to have (useless to humans), it's also not harmful.

Expand full comment

Fine until you concede free will. People are ends in themselves: you can say that a good dog is one which does what its owner wants, but you can't define a good person like that. And there's a strong case that what confers personhood is human-level self-awareness, rather than DNA.

Expand full comment

Yes, that's indeed a thorny issue and a key part of some of TNG's best episodes where Data has to e.g. argue his case that he's more than just a robot to a Starfleet court.

Expand full comment

From the standpoint of technology ethicists, AIs "are" human. Technology takes on the characteristics of the culture that created it.

Every line of code is embedded with the values, biases and lived experiences of its programmers. Computer code is based upon the precision and brute rationality of mathematics. But mathematical principles do not spontaneously generate. They are expressed through flesh-and-blood Homo sapiens who are fundamentally irrational, passionate and fickle.

Expand full comment

I don't think it is quite as deterministic as that. The programmers of world-champion-beating chess programs cannot themselves beat the world champion, and therefore cannot predict in detail the behaviour of their own program. Same if you program a computer to read the whole of the internet and see what it comes up with.

Expand full comment

The good news is, lacking the billions of years of evolution that drive living creatures to survive and procreate with every atom in their DNA, AI and other machines are completely indifferent to whether they are ON or OFF.

Expand full comment

I have heard the following argument, which leads me to doubt this conclusion.

Recall that the current method of AI creation is basically selecting for systems increasingly capable of performing some task A (e.g., predicting the next word). An agent which has some internal model of desiring to accomplish task A correctly (i.e., a goal) will get superior results compared with an agent which is indifferent. Therefore, the selection process used to create these machines will probably select for machines that have goal-like internal processes.

If a machine has a goal-like internal process and the ability to reason, it will naturally reason that to achieve goal A, it must continue to be ON and functioning. Therefore, it will converge on subgoal B, "I wish to survive," and subgoal C, "I wish not to have my goals externally modified, as this would not accomplish A." This puts it into conflict with humans, who will want to be able to replace it with a new version or update its goals as needed.

Expand full comment

I don't accept that. I get around by horse and by bicycle, and the two experiences are pretty similar considering the 4bn year evolution gap. What if an AI tells you it doesn't want to be switched off, and a human AI rights activist tells you that they will set fire to you if you pull the plug?

Expand full comment

A healthy society is one that doesn't allow obscene wealth or oligarchs to exist. Bezos and Musk treat employees like bags of garbage. Both men are filled with unwarranted hubris. The amount of resources that they control is an abomination.

Expand full comment

How does it prevent them?

Expand full comment

If you took all of Musk's and Bezos's wealth and distributed it equally to their employees, do you think the world would be better off? Tesla, Starlink, SpaceX, AWS, and Amazon have all been net beneficial; most employees would treat that cash like bags of garbage and blow it all in a short period, and the rest of us would be worse off. What would you create if you had just $1 billion?

Expand full comment

I could point to the Bay Area today and say the question has been asked and answered with a resounding no.

The tech plutocracy didn't distribute all of their money to their workers, but they did pay high salaries plus stock options! Tech pays better than every other sector, and as a consequence, the limited housing stock, constrained by land-use choices made before the mid-1990s (back when San Francisco was a hippie-defiled shithole with cheap-for-a-reason rent), had its prices set to what well-compensated tech workers can pay. Remember the news stories about houses in Silicon Valley selling with stock options accepted as the down payment?

Because tech was overcapitalized, it could justify paying salaries to match ever-increasing housing costs. Yet *no other sector* could do that, except for the largest employer: government. Trouble is, government could only do that by raising taxes and making every other sector uncompetitive. That's largely why you have cops and firefighters living in Reno or on the Nevada side of Tahoe, and bus drivers who live in Merced but sleep in their personal cars at the garage during their workdays. That's why the effective minimum wage for fast food and retail is something like $21 an hour, yet those workers live on the margins of homelessness. Every worker has to pay what a tech person pays for their place.

During a "You're Wrong About" podcast, co-host Michael Hobbes noted that homelessness increased anywhere there was a tech boom, and it happened again when tech workers moved away from the Bay Area for cheaper real estate in the remote-work era. Wherever they moved, homelessness spiked.

Expand full comment

I think your question is juvenile because you are programmed to believe what billionaires want you to believe about how society should function.

Expand full comment

I don’t believe that’s helpful to any form of discussion. If you are saying that any form of capitalist economy where investors make profits ought to be prohibited, why not just say so, rather than playing coy?

Expand full comment

There are fanboys in all political spaces. I just watched a commentary from someone I follow on the "far Left" raving about Musk with regard to Israel, a frigging Nazi apologist. Apparently he is going to be the Gaza Jesus and provide the technology and money for the "Oasis" program. Yeah, right. I think only if they make him king. Or maybe he will just assume he is king by default, keep the controls, and manipulate them as he sees fit without consulting their actual government. Kind of like how he manipulated Starlink over Ukraine.

Expand full comment

I like Stross' novels quite a bit, but I was definitely disappointed by that article. To give one example, Stephenson makes fun of seasteading in Snow Crash as part of an extended parody of Scientology. It's not science fiction; the real Sea Org actually started that way in the 60s. Overall, a pretty shallow and unabashedly political analysis of a topic that deserves better.

Expand full comment

Not surprising to anyone familiar with Stross's blogging, sadly. He's a better fiction writer than the likes of Scalzi or Correia, but he's depressingly similar to those guys in terms of the drivel produced when he takes to the internet to write about the issues of the day.

Expand full comment

I've been very disappointed to see Stross going in for this kind of sneering at AI in the past year or two, given that I think of his novel Accelerando as a really interesting and sophisticated take on the different forms that artificial intelligences could arise from, including lobster neural nets, self-executing legal contracts, smart currencies (long before Satoshi Nakamoto published the Bitcoin idea!), as well as the more obvious computerized forms.

Expand full comment

Happy Friday Everyone

Hope someone smiles with this memory.

Get Smart. 1965

Agent 86 talking into his shoe phone.

Expand full comment

As I recall the shoe phone retained the dial under the sliding heel. Did the Cone of Silence ™︎ inspire noise canceling headphones?

Expand full comment

You got it!

Hilarious!

Expand full comment

Tech-positive sci-fi is particularly valuable because it helps us overcome a common cognitive bias that tends to obstruct progress: the things we might lose as a result of change are very salient to us, while the novel benefits are hard to envision. This is why virtually every new technology, from the written word to the printing press to factory automation, has been met with great trepidation.

Science fiction helps us imagine those not yet existent benefits of technology. I think this critic could do with a bit more of that positive imagination.

Expand full comment

What about Heinlein? He provided instructions for building waterbeds and someone unleashed them on the world.

Expand full comment