The Super-Scary Theory of the 21st Century (repost)
The haunting specter of techno-authoritarianism has only gotten more haunting
I’m traveling again today, so I thought I’d do a repost from the earliest days of this Substack. Back in late 2020 I sketched a theory of how modern technology might lead to a shift in global power that favored totalitarian countries over free ones. In brief, the basic idea is that social media might throw democracies into instability and chaos, while AI allows totalitarian governments to control their populaces more tightly than ever before.
Three years later, I’m still just as worried as I was back then. Pro-Palestine protests are rocking Western cities and college campuses, support for Donald Trump and other chaos agents remains strong, and social media “vibes” have contributed to American pessimism even in the face of a strong economy. In a post last week, I argued that Chinese control of TikTok and Russia’s influence on Twitter, combined with liberal governments’ refusal to block these incursions or to engage in active messaging of their own, constitute a dramatic collapse of liberalism’s position in the information war.
Meanwhile, despite its military’s poor performance in Ukraine, Russian society remains largely without organized dissent. And China, despite its economic slowdown, seems similarly quiescent. China continues to explore the use of A.I. as a tool of mass social control. For good in-depth reporting on this, I recommend Josh Chin and Liza Lin’s book Surveillance State. Meanwhile, AI deepfakes — probably released by Russia or its proxies — appear to have managed to swing an election in Slovakia.
The common thread in all these stories is that digital technologies that create order in autocracies are simultaneously creating chaos in democracies. Of course, it’s still far too early to tell if this pattern will persist — the effects of the printing press in 1480 were very different from its effects in 1580. But I’m very uneasy about the direction in which things are trending. A world where only totalitarian governments can leverage technology to their advantage is not one I would enjoy living in.
So anyway, here’s my post from three years ago.
It’s important to note that I DON’T BELIEVE THIS THEORY IS TRUE. But I don’t fully disbelieve it either. It’s one of those scary big ideas that has a few pieces of anecdotal evidence to support it, but which would represent a huge change in the way the world works, so is probably unlikely. The purpose of this post is not to say “this is the way the world will work,” but rather “here is one scary possibility for how the world might work, and I’m a little more worried about this than I used to be.” It’s a warning, not a prediction.
So, that said…
The rise and fall of great powers
In his 1987 book The Rise and Fall of the Great Powers, historian Paul Kennedy tries to explain how geopolitical power shifts from one country, regime, or empire to another. The basic upshot is that powers that are better at marshalling economic resources tend to displace powers that aren’t as good at it.
But what confers economic strength changes over time. In the 1600s and 1700s, it was all about public finance — countries like France and Britain that had better administrative bureaucracies and were better at collecting taxes could pay for armies, while countries like Spain kept going bankrupt. In the 1800s, as the Industrial Revolution got underway, economic strength became more about manufacturing prowess. The U.S., Germany, and Japan ended up being manufacturing powerhouses, while the USSR did OK for a while and then sort of dropped off. Later, the U.S. harnessed the power of high-value service and tech industries to outspend the USSR in the Cold War.
So you can look at this as being a story about economics, or you can look at it as being a story about technology. In each of these periods, the dominant powers were the ones who managed to use their national institutions to take advantage of a specific technology that was very useful for the type of great-power competition that prevailed at the time. In the 1600s and 1700s, great-power competition was about buying armies (either mercenaries or professional armies) to go take bits of land from your rivals. Public finance, which was really enabled by information technology (the printing press), helped with that. In the 1800s and early 1900s, great-power competition was about raising a big army to invade and conquer your rivals. Manufacturing helped with that. Etc.
So in the 21st century, what will great-power war look like? It might look like the Cold War, with rivals building up huge arsenals of superweapons and staring each other down. It might be about limited conflicts between navies, or something like that.
But it might be about fomenting instability in your rivals and maintaining stability at home. Many people believe that the so-called “color revolutions” in Eastern Europe and Central Asia played an important role in destabilizing the USSR and, later, checking Russian power. Some claim that those revolutions were fomented by the CIA. I think that’s unlikely, but whether it’s true seems pretty immaterial. The much more important fact, it seems to me, is that this sort of instability is a feature of the modern, interconnected world — and much more so in the age of the internet. The color revolutions were only a harbinger of what was to come.
The 2019 protests that rocked every region of the planet had no real unifying theme. They included separatist movements, protests against economic inequality, protests against authoritarianism, and even climate protests. The huge, unprecedented protests in the U.S. a year later were about police brutality. I’m not sure anyone ever figured out what the protests in France were about.
If there’s one “silver bullet” explanation for why protests are erupting all over the world, it’s technology. Social media dramatically lowers the cost of both organizing a protest and spreading a protest-related ideology. Martin Gurri’s The Revolt of the Public and Zeynep Tufekci’s Twitter and Tear Gas are essential reading on this topic.
Big protests create instability and can paralyze governments — or even, as we saw with the color revolutions, overthrow them. Great-power conflict in the 21st century might simply be about outlasting your opponents — holding out longer against the naturally bubbling forces of internal dissent.
So then the question becomes: If social-media-driven protests are a permanent feature of the modern age, what sort of institutions and technology allow governments to resist the resulting instability?
And I’m not sure we’ll like the answer.
The big China mistake
In the 1980s and 1990s, many Americans believed that a combination of trade, technology, and cultural exchange would make repressive societies choose freedom. In 1984, Apple Computer gave voice to this idea with its famous ad, in which a woman with a hammer smashes a screen dispensing Big Brother-style authoritarian propaganda.
The color revolutions, the fall of the USSR, and the spread of democracy seemed to validate this idea for a while. But China notably resisted the trend. In the 90s, the idea that trade would make the Chinese masses demand democracy and human rights from their government was a popular argument for letting China into global trade networks. Others confidently predicted that the internet would bring freedom to China.
Things didn’t work out that way. China crushed the Tiananmen Square protests in 1989, a wave of rural unrest in the 2000s, and the Hong Kong protests in 2019-20. Its “Great Firewall” effectively keeps out foreign information, while its army of censors and online propagandists keeps a lid on domestic dissent. Chinese authoritarianism enriched itself through trade and defeated the internet in a direct contest of wills.
And Apple? It took the Quartz app out of its Chinese App Store in 2019, after the Chinese government complained about Quartz’s coverage of the Hong Kong protests.
So much for that cute little ad.
This, then, is the Scary Theory of the 21st Century: Perhaps the internet is not a tool of freedom so much as evolutionary pressure that selects for authoritarianism. Perhaps social media has changed the nature of great-power competition into an endurance match in which control of the internet is key. Perhaps every country that doesn’t implement its own version of the Great Firewall and the 50 Cent Party will eventually fall victim to waves of Twitter-generated unrest.
Now add A.I.
OK but it gets scarier. Social media control is about using traditional methods — government propagandists and legal control of the information ecosystem — to limit the effects of a new technology. But there’s another new technology that might allow authoritarian regimes to exert a level of control that 20th century totalitarians could only dream of. That technology is machine learning — or as we commonly call it, A.I.
China is a global leader in A.I. technology, behind only the U.S. But unlike the U.S., China has enthusiastically bent this new technology toward creating a deep and robust system of government surveillance and social control. Here, via the Atlantic, is a lengthy description of how the Chinese government has deployed an early version of this system (combined with substantial elements of old-school human surveillance and policing) to repress the Uighurs in Xinjiang. Some choice excerpts:
AI-powered sensors lurk everywhere, including in Uighurs’ purses and pants pockets. According to the anthropologist Darren Byler, some Uighurs buried their mobile phones containing Islamic materials, or even froze their data cards into dumplings for safekeeping, when Xi’s campaign of cultural erasure reached full tilt. But police have since forced them to install nanny apps on their new phones. The apps use algorithms to hunt for “ideological viruses” day and night. They can scan chat logs for Quran verses, and look for Arabic script in memes and other image files…Purchasing prayer rugs online, storing digital copies of Muslim books, and downloading sermons from a favorite imam are all risky activities. If a Uighur were to use WeChat’s payment system to make a donation to a mosque, authorities might take note…
Uighurs can travel only a few blocks before encountering a checkpoint outfitted with one of Xinjiang’s hundreds of thousands of surveillance cameras. Footage from the cameras is processed by algorithms that match faces with snapshots taken by police at “health checks.” At these checks, police extract all the data they can from Uighurs’ bodies. They measure height and take a blood sample. They record voices and swab DNA. Some Uighurs have even been forced to participate in experiments that mine genetic data, to see how DNA produces distinctly Uighurlike chins and ears. Police will likely use the pandemic as a pretext to take still more data from Uighur bodies…
When Uighurs reach the edge of their neighborhood, an automated system takes note. The same system tracks them as they move through smaller checkpoints, at banks, parks, and schools. When they pump gas, the system can determine whether they are the car’s owner. At the city’s perimeter, they’re forced to exit their cars, so their face and ID card can be scanned again.
There’s much more, so do read the whole thing.
This is a level of totalitarian control that would have made Stalin or Hitler green with envy. No civilization in history has had the ability to centrally monitor the faces and the personal communications of an entire populace. Nor have text and voice communications ever before been observable over centralized electronic networks. The internet makes universal surveillance possible, and A.I. makes it easier.
China still relies on a large army of humans to do much of this surveillance work, but as computer vision and natural language processing become more advanced, A.I. is gradually augmenting or even replacing the human snoops. The Atlantic article describes a system called City Brain that China is trying to build, which is intended to keep track of everything that happens in Chinese cities, from people’s movements and meetings to their store purchases to the trash they throw away. Integrate a system like that with digital surveillance and natural language processing, and you could theoretically know practically everything about each and every citizen except the private thoughts inside their heads (and don’t think someone isn’t working on that!).
Of course, as the Atlantic article takes pains to note, these technologies are not yet mature. China’s A.I. surveillance systems are advancing quickly, but it will be quite some time — no one really knows how long — before Big Brother can be completely automated. So to all those angry A.I. researchers about to jump into the comments and say “THAT’S SCIENCE FICTION”: Yes. It’s still science fiction. But every year it becomes a little bit less science-fictiony.
The Scary Theory, therefore, holds that in addition to using their overbearing institutions to suppress social media, new authoritarian Great Powers like China will also be able to use emerging A.I. technology to achieve a level of totalitarian social control never before imagined except in dystopian novels.
Whether technological totalitarianism makes for an effective society, of course, is an open question. Being constantly surveilled and controlled might eventually make people all the more determined to rebel, even at the cost of their lives. Or it might make society listless, lackadaisical, unmotivated, and unproductive. Or it might ultimately just be a big headache and a big cost for not much benefit.
We don’t know yet. Which is why the Scary Theory of techno-authoritarian great power dominance is still just a speculative, scary idea. But it’s an idea that I have trouble getting out of my head. The more China goes from strength to strength while increasing its social control, and the more its democratic rivals seem to lapse into division and chaos, the more I worry.
There is no law of the Universe saying that freedom and democracy and respect for individual rights always triumph in the end — or if there is such a law, we don’t know it yet. For all we know, the arc of history could bend in any direction at all.
Anyway, just throwing that out there. Sleep tight, sweet dreams!
Update: Henry Farrell has been thinking about this exact issue for a while, and is highly skeptical that A.I. as we know it is a powerful enough technology to pull off the techno-totalitarianism thing.
Noah is equating 2 terms here that aren't the same: "liberalism" and "democracy". Democracy is a belief that the law should be largely dictated by the will of the people. Liberalism is a set of policy propositions (rights) that are not subject to infringement. We often describe Western societies as "liberal democracies" without realizing the tension between these ideas.
Either can obviously be taken too far.
The example I use with my students is the society of 5 animals in which the 3 wolves vote to eat the 2 sheep and the sheep are expected to lie down and die, since "the people" have spoken. (I used a similar example when I was a grad-student TA involving men and women which I'm sure I could never use today.)
What we often fail to realize is that liberalism can also lead to most political decisions ceasing to be governed by popular consent and instead being controlled by "liberal principles" (which usually means the principles of the powerful). Totalitarianism in the name of liberalism is a very real possibility, and one that appears quite likely in the West (witness Trudeau's response to the trucker protest last year -- distinctly illiberal yet cheered by the highest centers of power).
Thus the EU insists that Hungary is "not democratic" (for having a non-liberal government) and Poland this year "returned to democracy" (for electing a liberal government). In each case, what the EU really means is that the respective country doesn't do what the powerful in the EU say "all civilized people ought to do". But Justin Trudeau gets a pass since he's a card-carrying member of the liberal elite and thus "on the right side of history".
I share Noah's concern over creeping totalitarianism, but I think he's blind in his left eye to fail to notice it coming from that direction as well.
"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them."
- Frank Herbert, Dune