When I started this new Substack blog, I wanted to make it explicitly techno-optimist. The 2010s felt like a decade in which many Americans — especially in intellectual circles — undervalued and dismissed the promise of technology, deriding it as “solutionism”, the plaything of out-of-touch techbros, or late capitalism’s scheme to extract more money from hapless consumers. Meanwhile, economists were increasingly enamored with the idea of a long-term technological stagnation, postulating that we had picked most of the Universe’s low-hanging fruit.
And then 2020 happened. I watched cutting-edge technology help sustain our society through the dark days of the pandemic, and ultimately defeat that pandemic through near-miraculous vaccines and treatments. At the same time, I saw the long-term menace of climate change suddenly appear solvable, thanks to a near-miraculous explosion in green energy tech. If that’s solutionism, I’ll have another, and make it a double! And I’m far from the only person who felt this way.
To some people it may sound ideologically blinkered to gush about technology after a year in which tech stocks took a nose-dive and one whole sector of the industry turned out to be mostly a self-devouring ouroboros of financial scams. But the profits of tech companies are not the same as the social benefit of technology — witness how little profit solar manufacturers make, even though solar is changing the world. And honestly, crypto isn’t much of a technology yet; smart contracts may someday change the world, but the scams and Ponzis that exploded this year were innovative only in their techniques of social engineering. More generally, technology doesn’t always mean the same old things improve year after year, forever; one boom ends, but another begins.
A more troubling note is that the burst of productivity that followed the pandemic seems like it might have petered out:
My bet is that this is political rather than technological; it probably reflects expensive energy and other disruptions in the wake of the Ukraine war. Fortunately those disruptions are fading; labor productivity did bounce back in the third quarter as energy prices fell. But this is worth keeping an eye on; there are lots of other factors that could weigh on productivity growth, such as plateauing education levels and U.S.-China decoupling. (The drop is also subject to statistical revision.)
Anyway, whether we end up getting a “roaring 20s” or not, I see lots of technological developments that are either changing our world in major ways already, or seem likely to change it soon.
The A.I. breakout
Obviously, one of the biggest pieces of tech news this year was the release of a bunch of generative AI apps, including art applications like Stable Diffusion and Midjourney, and text generators like ChatGPT. The mania this sparked was electric — I’m not sure I can recall ever having seen this sort of mass interest in a new technology from outside the world of nerds and early adopters.
That interest doesn’t mean that everyone was happy about the new tech; lots of people were wonderstruck, but many others found reasons to be outraged or terrified. You can witness the dichotomy in the replies to this guy who co-wrote a children’s book with AI:
The negative reactions seem pretty misplaced. Some people accused the guy of “stealing” the work of other artists, because AI art programs often train themselves on copyrighted data. But that seems no different than a human artist gathering inspiration from looking at other people’s works. And we should celebrate tools that open up creative fields to more people; without AI, what are the chances this guy would have created a children’s book on his own?
Anyway, I think that co-authorship is exactly the right model for what a lot of human beings are going to be doing with these new AI tools. Roon (an AI engineer) and I wrote up this post about how we expect this to work, and why human workers won’t be rendered obsolete:
But that post only scratches the surface of what might be possible with new AI tools. Sam Hammond has some interesting thoughts about where the current AI revolution might take us:
Of course we shouldn’t only focus on generative AI — good old-fashioned predictive AI is still advancing as well, as demonstrated by DeepMind’s amazing progress in protein folding. And there are many cases where the line blurs — if an autonomous robot uses AI to move around, is it predicting where it should go, or generating ideas of where to go? Do we care?
But the key reason to be optimistic about AI is that it isn’t slowing down. In the past, AI development was marked by repeated “winters” in which both interest and technical progress seemed to stall. Some researchers, like Gary Marcus, have been predicting another AI winter soon. But the field seems to have blown right past those predictions, in both quantitative and qualitative senses. Here’s a good thread looking at some quantitative measures of AI interest and performance:
The one indicator of a slowdown is that ImageNet training costs have slowed their decline since 2018. So we’ll see. But there’s also a lot more to progress than curve fits — innovation proceeds by applying technologies to new problems, and here the explosion in generative AI applications really stands out. We’re just not running out of things to have AI do for us.
In fact, big new things are in the pipeline. ChatGPT, which so wowed the world, actually runs on a refinement of GPT-3, a model that was released back in 2020. But OpenAI, the company behind GPT-3, is hard at work on a new engine called GPT-4, and rumor has it that it will represent another substantial improvement.
Now, I don’t want to be Panglossian about AI’s effects on society. Yes, the “robots taking jobs” thing is overblown, but there are tons and tons of historical examples of people using powerful new technologies for destructive purposes. ChatGPT-type programs will obviously be useful for mass disinformation by bad actors, as well as for spam, and there may be an arms race between generative AIs and the predictive AIs used to catch and stop them. Furthermore, the Ukraine war is showing just how important drones are to modern warfare; when these become AI-driven instead of human-piloted, it will be scary.
That said, I think there’s enormous potential for AI to do good for the human race. At its core, AI, like computing in general, is about saving mental labor. If it extends the powers of our minds the way physical technology extended the powers of our bodies, another productivity boom is probably ahead.
The energy revolution rolls onward
I talk about energy tech a lot, including in all of these techno-optimism posts, so I’ll just chronicle how much my optimism has been warranted. To start with, the solar revolution is going from theoretical to actual, with a huge surge of global investment. China and the EU are out in front here, but the U.S. is ramping up too, thanks in part to the Inflation Reduction Act. Here’s the IEA’s latest blockbuster report about global renewable electricity generation. And here’s a thread about the report:
What really stood out to me was that the IEA, known for its hilariously conservative (and wrong) solar forecasts in the past, now predicts that renewables as a whole will actually generate more electricity than either coal or oil five years from now:
Solar is forecast to represent the bulk of the new additions.
That’s pretty amazing. Note that this is exactly what people who looked at the solar cost curve were predicting for years. A lot of people found a lot of reasons to doubt that plunging costs would lead to actual soaring adoption. In this skeptical environment, being able to read some simple curves and make some simple deductions was kind of a superpower.
Which is important, because battery cost graphs show equally amazing cost drops. Those will help resolve the solar intermittency problem at the daily level, though not at the seasonal level. Luckily, seasonal storage is a good job for green hydrogen, which I wrote a post about a couple of months ago:
And hydrogen will also probably be useful for a number of other applications, which is cool.
Remember that solar cost drops are important not just because they offer clean electricity, but because they offer the promise of abundant cheap electricity. Digging minerals out of the ground is an activity that tends not to get much cheaper over time. But mass-producing simple things like solar panels does tend to get cheaper, which is what we’ve now seen for decades. Here’s an interesting paper about which technologies do and don’t have learning curves:
This means we’re not just going to save the planet from destruction; we’re going to get cheaper electricity than we have ever known, as a species. We need to start imagining the things we could do with cheap energy — massive desalination, cleanup of pollutants, aluminum smelting, cheap manufacturing, etc.
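The “curve reading” I keep talking about is mostly just Wright’s law: each doubling of cumulative production cuts unit cost by a roughly constant fraction (the learning rate). Here’s a minimal sketch; the 20% learning rate and the starting cost and volume are illustrative assumptions, not real industry figures:

```python
import math

def wrights_law_cost(initial_cost, initial_cumulative, cumulative, learning_rate):
    """Unit cost after cumulative production grows, per Wright's law:
    every doubling of cumulative output cuts cost by `learning_rate`."""
    doublings = math.log2(cumulative / initial_cumulative)
    return initial_cost * (1 - learning_rate) ** doublings

# Illustrative numbers: $1.00/W at 100 GW cumulative, 20% learning rate
for gw in [100, 200, 400, 800, 1600]:
    print(f"{gw:5d} GW cumulative -> ${wrights_law_cost(1.00, 100, gw, 0.20):.2f}/W")
```

Four doublings takes the illustrative cost from $1.00/W to about $0.41/W, which is why steady exponential growth in deployment translates into dramatic cost declines.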
Electricity is especially powerful because you can put it into a battery and move it around. Looking at the graph above, we see that batteries are also one of the technologies expected to have a learning curve. And although batteries do seem to occasionally suffer small temporary cost rises (including in 2022), the learning curve has always won out in the end. Which is why I’m extremely excited about the potential of batteries to transform our physical world:
In addition to providing daily energy storage and replacing internal combustion cars, batteries promise a whole host of other improvements in energy transport. In particular, combining batteries and AI seems likely to herald a robotics revolution — we may finally get that Jetsons robot future we’ve all been waiting for. Batteries are also going to be useful for high-powered, cheap-running appliances — in fact, I even invested in one company in this space, called Impulse.
And even better kinds of batteries may be on the horizon. For example, a team at the University of Sydney just claimed a breakthrough in sodium-sulfur batteries, which could be made without the relatively expensive metals (especially lithium) that go into lithium-ion batteries. I don’t want to hype reports like this too much, but it’s a good reminder that battery technology isn’t just about learning curves and cost drops; there’s plenty of possibility for novel battery technologies, and lots of people are looking into them.
Oh, and I’d be remiss not to mention the recent breakthroughs in fusion. The U.S. Department of Energy’s National Ignition Facility, which creates fusion by shooting hydrogen with high-powered lasers to mash it together into helium, recently achieved a net energy gain. That doesn’t mean cheap fusion power is right around the corner — the “net gain” counts only the laser energy delivered to the target, not the much larger amount of energy needed to power the lasers themselves, and there are other technical issues to solve too. But it does mean that fusion will at some point go from being a punchline (“always 50 years in the future!”) to a world-changing reality. Nor is the NIF the only group working on advanced and novel fusion techniques; I’m especially interested in Helion Energy’s approach, which aims at aneutronic fusion.
Anyway, whether we get usable fusion or not soon, it seems certain at this point that a future of abundant cheap portable energy is on the way, one way or another. It’s time to start thinking about how we’ll make use of that bounty.
The strange biotech boom
Biotech is a strange area of technology, for a number of reasons. For one thing, the market is really strange, because purchases of most biotech have to go through the medical system; unlike in cyberpunk TV shows, you can’t just go buy a new arm from the convenience store. That tends to limit innovation, because it takes a very long time to get products to market, and markets are highly circumscribed. (The FDA is working on improving the situation, but the basic structure will always remain.)
Biotech is also strange because the big advances are happening in a bunch of different directions — mRNA vaccines, synthetic bio, stem cells, Crispr, etc. These are all being enabled, to some extent, by cheap gene sequencing combined with cheap computing (and maybe soon with AI, as DeepMind shows). But there is just a whole lot of human brainpower going into a whole lot of different kinds of biotech, and it’s apparently paying off.
The always-excellent Derek Thompson has a good rundown of some of the biggest discoveries and inventions of 2022, and many of these are in bio. For example:
Scientists created a mouse embryo with a beating heart, using nothing but stem cells. That’s kind of amazing; we can now literally create life. In other stem cell news, we can now create a human ear from stem cells and implant it in living people.
mRNA vaccines just keep rolling along. BioNTech is now conducting trials for its malaria vaccine, which could help defeat the deadliest endemic disease on Earth. Meanwhile, scientists at the University of Pennsylvania may have found a path to a universal flu vaccine using mRNA.
Effective new cancer therapies are emerging at a dizzying pace. This year’s standouts used immunotherapy to make rectal cancer disappear, and a monoclonal antibody to fight breast cancer.
Meanwhile, steady progress in synthetic biology and Crispr gene editing continue, so I’m optimistic we’ll see some big results in these areas this decade.
Anyway, these are just three particularly exciting areas of technological progress. There are others — falling launch costs (thanks in large part to SpaceX) are transforming the space industry, quantum computing is still seeing steady advances, and companies are racing to create workable brain-computer interface implants. I predict that no matter what happens to the total factor productivity numbers, I’ll have plenty of new material for these techno-optimism posts throughout the decade.
So it sounds as if we should start to worry less about technological stagnation and more about whether progress in AI is happening too quickly to keep it safe.
I was surprised to see that when Matt Yglesias writes about AI risk, he gets very strong pushback from a bloc of his paid subscribers who think the risk doesn't exist and that it's all Luddite nonsense.
It was hard to tell how many of those people were conservatives predisposed to this view by all the propaganda about global warming being a hoax, and how many were just going a little too far with the technophile, "abundance agenda" framework.
Either way it seems clearly wrong. People don't seem to appreciate that even if there's some theoretical argument for why there can't be existential AI risk, you can accept that argument with 99 percent confidence and still think AI risk reduction is a very, very high priority; the remaining 1 percent, multiplied by stakes that enormous, still dominates almost any cost-benefit calculation. Why is there so much resistance to this idea?
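The underlying logic is just expected-value arithmetic: a small residual probability times an enormous loss is still an enormous expected cost. A minimal illustration, with all numbers made up for the sake of the example:

```python
# Even 99% confidence that existential AI risk is impossible leaves
# a 1% residual probability; multiplied by an enormous loss, the
# expected cost still dominates. All numbers below are made up.
p_residual = 0.01            # 1 minus 99% confidence in the "no risk" argument
loss = 8e9 * 1e6             # stylized: 8 billion people at $1M statistical value each
expected_cost = p_residual * loss
print(f"expected cost: ${expected_cost:.1e}")  # tens of trillions of dollars
```

Against an expected cost that size, spending real resources on risk reduction looks cheap even if you think the doomers are almost certainly wrong.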
If you're looking for AI slowdown risks, the imminent end of Moore's law is a big one. (Don't believe me that that's happening? Nvidia's CEO thinks it is: https://www.marketwatch.com/story/moores-laws-dead-nvidia-ceo-jensen-says-in-justifying-gaming-card-price-hike-11663798618).
We've gotten a lot of the recent AI progress via a faster-than-exponential increase in the amount of computation used to train the models. That can't be sustained, and it stops being sustainable even sooner if hardware isn't improving at an exponential rate. There are ways around it -- specialized hardware (neuromorphic, sparse, and so on), algorithmic breakthroughs, quantum computing -- but if those don't pan out or take too long, it's reasonable to expect the current wave of progress to slow down.
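A toy calculation shows why the scaling can't last: if training compute grows much faster than hardware price-performance, the dollar cost of frontier runs explodes. The growth rates below are illustrative assumptions, not measured figures:

```python
# Toy model of training-run costs when compute demand outruns
# hardware improvement. All numbers are illustrative assumptions.
compute_growth_per_year = 10.0   # yearly growth in training FLOPs
hw_improvement_per_year = 1.4    # yearly growth in FLOPs per dollar

cost = 10e6  # assume a $10M frontier training run today
for year in range(1, 6):
    cost *= compute_growth_per_year / hw_improvement_per_year
    print(f"year {year}: ~${cost / 1e9:.1f}B per frontier run")
```

In this toy model the cost of a frontier run grows by a factor of nearly 20,000 in five years, which is the sense in which faster-than-exponential compute scaling "can't be sustained."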
Of course, there's enough innovation here already to disrupt quite a few industries even if progress stopped tomorrow.