Answering the Techno-Pessimists, Part 4: Science Slowdown?
Does it matter if ideas are getting harder to find?
This post is the fourth and final part in a series. The first part is here. The second part is here. The third part is here.
When I started this Substack six months ago, I made it explicitly a techno-optimist blog. A number of my earliest posts were gushing with optimism over the magical new technologies of cheap solar, cheap batteries, mRNA vaccines, and so on. But the blogger behind Applied Divinity Studies wrote a post demanding more rigor to accompany my rosy projections, and putting forth a number of arguments in favor of continued stagnation. Heavily paraphrased, these were:
We’ve picked the low-hanging fruit of science
Productivity growth has been slowing down; why should it accelerate now?
Solar, batteries, and other green energy tech isn’t for real
Life expectancy is stagnating
I’ve been addressing these in reverse order, from the easiest to the hardest. Life expectancy isn’t actually stagnating, and it isn’t a good proxy for tech progress anyway. Solar and batteries are definitely for real, not just as green energy but as cheap energy. The stagnation in technology isn’t as bad as people think, and past productivity trends are not a great guide to future trends.
But today I’m going to tackle the most worrying stagnationist argument, and the one that’s the hardest by far to rebut: The idea that science is slowing down.
Low-hanging fruit and the rising cost of science
The basic idea of science stagnation is that the easiest discoveries happen first. 150 years ago, a monk sitting around playing with plants was able to discover some of the most fundamental properties of inheritance; now, biology labs are gigantic and hugely expensive marvels of technological complexity, and the NIH spends tens of billions of dollars every year. 400 years ago we had people rolling balls down ramps to study gravity; now we study gravity with billion-dollar gravitational wave detectors that require the efforts of thousands of highly trained scientists. And so on.
In 2020, four economists — Nicholas Bloom, Charles I. Jones, John Van Reenen, and Michael Webb — published a paper quantifying this principle, and the results are deeply disturbing. Across a wide variety of fields, they found that the cost of progress has been rising steadily; more and more researchers (or “effective researchers”) are required for each incremental advance. This is exactly what a “low-hanging-fruit” model of science would predict.
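Roughly speaking (this is my paraphrase of the paper's accounting, not a formula quoted from it), Bloom et al. define research productivity as idea output per effective researcher, where "effective researchers" is just R&D spending deflated by the wage of highly skilled workers:

```latex
% Paraphrase of the accounting in Bloom et al. (2020): idea output is TFP
% growth for the aggregate economy (or, e.g., chip-density doublings for
% Moore's Law), and the denominator is R&D spending in "researcher" units.
\[
\text{research productivity}_t
\;=\;
\frac{\text{idea output}_t}{\text{effective researchers}_t}
\;=\;
\frac{\dot{A}_t / A_t}{\text{R\&D spending}_t \,/\, \text{skilled wage}_t}
\]
```

Their finding is that this ratio has fallen steadily: the denominator keeps growing while the numerator does not.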
Now, one objection to this type of analysis is that science doesn’t proceed within a fixed set of research fields. Bloom et al. look at specific things like Moore’s Law. But while it might take more and more research input to fit more transistors on a chip, before the 1940s there was no such thing as a transistor at all! Moore’s Law only goes back to the 1970s (some people try to generalize it and extend it further, but it really wasn’t a big driver of tech progress until relatively recently). Some fields, like agriculture, are very, very old, but new ones are being born all the time. Someday someone will draw a chart showing declining research productivity in CRISPR technology, or deep learning (in fact, some people are drawing such graphs even now). But those fields were invented within our lifetimes.
In reality, technological progress probably doesn’t look like a fixed set of flattening curves, but like a constantly expanding set of S-curves. We don’t just discover ways to do the same thing better; we also discover new things to do.
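As a toy illustration (my own sketch, not something from the paper; the function names and all the numbers here are made up), you can add up a sequence of staggered logistic S-curves, each one a technology that eventually saturates, and get aggregate capability that keeps growing long after any individual curve has flattened:

```python
import math

def logistic(t, start, ceiling, rate=0.5):
    """One technology's capability over time: an S-curve that saturates at `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - start)))

def aggregate_capability(t, n_fields=20, gap=10):
    """A toy economy where a new field is born every `gap` years,
    each with a higher ceiling than the last."""
    return sum(logistic(t, start=gap * k, ceiling=2.0 ** k) for k in range(n_fields))

for year in range(0, 101, 20):
    print(f"t = {year:3d}   aggregate capability = {aggregate_capability(year):10.1f}")
```

Each individual curve stagnates, but the sum keeps climbing, because new fields with higher ceilings keep arriving.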
But Bloom et al. also look at aggregate measures — the number of effective researchers in society as a whole, versus the growth of productivity. Here’s the key graph from the paper:
Even if you attribute the slowdown in TFP growth to things like slowing educational attainment, this graph still shows that we’re spending more and more on research while failing to produce any noticeable acceleration in productivity — running harder and harder just to stay in place.
That’s scary, because it means that eventually we’ll run out of bodies to throw at scientific research, and then progress will grind to a halt. In fact, Chad Jones, one of the authors of the abovementioned paper, has already done the math on this. In a pair of papers in 1995 and 1999, he laid out a simple model of what happens to growth if ideas get harder to find over time. The key result of this simple model (on page 3 of the second paper) is that in the long run, productivity growth is proportional to the rate of population growth. If ideas get harder to find over time, productivity will grow more slowly than population in the long run. But even if ideas get easier to find as you get more of them, productivity can’t grow in these models if population stagnates.
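In stripped-down form, and in my own notation rather than Jones's exact one, the model says something like this:

```latex
% Idea production: L_A researchers produce new ideas; phi < 1 captures
% "ideas getting harder to find" relative to the existing knowledge stock A.
\[
\dot{A}_t \;=\; \delta \, L_{A,t}^{\lambda} \, A_t^{\phi}, \qquad \phi < 1
\]
% Along a balanced growth path, productivity growth is pinned to the
% population growth rate n, not to the level of research effort:
\[
g_A \;=\; \frac{\lambda \, n}{1 - \phi}
\]
```

So if n goes to zero, g_A goes to zero: a bigger research workforce raises the level of productivity, but only a growing one sustains productivity growth.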
So what happens if population actually declines? In a third paper, Jones shows that uninterrupted population decline causes living standards to stagnate as the planet empties out — basically the “Children of Men” scenario. With global fertility rates falling, that’s definitely a worry. And since it’s not clear that our available resources can even support indefinite exponential population growth, we may simply be damned-if-we-do, damned-if-we-don’t; we might have no choice but to accept a long-term stagnation in living standards.
(The only salvation in this case might be A.I. researchers; you can imagine a Jones-type model with high labor-capital substitutability in the research production function, where it’s possible to keep building machines that invent even better machines. I’m sure someone has worked out this model, and I just have to hunt around for it. Update: Here is a paper that does this!)
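For what it's worth, here is one way such a model could be written down (my own sketch of the idea, not the setup of the paper linked above): let the research input be a CES combination of human researchers and AI/compute capital, with a high elasticity of substitution.

```latex
% Sketch only: research input is a CES aggregate of human researchers L_A and
% research capital (AI/compute) K_A, with elasticity of substitution sigma.
\[
\dot{A}_t \;=\; \delta \left[ \alpha\, L_{A,t}^{\frac{\sigma - 1}{\sigma}} + (1 - \alpha)\, K_{A,t}^{\frac{\sigma - 1}{\sigma}} \right]^{\frac{\sigma}{\sigma - 1}} A_t^{\phi}
\]
% If sigma is large and K_A can keep being accumulated, research effort is no
% longer tied to the human population, so growth need not stall when n = 0.
```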
The idea of increasingly expensive research, unlike the other stagnationist arguments I addressed in earlier posts, is very hard to rebut — the theory is simple and powerful and the data is comprehensive and clear. But there are a few caveats to note here.
“More expensive” doesn’t mean “slowing down”
Even if science is getting more expensive, that doesn’t mean it’s actually slowing down; as Bloom et al.’s graph shows, we’re continuously throwing more money at research. But in a 2018 article in The Atlantic, Patrick Collison and Michael Nielsen made a stronger argument than Bloom et al. — they argued that the rate of important scientific discoveries is actually slowing down. (Update: Patrick clarifies that they didn’t mean to make quite as strong an argument as I make it sound here!)
Now, both Patrick and Michael are brilliant guys (you can read Patrick’s Noahpinion interview here!), and of course I agree with their overall concern. But I think they might have overstated their case a bit here. First of all, their main piece of quantitative evidence is to survey physicists about the importance of Nobel-winning discoveries from each decade:
But this is only to be expected, because important discoveries become more important as they age. Quantum mechanics was certainly cool stuff back in the 1920s, but it wasn’t until later that things like quantum field theory were built on top of it, or that engineering applications were developed to make use of it.
Science is progressive like that; each discovery is supported by the discoveries that came before it (in Newton’s words, it “stands on the shoulders of giants”). Thus, each new discovery makes the older discoveries that support it that much more important. So we’d expect to see more important discoveries in the distant past than the recent past; this is not, by itself, a sign of stagnation.
Patrick and Michael also point out that in recent years, there have been few Nobels given to research done in the 90s and 00s:
[T]he paucity of prizes since 1990 is itself suggestive. The 1990s and 2000s have the dubious distinction of being the decades over which the [Physics] Nobel Committee has most strongly preferred to skip, and instead award prizes for earlier work…
As in physics, the 1990s and 2000s are omitted [for biology and chemistry], because the Nobel Committee has strongly preferred earlier work: Fewer prizes were awarded for work done in the 1990s and 2000s than over any similar window in earlier decades.
But this is actually consistent with the narrative of an acceleration of Nobel-worthy discoveries in the 50s through the 80s. Remember that the Nobel Prize is rate-limited — it can only be given to a maximum of three people each year, and it has to be given while the researcher is still alive. So if there’s an acceleration of Nobel-worthy discoveries, that will create a backlog — a bunch of people who deserve Nobels waiting longer and longer in the queue, and a committee handing prizes to older and older researchers in order to honor them before they die. I’m not saying there was an acceleration of breakthrough science in the mid-20th century, but that would definitely be consistent with Nobels going to older and older discoveries.
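To see the mechanics, here is a toy queue simulation (entirely made-up numbers, just to illustrate the backlog logic): Nobel-worthy discoveries in one field arrive faster after mid-century, but the committee can only honor roughly one discovery per year, oldest first.

```python
from collections import deque

def discoveries_per_year(year):
    """Made-up arrival rates: Nobel-worthy discoveries accelerate after 1950."""
    return 1.5 if year >= 1950 else 0.5

queue = deque()   # years of not-yet-honored discoveries, oldest first
pending = 0.0     # fractional discoveries carried over between years
for year in range(1900, 2021):
    pending += discoveries_per_year(year)
    while pending >= 1.0:          # new Nobel-worthy discoveries this year
        queue.append(year)
        pending -= 1.0
    if year % 20 == 0 and queue:
        age = year - queue[0]
        print(f"{year}: prize honors work from {queue[0]} ({age} years old); "
              f"backlog = {len(queue)}")
    if queue:                      # the committee honors one discovery per year
        queue.popleft()
```

Even though the flow of prizes never slows down, the lag between discovery and prize keeps growing, and more recent work sits in the queue for decades.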
So while I think it’s possible that science is actually slowing down despite our best efforts to sustain it, I don’t think we’ve seen evidence for that yet. Remember that “slowing down” is very different from “becoming more expensive to sustain”.
Worlds enough, and time
The upshot here is that even if science is getting more expensive, we can still afford to spend more resources and sustain it for a while. There’s both theory and evidence to support this proposition, and you can find both over at Matt Clancy’s excellent blog.
First, some theory. In a very readable post, Clancy explains a recent theory paper by Benjamin F. Jones and Larry Summers. Jones and Summers make a simple model of the economy in which R&D spending drives growth, plug recent numbers for our real economy into that model, and then ask how much it would reduce growth if we were to stop spending on R&D. The answer is: A lot.
[O]n average the return on a dollar spent on R&D is equal to the long-run average growth rate, divided by the share of GDP spent on R&D and the interest rate. With g = 0.018, s = 0.025, and r = 0.05, this gives us a benefits to cost ratio of 14.4. Every dollar spent on R&D gets transformed into $14.40!
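Just to spell out the arithmetic in that quote:

```latex
\[
\text{benefit-to-cost ratio}
\;=\; \frac{g}{s \cdot r}
\;=\; \frac{0.018}{0.025 \times 0.05}
\;=\; \frac{0.018}{0.00125}
\;=\; 14.4
\]
```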
If we look more critically at the assumptions that went into generating this number, we can get different benefit-cost ratios. But the core result of Jones and Summers is not any exact number. It’s that whatever number you believe is most accurate, it’s much more than 1. R&D is a reliable money-printing machine: you put in a dollar, and you get back out much more than a dollar.
Clancy walks us through Jones and Summers’ response to various objections, in which they add more realism to their simple model — economic growth that happens for reasons other than R&D spending, the time it takes for research discoveries to be embodied in actual technology, other costs and benefits of research besides economic growth, and so on. With any and all of these modifications, the benefits of R&D spending always exceed the costs by a long shot.
In other words, if this model is to be believed, then spending on R&D still gets a lot of bang for the buck in our current economy — we are not near Charles Jones’ imagined future dystopia in which all the low-hanging fruit of science has been picked and innovation becomes too expensive to sustain.
But a model is just a model, right? We want evidence. And in a second post, Clancy links us to a bit of evidence that R&D spending still drives progress forward, explaining the results of several papers that find that government research grants to small businesses were very effective at creating patentable inventions in both the U.S. and the EU.
Now, patents aren’t the same as dollars of GDP, and we will need more research to nail down the total economic benefits of research spending. But in terms of actual policy, we could at least spend as large a share of our GDP on research as we used to! As the Information Technology and Innovation Foundation reports, U.S. business has done its part, but federal government R&D funding has really fallen off:
In other words, business is running fast enough to stay in place in terms of R&D, but the government isn’t even doing that much. That’s why a big expansion in federal research funding (which until recently it looked like we were getting) is so important.
Anyway, the upshot of all this is that while the increasing cost of science is a real and significant concern, it doesn’t mean it’s all stagnation from here on out. We still probably have enough money to accelerate technological progress once again, if we’re willing to spend it.
In addition to federal research funds stagnating, the administrative overhead of doing science is also increasing significantly. A standard NSF proposal has about 15 pages of science plus 100 pages of mandated documents, including conflict-of-interest spreadsheets, facilities lists, data management plans, postdoc mentoring plans, and so on. Every proposal has to include all of this material even though fewer than 20% are funded. It can take at least six months, and often more than a year, to get a decision on a proposal, so one has to send proposals out fairly frequently to keep up a funding stream. There is also a greater emphasis on partnerships and big collaborations to get research funding, but large collaborations carry much greater communication overhead. More and more administrative mandates devour scientists' time, with little benefit to show for it.
Any metric for how many ideas we're finding (counting Nobel Prizes, for example) is hopelessly squishy. Some prizes go to scientists who have made many contributions, but the committee has to single out just one (Einstein won for the photoelectric effect; the committee picked the work that was easiest to describe). And the impact of, say, the first observation of slightly higher superconducting temperatures is many orders of magnitude smaller than that of the identification of the structure of DNA or the observation of gravitational waves.
The "Moore's Law" analyses of the exponential growth in the capability of some particular technology are indeed the result of overlapping curves, in which each advance in the technology eventually saturates. And the overall timescale is really the period of a cycle that passes through innovation, development of manufacturing capability, and development of a market for the products, all feeding back to stimulate new innovation. Take batteries. If you plot watt-hours per kilogram from 1900 to 2000 (lithium-ion), you find a doubling time of about ten years. That takes us from telegraph repeaters through flashlights to handheld power tools and leaf blowers. The opening of markets for cellphones and electric vehicles has almost certainly changed the slope of the curve: market pull has increased, production facilities have become so expensive that much more R&D falls out of those investments, and the time permitted for recovering the cost has to shorten.
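Taking that stated doubling time at face value, the implied improvement over the century is roughly a thousandfold:

```latex
\[
2^{\,100\ \text{years} \,/\, 10\ \text{years per doubling}} \;=\; 2^{10} \;\approx\; 1000\times
\]
```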
So I think that asking about the rate of science, without looking at the obstacles to completing this cycle of adoption, may miss the actual limiting steps.