
I think people just have an overly rosy view of the past. In academia as in all things, there's a seemingly irresistible temptation to compare the most lastingly important work from 100 years ago--i.e. the only work we are even aware ever existed--to the entire mass of random crap being generated today, and conclude that things were better 100 years ago. There was tons of random crap 100 years ago, but most of it has been thoroughly forgotten; and obviously, none of the random crap we are generating today has lasted 100 years...yet. And it'd be nice if we could predict which bits of it will last, but we couldn't have done that 100 years ago and we can't do it now, and that's all there is to it.

And I can promise, math is absolutely not stagnating. Math is doing just great. Digging up one person worrying otherwise, out of all the mathematicians in the world, means nothing.


An additional problem you don't mention, I think, is that, speaking from personal experience, science these days is mostly structured around one permanent group leader who spends all their time securing funding, teaching, and administrating, while the actual research is carried out by a series of young PhDs/postdocs who are rapidly cycled out of science and replaced with a new young cohort. As science becomes more complex and subtle, it takes longer and longer to reach the forefront of a field and really understand what the next big advance or step should be, and then a sustained period of uninterrupted, focussed work to make that big step. There is no one left in the system to do that kind of work: just group leaders who are too busy, and young researchers who are still learning or focussed on the incremental advances needed to establish themselves.


One thing to note, at least in the US, is the very different ways the various funding agencies conduct proposal review. Proposals to the NSF tend to be reviewed by committee, which itself tends to generate a bias toward conservative, incremental work. Proposals to the DoD, by contrast, need to satisfy the interests of a much smaller set of program managers. Since a program manager's success is tied not to any given proposal but to the success of anything in their portfolio, they tend to favor the new, shiny, and risky. In practice, most academics I know prefer to send their high-risk, novel proposals to friendly program managers in the various defense organizations, while sending proposals backed by a stronger track record to the NSF.
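To make that incentive difference concrete, here's a toy simulation (the payoff numbers are entirely my own invented assumptions): a committee judging each proposal on its own expected value prefers the safe bet, while a program manager who just needs one hit somewhere in a ten-project portfolio prefers the risky one.

```python
import random

# Hypothetical payoff profiles -- invented numbers, purely illustrative.
# A "safe" proposal almost always yields a modest result; a "risky" one
# usually fails but occasionally produces a breakthrough.
def safe():
    return 1.0 if random.random() < 0.9 else 0.0

def risky():
    return 10.0 if random.random() < 0.05 else 0.0

TRIALS = 100_000

# Committee view: each proposal is judged on its own expected value.
ev_safe = sum(safe() for _ in range(TRIALS)) / TRIALS
ev_risky = sum(risky() for _ in range(TRIALS)) / TRIALS

# Program-manager view: success means at least one big hit anywhere in
# a 10-project portfolio, so variance becomes an asset.
def portfolio_hit(draw, n=10, threshold=5.0):
    return any(draw() >= threshold for _ in range(n))

hit_safe = sum(portfolio_hit(safe) for _ in range(TRIALS)) / TRIALS
hit_risky = sum(portfolio_hit(risky) for _ in range(TRIALS)) / TRIALS

print(f"per-proposal EV:    safe={ev_safe:.2f}  risky={ev_risky:.2f}")
print(f"portfolio hit rate: safe={hit_safe:.2f}  risky={hit_risky:.2f}")
```

The safe proposals win on expected value per proposal, but a portfolio of risky ones is far more likely to contain at least one breakthrough, which is the outcome the program manager is actually graded on.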


As someone who mastered out of a top American PhD program in analytic philosophy, I can say that the issues have as much to do with the economics of higher education as with progress on substantive issues in the field (whether there actually are substantive issues is itself the subject of a really interesting debate in philosophy; see the literature on quietism).

One area Noah didn't cover, but that's equally troubling and caused quite a stir a few years ago, is the replication crisis in the social and psychological sciences. Seemingly robust, textbook results systematically failing to replicate does an awful lot to undermine the credibility of a field. And publication bias discourages people from pursuing work that could be really path-breaking: the incentives only reward positive findings, so people tend to study things where they know they'll find a result.
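Here's a minimal sketch of that mechanism (the true effect size, sample size, and study count are all invented for illustration): if only positive, p < .05 results get written up, the published literature systematically overstates the true effect, which is exactly what sets later replications up to fail.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

TRUE_EFFECT = 0.2   # assumed small true effect (standardized units)
N = 30              # assumed per-group sample size (underpowered)

published = []
for _ in range(2000):
    control = rng.normal(0.0, 1.0, N)
    treated = rng.normal(TRUE_EFFECT, 1.0, N)
    t, p = stats.ttest_ind(treated, control)
    if p < 0.05 and t > 0:   # only positive, "significant" results get published
        published.append(treated.mean() - control.mean())

print(f"true effect:                {TRUE_EFFECT}")
print(f"mean published estimate:    {np.mean(published):.2f}")
print(f"share of studies published: {len(published) / 2000:.0%}")
```

The published estimates come out several times larger than the true effect, so a faithful replication powered to detect the published number will usually come up empty.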

And on top of all this, we've got Noah's recent post on the technological progress slump. There doesn't seem to be as much low-hanging fruit, and there are a lot more people out there looking for it.


What about CRISPR gene editing? Is this not an example of a rich new vein to mine? Its technology and uses seem to be advancing quite rapidly. And the debate over its uses -- somatic gene editing versus embryonic gene editing -- suggests that this vein is splitting into new, competitive pathways which will accelerate discovery and application.

Isaacson's The Code Breaker tells this story well -- and I think it contradicts the "one funeral at a time" theory, in that the competition between scientists (Jennifer Doudna and Feng Zhang) was very much a living and generative dynamic.


I think there are a few different aspects being suggested by this post. One I hadn't heard discussed before is the mismatch between how the quantity of researchers is determined (i.e., by teaching loads) and how that quantity would best be allocated (i.e., by the productivity of recent research directions in the field).

Some of the rest of the discussion, though, seems close to what I think of as naive readings of Kuhn: valorizing "revolutionary science" and undervaluing "normal science". It really is important to fund large groups of people working on the same recently trending topics, so that a lot of cumulative work can get done.

But I think one problematic factor you didn't include, which pulls too far in this direction, is the extreme selectiveness of things like major grant agencies and top-department hiring. When there are more amply qualified individuals and projects than one can fund, the committee bogs down in digging through details trying to disqualify candidates. Disqualifications at this late stage (as opposed to those in the early stages) probably, on net, make the pool more conservative and biased toward existing trends. I'm constantly pushing everyone who will listen to make the final stage of these decisions a randomization, both to avoid this bias and to save the time of the committee members, who often spend many hours on those last few arguments.
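For what it's worth, here's a minimal sketch of what that final-stage randomization could look like (the names, scores, qualification bar, and award count are all hypothetical): screen hard early, then draw the winners by lottery instead of litigating marginal differences.

```python
import random

# Hypothetical applicant pool: (name, review score). The scores and the
# qualification bar below are invented for illustration.
applicants = [("A", 9.1), ("B", 8.8), ("C", 8.7), ("D", 8.6),
              ("E", 8.5), ("F", 7.9), ("G", 6.2), ("H", 5.5)]

QUALIFIED_BAR = 8.5   # the early stages still do real screening
N_AWARDS = 2

shortlist = [name for name, score in applicants if score >= QUALIFIED_BAR]

# Final stage: a lottery among the amply qualified, rather than hours of
# committee argument over differences the scores can't really support.
winners = random.sample(shortlist, k=N_AWARDS)

print(f"shortlist: {shortlist}")
print(f"funded:    {winners}")
```

The lottery only ever operates above the bar, so rigor is preserved where the evaluations are actually informative.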


Nice one. I think you have to separate the humanities and social sciences from the sciences. There's ultimately a finite amount of knowledge to generate about Shakespeare or ancient Rome (and note I'm a liberal arts person who loves these subjects). A lot of these subjects feel more like scholasticism now: just debating someone else's theory.

The social sciences are also hitting methodological limitations. Just running an A/B comparison and checking whether p < .05 is turning out to be a pretty poor way of studying the human brain, and it often yields contradictory results.
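A quick illustration of how that happens (the effect size and sample size are assumed, chosen to mimic a typical underpowered study): identical runs of the very same A/B design can easily disagree with one another about whether there's an effect at all.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# 100 identical runs of one hypothetical A/B design: true effect of
# 0.2 sd, 25 subjects per arm (invented, typically underpowered numbers).
verdicts = {"positive": 0, "null": 0, "negative": 0}
for _ in range(100):
    a = rng.normal(0.0, 1.0, 25)   # control arm
    b = rng.normal(0.2, 1.0, 25)   # treatment arm
    t, p = stats.ttest_ind(b, a)
    if p < 0.05:
        verdicts["positive" if t > 0 else "negative"] += 1
    else:
        verdicts["null"] += 1

print(verdicts)   # the same experiment, yet the conclusions disagree
```

Roughly one run in ten declares a positive effect, most find nothing, and the occasional run even points the other way: the same experiment supporting opposite write-ups depending on which lab happened to run it.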


"Encouraging not just novel research, but research in <i>novel directions</i>. Asking questions no one has asked before. Using methodologies no one has tried before."

Umm, err... This is not a new or particularly big idea. Back, I believe, in 1969, Jean Piaget gave an address at Johns Hopkins where he said that if you want to be creative in a field, study all the adjacent fields. We've known this for a long time. Doing something about it is another matter.

As far as I can tell, the university system was never really designed to generate novelty. Oh, sure, novelty is highly regarded -- novelty in the past, that is. The intellectual heroes are the ones who came up with novel ideas. But the system itself doesn't know how to produce it. Simply saying "seek out novelty" is not helpful. If you really think you are being helpful, then, with all due respect, you are part of the problem, not part of the solution.

Fast Grants, that's another matter. It's important, but given the will and the means, it's easy to do. We need more of it. But they're working within well-explored intellectual territory. How do you get out into the "here be dragons" region and stay there?

Oh, I do like your mining analogy. I think it's apt. Ores are not regularly distributed in physical space, and our methods of predicting where they might be are crude. Same with ideas in intellectual space. Check out a working paper where I build on a model Paul Romer proposed in 1992; in particular, see the section "Stagnation, Redux: Like diamonds, good ideas are not evenly distributed": https://www.academia.edu/43938531/What_economic_growth_and_statistical_semantics_tell_us_about_the_structure_of_the_world
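In the spirit of that analogy, here's a tiny numerical illustration (the lognormal distribution is purely my assumption for the sketch, not the model from the linked paper): when deposit values are heavy-tailed, a small fraction of sites holds most of the ore, so where you dig matters enormously.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assume the value of prospective "sites" (research directions) follows
# a heavy-tailed lognormal distribution -- an invented stand-in for
# "ores are not regularly distributed in physical space".
values = np.sort(rng.lognormal(mean=0.0, sigma=2.0, size=100_000))

top_share = values[-1000:].sum() / values.sum()   # richest 1% of sites
print(f"top 1% of sites hold {top_share:.0%} of the total value")
print(f"median site: {np.median(values):.2f}   mean site: {values.mean():.2f}")
```

The typical site is worth a small fraction of the average one, because the mean is dragged up by a handful of bonanzas, which is why crude prospecting methods waste so much effort.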


Two Things:

- Really kicking myself in the ass for not reading Thomas Kuhn's The Structure of Scientific Revolutions.

- Interesting post/theory


Maybe this is just the naivety of being an undergrad, but I feel like analytic philosophy is doing loads of interesting things (albeit they might all be wrong for Wittgensteinian reasons I don't really understand). We came up with a plausible new normative theory in the 1990s!! Wild. Parfit just created a completely new field of ethics with population axiology, and there's currently pioneering work being done on decision-making under normative uncertainty.

It's true that no one seems to be making progress on the sorts of foundational issues around Russell's paradox and that sort of thing (although maybe I just haven't been reading that sort of philosophy), but I for one like the switch to normative theory.


There's a kind of survivorship bias at work. We look back at the golden years of the early 20th century and forget how much research back then was incremental. Boltzmann proposed quantization in 1877; Planck took it up in 1900; but modern relativistic QED wasn't developed until after World War II. Also, academia was structured much the way it is now: research was done by faculty, and faculty were expected to teach as well as do research. This has been true since at least the 17th century, and it was definitely true in the late 19th century during the great industrial university boom.

One of the things about research is that one rarely knows what is going to be earth-shaking. George Boole did his seminal work on Boolean algebra in the mid-19th century, and he was, by the way, a college professor. More than one research director has quipped that 95% of the stuff they fund is garbage, but that they have no way of identifying the good 5%.


Specialization and the need to publish may be limiting factors in achieving broader contributions and idea-sharing. In the field of computer engineering, there's a bit of a split between professionals who freely write casual blog posts and academics who write detailed technical journal articles.

While I do go through IEEE papers, I read them through my company's subscription; without that, the cost of reading research would be prohibitive. Academics have had difficulty connecting with the software industry at the level that industry leaders do.

Of course, just making papers free isn't a panacea either. IEEE has open-access journals that anyone can read, but they are rarely shared as widely as an industry leader's blog post about what they built over a weekend.

Improving communication between academic fields, and crossing them more with industry, is an important step toward cross-pollinating ideas, which may improve innovation. However, it is easier said than done.

But this may also be specific to engineering industries, where there is room for professionals with relatively little academic experience.


"No problem: just think of something nobody else has."


I think there are some good reasons why academic funding (I'm thinking here of my own field, part of basic science) is structured as it is. Primarily there is the issue of "selling" basic science to taxpayers. It is easy for people without a background in science to see much of basic science as waste. Just yesterday we saw articles like this in the right-wing press: https://www.realclearpolicy.com/articles/2021/06/07/georgetown_received_7_million_in_federal_grants_for_new_space_alien_detection_techniques_780116.html

To a scientist this research appears perfectly supportable, but to the scientifically challenged it is "The #WasteOfTheDay is presented by the forensic auditors at OpenTheBooks.com." This science can go forward because scientists and administrators at NASA recognize it as basic science worth doing, and NASA as an institution can successfully defend such work as a reasonable (and small) part of its overall portfolio.

I think you might well get more new developments in science if we funded grad students and post-docs directly. Hopefully some students would take the opportunity to go for it (typically working with younger research faculty), while many might choose to work for established faculty and the presumed stability of future employment. But politically this could be a recipe for disaster: lots of students and post-docs would produce nothing new or groundbreaking, the attacks on "welfare for the educated" would be huge, and without the cover of the established science hierarchies it would likely be a short-lived experiment.


I think a key to opening up fresh veins of work is to emphasize interdisciplinary study and interdisciplinary research teams. In my own career I've seen that approach produce innovative ideas and new lines of inquiry.


Isn't it true that the very last people whose opinion one should seek on the value of ongoing research are the people presently carrying out that research?

Already more than two millennia ago, scholarly work in classics was being conducted in places like Athens, Pergamon, and Alexandria. The best minds and most attention were focused on Homer, of course. It wasn't until the 1920s, however, that Milman Parry (standing on so many shoulders that I suppose the assemblage would have mounted as high as the moon) managed to piece together the evidence that dramatically transformed how we had been approaching Homer's poetry. I like this quote from an article by John F. García on Parry: "It was [Parry] who rendered the contentions of the Analysts and Unitarians moot, for both sides were right in ways that neither had imagined."

Both sides were right in ways that neither had imagined.

There's careerism and CV-building, yadda yadda. Then there's scholarly research. They have nothing to do with one another. The project to compile a complete dictionary of the Latin language up through the 7th century (the Thesaurus Linguae Latinae) was initiated in 1894. They're thinking they'll be done in 2050. Maybe.
