
I think people just have an overly rosy view of the past. In academia as in all things, there's a seemingly irresistible temptation to compare the most lastingly important work from 100 years ago--i.e. the only work we are even aware ever existed--to the entire mass of random crap being generated today, and conclude that things were better 100 years ago. There was tons of random crap 100 years ago, but most of it has been thoroughly forgotten; and obviously, none of the random crap we are generating today has lasted 100 years...yet. And it'd be nice if we could predict which bits of it will last, but we couldn't have done that 100 years ago and we can't do it now, and that's all there is to it.

And I can promise, math is absolutely not stagnating. Math is doing just great. Digging up one person worrying otherwise, out of all the mathematicians in the world, means nothing.


Yep, it can be amazing to talk to older folks about their career paths. On the one hand there was much less competition and professionalization, so, as is often pointed out, in those ways it was easier. But then you learn the salaries or stipends they were actually paid and the funding they received: they worked for peanuts, lived like paupers, etc., whereas junior folks today expect to be able to live comfortably on their own at similar points in their careers. There are lots of reasons for these changes in expectations, but to complain about them as a new problem definitely relies on an overly rosy view of the past.


Hear hear! Speaking for archaeology and medieval and ancient history, the average quality of new research is up tremendously over 100 years ago.

Of course one can only decipher Sumerian once, and it is marvelous and incredible how much detail some pre-computer academics somehow managed to analyze. There certainly were late 19th / early 20th century academics who would be outstanding even by today's standards. But the average was a goofball or fraud.

Also it's a special field: there's a practically endless amount of places to dig, and huge numbers of manuscripts still left to digitize and transcribe and collate and edit. And we're not exactly doing this to make the kinds of advances in material affluence that money is usually looking for, so the pace at which we are moving through just the known unknowns is hardly break-neck. There's definitely no "end of the discovery of history" in sight.


An additional problem you don't mention, I think, is that, speaking from personal experience, science these days is mostly structured around one permanent group leader who spends all their time securing funding/teaching/administrating, while the actual research is carried out by a series of young PhDs/postdocs who are rapidly cycled out of science and replaced with a new young cohort. As science becomes more complex and subtle, it takes longer and longer to get to the forefront of a field and really understand what the next big advance or step should be, and then a sustained period of uninterrupted, focussed work to make that big step. There is no one left in the system to do that kind of work: just group leaders who are too busy, and young researchers who are still learning or focussed on incremental advances to establish themselves.


Like, academia has been trying to solve the protein folding problem for decades, including using deep learning, and making no progress. And then DeepMind comes along and does it almost straight away. Why? Because they could dedicate a team of professionals to it nearly exclusively for an extended period of time. Academia should be so ashamed of itself that it got so badly outdone by the private sector on a problem that is really quite far from any economic payout.


That’s part of the story, for sure. DeepMind employees are just academics in a different environment where they can focus.

However, the advantage of DeepMind's access to compute resources can't be overstated here. Not just in attacking the problem itself, but in the free exploration and experimentation they enable. An academic could probably get access to enough compute to train AlphaFold, but never just to mess around with.


Perhaps, but that is a related problem, right? No individual academic would have enough time to mess around with all those resources productively, and their postdocs/PhDs wouldn't have the deep expertise needed. But also, it's pretty routine for academics to get tens of millions of CPU hours to burn these days. Is 128 TPUs or whatever really impossibly more powerful?
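
If it helps, here's a back-of-the-envelope comparison in Python. Every figure in it is an assumed ballpark for illustration (the grant size, the per-core and per-chip throughputs), not a quoted spec:

```python
# Rough comparison of a large CPU-hour grant with a month on 128 TPU chips.
# All figures are assumed ballparks for illustration, not quoted specs.

CPU_CORE_GFLOPS = 50          # assumed sustained throughput per CPU core
TPU_CHIP_TFLOPS = 100         # assumed dense-matmul throughput per TPU chip

cpu_hours = 30_000_000        # "tens of millions of CPU hours"
tpu_chips = 128
tpu_hours = 24 * 30           # one month of continuous use per chip

cpu_flops = cpu_hours * 3600 * CPU_CORE_GFLOPS * 1e9
tpu_flops = tpu_chips * tpu_hours * 3600 * TPU_CHIP_TFLOPS * 1e12

print(f"CPU grant: {cpu_flops:.1e} FLOPs")
print(f"TPU month: {tpu_flops:.1e} FLOPs")
print(f"ratio: {tpu_flops / cpu_flops:.1f}x")   # roughly 6x under these guesses
```

Under these (assumed) numbers the TPU pod wins on raw arithmetic, but not by orders of magnitude.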


They are very fast, but not in a different universe than GPUs, and surprisingly not that hard to use.

The issue is an academic might be able to come up with $10k or $50k to run a big, important experiment for a week on a bunch of TPUs. But they certainly can't blow $10k or $50k to try a dumb idea that will probably fail in interesting ways.


Academics rarely pay for their time on supercomputers. Instead they get grants of millions of CPU hours that they have to use by the end of the quarter, or else they lose out on future allocations, so I think they routinely throw a lot of computational resources at problems.
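
To put a (hypothetical) number on that, here's what exhausting such an allocation implies; the grant size is an assumption for illustration:

```python
# What a "use it or lose it" quarterly allocation means in practice.
allocation_cpu_hours = 5_000_000   # assumed quarterly grant
hours_in_quarter = 24 * 91         # roughly 13 weeks

# Cores that must run 24/7 for the whole quarter to exhaust the allocation:
cores_needed = allocation_cpu_hours / hours_in_quarter
print(f"{cores_needed:,.0f} cores running continuously")  # about 2,300 cores
```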


Yes, but they only get so many computrons in their budget.


Interesting anecdote, thanks. You might have heard the "truism" that as soon as a task is handed from an individual to a group, efficiency drops by half. Actually, I've heard it mainly from the business world. It's hard to reconcile with all the evidence that the human species evolved through team hunting, or with the amazing coordinating abilities of top sports teams. So methinks the problems may be in the ways teams and incentives are organized.


DeepMind's protein folding work built on the evolutionary analysis of protein structure, that is, on exploiting the fact that the two (or more) pieces of a protein that physically fold together have to evolve together (a toy sketch of that signal follows below). That means a huge number of proteins had to be analyzed genetically and then, more tediously, structurally, to provide Google with a training set. Don't ignore all the incremental research that had to be done before the Google team could even get started. A lot of teams were building ML systems to exploit this. The DeepMind team just had more experience tweaking this kind of system, and access to more extensive resources.

Newton, at least, admitted that he stood on the shoulders of giants.
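
A minimal illustration of that coevolution signal, with an invented toy alignment, using mutual information as the simplest possible statistic (real pipelines use direct-coupling methods on vastly larger alignments):

```python
import math
from collections import Counter

def mutual_information(col_a, col_b):
    """Mutual information (bits) between two alignment columns."""
    n = len(col_a)
    pa, pb = Counter(col_a), Counter(col_b)
    pab = Counter(zip(col_a, col_b))
    return sum(
        (c / n) * math.log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
        for (a, b), c in pab.items()
    )

# Toy alignment of six homologous sequences, four positions each.
# Positions 0 and 2 mutate together across the family, which in a real
# alignment would hint that those residues are in physical contact.
msa = ["ACDE", "ACDE", "GCHE", "GCHE", "ACDE", "GCHE"]
columns = list(zip(*msa))

for i in range(len(columns)):
    for j in range(i + 1, len(columns)):
        mi = mutual_information(columns[i], columns[j])
        print(f"positions {i},{j}: MI = {mi:.2f} bits")  # 0,2 stands out
```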


Sure, PDB and many ideas from academia were essential. But that doesn't change the fact that no govt. funded research group was anywhere near the same ballpark as DeepMind. You say the "DeepMind team just had more experience", which is my exact point: we should build govt. funded teams that have the same expertise and resources. DOE has some of the largest supercomputers in the world; there's no reason we can't resource govt. funded research just as well.


You are right. Google's private research has gotten ahead of publicly funded research here. The public sector will have to catch up. I'll leave the how to the folks who know how to do this. Do we need a new ML supercomputer center? Would a funding initiative suffice? Right now, so much of ML tuning is a dark art, but it is going to have to become a skill, like statistical analysis, that can be learned by and tends to work for anyone.

I know that Google's was not the only team working on the protein folding problem and taking a similar approach. Google has been keeping its ML tools out in the open. If nothing else, it's a way of getting people to buy Google computrons.


Yes, there was an interesting interview with Sam Altman on Ezra Klein's podcast, where he said they tried to get public funding for OpenAI but there was no interest. So now they have to go the private route. Seems like a big mistake to me.


I wouldn't say it's that clear-cut. The people DeepMind hired came from academia; many have PhDs and were already studying these subjects. DeepMind is able to provide many more resources than a small college research program. At the same time, DeepMind never would've been able to achieve this without years of previous work from academics.

(And DeepMind's protein folding didn't happen "straight away" either. It took a long time with many iterative successes)


Yeah, academia is great at training people but should aim for more than that, i.e., solving big hard problems. Or at least govt. funded research should aim for that. I mean, many people thought the problem would take a lifetime to solve, so on that scale it was incredibly quick.


There is a crucial technical issue with DeepMind.

The details of deep learning architectures were developed through the use of massive data. Certainly in the beginning Google, FB, etc. had several orders of magnitude more data, and the computing power to explore it.

Even now, data may be more expensive than computing power.


The most important source of data by far for AlphaFold is the PDB, which was developed over decades at great public expense and is open to all.


One thing to note, at least in the US, is the very different ways the various funding agencies conduct proposal review. Proposals to the NSF tend to be reviewed by committee, which itself tends to generate a bias toward conservative and incremental work. Proposals to the DoD, by contrast, need to satisfy the interests of a much smaller set of program managers. Since a program manager's success is tied not to any given proposal but to the success of any program in their portfolio, they tend to favor the new, shiny, and risky proposals. In practice, most academics I know prefer to send their more high-risk and novel proposals to friendly program managers in the various defense organizations, while sending proposals with a stronger track record to the NSF.


As someone who mastered out of a top American PhD program in analytic philosophy, I can say that the issues have as much to do with the state of higher-education economics as they do with progress on substantive issues in the field (whether there actually are substantive issues is itself the subject of really interesting debate in philosophy; cf. the literature on quietism).

One area Noah didn't cover but that's equally troubling and caused a lot of stir a few years ago is the replication crisis in the social and psychological sciences. Apparently robust, textbook results systematically failing to replicate does an awful lot to undermine the credibility of a field. Publication bias discourages people from pursuing work that can be really path-breaking because the incentives only favor positive findings, so people tend to study things where they know they'll find a result.
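
The publication-bias point is easy to see in a toy simulation: if studies of a true null effect only get published when p < .05, the published record shows a spurious effect every time. (All numbers below are arbitrary.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 1,000 studies of a TRUE NULL effect; only "significant" ones get published.
published = []
for _ in range(1000):
    control = rng.normal(0.0, 1.0, size=30)
    treatment = rng.normal(0.0, 1.0, size=30)   # no real effect at all
    if stats.ttest_ind(control, treatment).pvalue < 0.05:
        published.append(abs(treatment.mean() - control.mean()))

print(f"published: {len(published)}/1000 (the ~5% false positives)")
print(f"mean published effect size: {np.mean(published):.2f} (truth: 0)")
```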

And on top of all this, we've got Noah's recent post on the technological progress slump. There doesn't seem to be as much low-hanging fruit out there, and there are a lot more people out there looking for it.


Andrew Gelman's blog had a lot of crazy stuff on the replication crisis (I'm guessing you already knew that though) ;)


What about CRISPR gene editing? Is this not an example of a rich new vein to mine? Its technology and uses seem to be advancing quite rapidly. And the debate over its uses -- somatic gene editing versus embryonic gene editing -- suggest that this vein is splitting into new, competitive pathways which will accelerate discovery and application.

Isaacson's Code Breaker tells this story well -- and I think it contradicts the "one funeral at a time" theory, in that the competition between scientists (Jennifer Doudna and Feng Zhang) was very much a living and generative dynamic.


Sorry, double commenting, but another example is the implementation of mRNA vaccine technology. In that case, it was regulations holding back the rollout of a new tech for use in human beings (and associated clinical and other research directions). It took an emergency use authorization under a bioterrorism law in order to get mRNA past the starting post. Prior to that, it was a "dead end" of research, because the tech was considered to have problems.


Basic research grants using CRISPR were being funded pretty soon after it was developed, so I think this is a good point.


One example doesn’t disprove an overall thesis.


But the thesis posits new veins to be found and exploited. This is one example of a new vein, not a contradiction of the thesis.


I think there's a few different aspects being suggested by this post. One that I hadn't heard discussed before is the mismatch between how quantity of researchers is determined (i.e., teaching loads) and how quantity of researchers is best utilized (i.e., productivity of recent research directions in the field).

Some of the rest of the discussion though seems close to what I think of as naive readings of Kuhn, valorizing "revolutionary science" and undervaluing "normal science". It really is important to fund large collections of people working on the same topics that are recent trends so that there can be a lot of cumulative work.

But I think one problematic factor that pulls too far in this direction, which you didn't include, is the extreme selectiveness of things like major grant agencies and top-department hiring. When there are more amply qualified individuals and projects than one can fund, the committee ends up bogged down in digging through details to try to disqualify candidates. The disqualifications at this stage of the process (as opposed to disqualifications in the early stages) probably, on net, make the pool more conservative and biased toward existing trends. I'm constantly pushing everyone who will listen to make the final stage of decisions involve randomization, both to avoid this bias and to save the time of the committee members, who often spend many hours on these last few arguments.
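
A hypothetical sketch of what that final-stage randomization could look like (names, scores, and the bar are all invented): screen on merit first, then draw lots among everyone deemed fundable instead of litigating marginal distinctions.

```python
import random

# Stage 1: screening has already scored the shortlist (invented data).
scores = {"proposal_a": 9.1, "proposal_b": 8.8, "proposal_c": 9.0,
          "proposal_d": 8.9, "proposal_e": 9.2}
FUNDABLE_BAR = 8.5   # everyone at or above the bar is deemed fundable
SLOTS = 2            # awards available

# Stage 2: a lottery among the qualified, not another round of debate.
qualified = [p for p, s in scores.items() if s >= FUNDABLE_BAR]
print("awarded:", random.sample(qualified, k=SLOTS))
```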


Nice one. I think you have to separate the humanities/social sciences from the sciences. There's ultimately a finite amount of knowledge to generate about Shakespeare or ancient Rome (and note I'm a liberal arts person who loves these subjects). A lot of these subjects feel more like scholasticism now, just debating someone else's theory.

Social sciences are also hitting methodological limitations. The idea of just comparing an A group and a B group and seeing if p<.05 is turning out to be a pretty useless way of studying the human brain, with often contradictory results.
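
The "often contradictory results" part follows directly from low statistical power. A quick simulation (effect size and sample size are arbitrary): identical studies of the same modest true effect routinely disagree about significance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def one_study(true_effect=0.3, n=30):
    """A single underpowered A/B study of the same modest true effect."""
    a = rng.normal(0.0, 1.0, size=n)
    b = rng.normal(true_effect, 1.0, size=n)
    return stats.ttest_ind(a, b).pvalue

pvalues = [one_study() for _ in range(20)]
hits = sum(p < 0.05 for p in pvalues)
print(f"{hits}/20 identical studies reached p<.05; the rest 'contradict' them")
```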


"Encouraging not just novel research, but research in *novel directions*. Asking questions no one has asked before. Using methodologies no one has tried before."

Umm, err... This is not a new or particularly big idea. Back, I believe, in 1969, Jean Piaget gave an address at Johns Hopkins where he said that if you want to be creative in a field, study all the adjacent fields. We've known this for a long time. Doing something about it is another matter.

As far as I can tell, the university system was never really designed to generate novelty. Oh, sure, novelty is highly regarded, novelty in the past. The intellectual heroes are the ones who come up with novel ideas. But the system itself doesn't know how to do it. Simply saying "seek out novelty" is not helpful. If you really think you are being helpful, then, with all due respect, you are part of the problem, not part of the solution.

Fast grants, that's another matter. It's important, but given the will and the means, it's easy to do. We need more of it. But they're working within well-explored intellectual territory. How do you get out in the "here be dragons" region and stay there?

Oh, I do like your mining analogy. I think it's apt. Ores are not regularly distributed in physical space, and our methods of predicting where they might be are crude. Same with ideas in intellectual space. Check out a working paper where I build on a model Paul Romer proposed in 1992; in particular, see the section "Stagnation, Redux: Like diamonds, good ideas are not evenly distributed": https://www.academia.edu/43938531/What_economic_growth_and_statistical_semantics_tell_us_about_the_structure_of_the_world


Two Things:

- Really kicking myself in the ass for not reading Thomas Kuhn's The Structure of Scientific Revolutions.

- Interesting post/theory


Maybe this is just the naivety of being an undergrad, but I feel like analytic philosophy is doing loads of interesting things (albeit they might all be wrong for Wittgensteinian reasons I don't really understand). We came up with a new plausible normative theory in the 1990s!! Wild. Parfit just created a completely new field of ethics with population axiology, and there's currently pioneering work being done on decision-making under normative uncertainty.

It's true no one seems to be making progress on the sorts of foundational issues around Russell's paradox and that sort of thing (although maybe I just haven't been reading that sort of philosophy), but I for one like the switch to normative theory.


There's a type of survivorship bias here. We look back at the golden years of the early 20th century and forget how much research back then was incremental. Boltzmann proposed quantization in 1877; Planck took it up in 1900, but modern relativistic QED was only developed after World War II. Also, academia was structured a lot like it is now: research was done by faculty, and faculty were expected to teach as well as do research. This has been true since at least the 17th century, and it was definitely true in the late 19th century during the great industrial university boom.

One of the things about research is that one rarely knows what is going to be earth-shaking. George Boole did his seminal work on Boolean algebra in the early 19th century; he was, by the way, a college professor. More than one research director has quipped that 95% of the stuff they fund is garbage, but that they have no way of identifying the good 5%.


Specialization and the need to publish may be limiting factors in achieving broader contributions and idea-sharing. In the field of computer engineering, there's a bit of a split between professionals who freely write casual blog posts and academics who write detailed technical journal articles.

While I go through IEEE papers, the ones I read come through my company's subscription. Without that, the cost of reading research is prohibitive. Academics have had difficulty connecting with the software industry at the same level as industry leaders do.

Of course, just making papers free isn't a panacea either. IEEE has open access journals that anyone can read, but they are rarely shared as much as an industry leader's blog post on what they built over a weekend.

Improving communication between academic fields, and crossing them more with industry, is an important step toward cross-pollinating ideas, which may improve innovation. However, it is easier said than done.

But this may also be specific to engineering industries, where there is room for industry professionals who have relatively little academic experience.


"No problem: just think of something nobody else has."


I think there are some good reasons why academic funding (I'm thinking here of my field of study, part of basic science) is structured as it is. Primarily there is the issue of "selling" basic science to taxpayers. It is easy for people without a background in science to see much of basic science as waste. Just yesterday we saw articles like this in the right-wing press: https://www.realclearpolicy.com/articles/2021/06/07/georgetown_received_7_million_in_federal_grants_for_new_space_alien_detection_techniques_780116.html

To a scientist this research appears perfectly supportable, but to the scientifically challenged it is "The #WasteOfTheDay is presented by the forensic auditors at OpenTheBooks.com." Now this science can go forward because scientists and administrators at NASA recognize it as basic science worth doing, and NASA as an institution can successfully defend such work as a reasonable (and small) part of its overall portfolio.

I think you might well get more new developments in science if we funded grad students and post-docs directly. Hopefully some students would take the opportunity to go for it (typically working with younger research faculty), while many might choose to work for established faculty and the presumed stability of future employment. But politically this could be a recipe for disaster. Lots of students and post-docs would produce nothing new or groundbreaking, the attacks on "welfare for the educated" would be huge, and without the cover of the established science hierarchies it would likely be a short-lived experiment.


I think a key idea to open up fresh veins of work is to emphasize interdisciplinary study and interdisciplinary research teams. In my own career I've seen that work to produce innovative ideas and lines of inquiry.


Isn't it true that the very last people whose opinion one should seek on the value of ongoing research are the people presently carrying out that research?

Already more than two millennia ago, scholarly work in classics was being conducted in places like Athens, Pergamon, and Alexandria. The best minds and most attention were focused on Homer, of course. It wasn't until the 1920s, however, that Milman Parry (standing on so many shoulders that I suppose the assemblage would have mounted as high as the moon) managed to piece together the evidence that dramatically transformed how we had been approaching Homer's poetry. I like this quote from an article by John F. García on Parry: "It was [Parry] who rendered the contentions of the Analysts and Unitarians moot, for both sides were right in ways that neither had imagined."

Both sides were right in ways that neither had imagined.

There's careerism and CV-building, yadda yadda. Then there's scholarly research. They have nothing to do with one another. The project to compile a complete dictionary of the Latin language up through the 7th century (the Thesaurus Linguae Latinae) was initiated in 1894. They're thinking they'll be done in 2050. Maybe.


I would tell the story differently. There was reportedly a very large classical output of Homeric scholarship, but nearly all of it was lost, except little bits and bobs that survived in the form of margin notes on medieval codices. We know quite little of what classical scholars thought about the origins of the epics, though the contemporary popular opinion reflected in non-Homeric-scholars' writings seems to have been that the texts reached their final forms in early 6th century BC Athens with the tyrant's involvement.

Renaissance and modern scholarship began nearly from scratch. The main way they were influenced by the very little they could reconstruct of the classical tradition was its search for the correct text: it was clear that already by the late Hellenistic period the texts had diversified and scholars were aware of that and trying to deduce the original. The German analytical school of Homeric scholarship in a sense grew out of that search by extrapolating additional fancier theories of hybridization and so on.

Milman Parry was a shot out of the blue precisely because he didn't stand on anyone's shoulders. Instead he took a completely new, anthropological approach and found a living parallel. But would his view have been at all new or controversial to classical scholars? We don't know. I certainly would not presume that classical scholars weren't aware that the epic genre had its roots in illiterate oral tradition. I would assume the illiterate oral epic tradition was still alive and well in Greece and Italy during the Hellenistic and Roman periods, even if all the epics from those periods that have come down to us were composed in writing.


Thanks for the reply. My earlier comment was an effort to offer a different perspective on what Noah terms novelty in academia. I chose Parry and Homeric scholarship precisely because in classics, Parry is often held up as genius who revolutionized an academic discipline or, I suppose, a subdiscipline. So we might rephrase Noah's question this way: Why do people feel like their academic fields are so exhausted that they are most unlikely to produce future Parrys, Darwins, Einsteins, et al.?

For non-classicists: Milman Parry (1902-1935) demonstrated conclusively that features of Homeric poetry that had for millennia been considered odd, anomalous, corrupted, and otherwise weird make perfect sense if the Iliad and Odyssey were not written by a single genius Greek poet, but instead built up over many centuries by successive generations of illiterate oral poets whom we call bards, that is, storytellers who were capable of extemporaneous composition of new lines of verse, new scenes, etc. within traditional songs about Achilles, Odysseus, and so on.

We can see the irony here, of course. Parry with his theory of the oral composition of the Homeric epics kicked Homer off his throne. (I overstate to make a point.) And yet we want to put Parry on a throne of his own.

Milman Parry's son Adam, also a classicist, said of his father that “each of the specific tenets which make up Parry’s view of Homer had been held by some former scholar.” For example, the German philologist F. A. Wolf (1759-1824) believed that the poems were composed orally in the tenth century B.C., in the form of short separate songs later combined. I suspect Milman Parry would himself downplay his originality, and I doubt that he would have a problem with the image of his standing on other classicists' shoulders.

I genuinely believe that the more a person knows about how a particular body of knowledge is advanced or developed, the more clearly he or she will see that so much of that advance takes place incrementally. Genius is extremely rare; novelty is in the eye of the beholder; people who are "visionary" often turn out to be geniuses of self-promotion.

Again, to revert to Noah's argument, I suspect that if researchers today are disheartened about the future of their disciplines, it may partly be a consequence of the fact that in 2021, we tend to have short attention spans, want instant gratification, and dislike the idea of spending an entire career mining a vein of ore that could maybe be petering out.


You make good points.

I just happen to have spent a lot of my life on Homer, so it's hard not to jump in and make corrections. Also be careful with the myth of a continuous Western scholarly tradition from the classical period. The truth is there was a devastating collapse at the end of the classical period followed by a long dark age when even top scholars reverted to flat-earthism. Some of the classical scholarly tradition survived in the East, but in the Western tradition it was rediscovered and reconstructed after a very long gap. The lesson being that progress in science is not inevitable.

Parry can fairly be called a genius. Most importantly, he was someone who was able to look at a very old puzzle with fresh eyes. I think he's a very good example of how sometimes the most important discoveries come from research programs ignored by the mainstream. Parry wasn't recognized at all in his lifetime; it wasn't till decades after his death that Albert Lord started to win people over to Parry's theory. There's nothing new about the problem of academic conservatism.

I think it's somewhat easier today to get unconventional views published, but as difficult as ever to change academics' minds. The "one funeral at a time" theory of academic progress definitely has merit. I think the main reason for it is that well-known human feature of overvaluing sunk costs. As a rule of thumb, once an academic has spent 5 years arguing a position, only irrefutable disproof, and sometimes not even that, will change his or her mind.

There is still a diversity of views on Homer: whether he existed or was a mythical figure invoked by a later bard or bards; whether or not the same author composed both the Iliad and Odyssey; whether the epics in the form that has come down to us were composed in writing or orally.

I would say the middle road positions at this point are that the authors of the Iliad and Odyssey were bards experienced in oral composition-in-performance but who composed the epics we know in writing, and that the Iliad and Odyssey are mostly original stories each mainly by single authors. The Iliad is thought to have at least one late-added chapter, and the Odyssey's adventure-story chapter is thought to be largely traditional.

The Iliad is considered the most original because it is wedged into a very short time period within the traditional broad Trojan War epic cycle, and because there is no known early Greek art depicting any scenes from the Iliad (whereas scenes from the Odyssey and lost Trojan War epics such as the Kypria are depicted very early).

The Odyssey's episodes with the Cyclops, the shape-shifter, journeying to the ends and under the world, and getting washed up naked on a beach and found by a beautiful female are all known to be traditional (the latter two going back to Iraqi epic). But the framing of that traditional adventure story within a much longer story about a trickster who is unshakably committed to home and son and wife and they to him seems to be the novel work of a single author.

Parry is still revered for showing that the Iliad and Odyssey are composed in a style obviously stemming from an illiterate oral tradition of composition-in-performance. But I think most experts today believe the Iliad and Odyssey represent a development away from that tradition towards longer, more highly wrought literature enabled by writing. They are simply too long to have been composed in performance (typically oral epic doesn't last more than a couple of hours - the Iliad and Odyssey are 8+ times that long). And ancient scribes didn't write quickly enough to record a live performance. I can imagine a scenario of an illiterate oral composer greatly slowing down his performance for a scribe, and using the opportunity to tell a much longer and more developed story than would be possible in live performance. But I think it's much simpler and easier to imagine that illiterate bards learned to write.

Cheers.


I appreciate all of that. Yes, I'm perfectly aware that Homer was effectively forgotten in the Latin West (as was reading knowledge of ancient Greek, as you well know) until the era of Petrarch and Boccaccio, and certainly did not mean to imply otherwise with my standing-on-shoulders metaphor. For my part, I have no opinion on whether Iliad 10 was added later but I am sentimentally in favor of the Alexandrian view that the Odyssey originally ended at Book 23.297.


Thanks Jim. I agree Iliad 10 could be original, and I'd also consider the possibility of a team of author-bards composing and performing long-form epic as a team, attributing their own work to Homer (would be pretty hard for one person to sing the whole Iliad, even breaking it up over days). On the Odyssey's ending, I agree with those who doubt the old interpretation that the Alexandrian scholars meant to say it ended there.
