In addition to federal research funds stagnating, the administrative overhead of doing science is also increasing significantly. A standard NSF proposal has about 15 pages of science plus 100 pages of mandated documents, including conflict of interest spreadsheets, facilities lists, data management plans, postdoc mentoring plans, and so on. Every proposal has to include all this information even though fewer than 20% get funded. It can take at least 6 months and often over a year to get a decision on a proposal, so one has to be sending them out fairly frequently to keep up a funding stream. There is also a greater emphasis on partnerships and big collaborations to get research funding, but large collaborations have a much greater communication overhead. More and more administrative mandates devour scientists' time, with limited benefit to show for it.
The metric for how many ideas we're finding, for example counting Nobel Prizes, is hopelessly squishy. Some are given to scientists who have made many contributions, but the committee has to select just one (Einstein for the photoelectric effect -- they picked the work easiest to describe). And the impact of, say, the first observation of slightly higher superconducting temperatures is many orders of magnitude less than that of the identification of the structure of DNA or the observation of gravitational waves.
The "Moore's Law" analyses of the exponential rate of growth of the capability of some particular technology are indeed the result overlapping curves in which each advance in the technology saturates. And the overall timescale is really the period of a cycle which passes through innovation, development of manufacturing capability, development of a market for the products, all feeding back to stimulate new innovation. Take batteries. If you plot watt-hours per kg from 1900 (Volta) to 2000 (Li-Ion), you find a doubling time of about ten years. That takes us from telegraph repeaters through flashlights to hand held power tools and leaf blowers. Opening of markets for cellphones and electric vehicles has almost certainly changed the slope of the curve by increasing the market pull, and introducing production facilities so expensive that much more R&D falls out of the investments, and the time permitted for recovering the cost has to shorten.
So I think that asking about the rate of science without looking for the obstacles to completing the cycle of adoption may miss the actual limiting steps.
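For anyone who wants to check the doubling-time arithmetic behind the battery example above, here's a minimal sketch in Python. The data points in the usage line are hypothetical placeholders, not figures from the comment; plug in real Wh/kg values to test the claim.

```python
import math

def doubling_time(years: float, start_value: float, end_value: float) -> float:
    """Doubling time implied by steady exponential growth between two data points."""
    doublings = math.log2(end_value / start_value)
    return years / doublings

# Purely hypothetical placeholder figures -- swap in real specific-energy data
# (e.g. early lead-acid vs. modern Li-ion, in Wh/kg) to check the claim.
print(doubling_time(years=100, start_value=25.0, end_value=150.0))
```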
I remember reading that the Nobel was going to go to Einstein for relativity, but Henri Bergson, an influential philosopher at the time, argued that relativity didn't show that time slowed down, only that clocks did, and somehow managed to convince the Nobel committee to steer Einstein's prize away from relativity.
Anton Howes (innovation historian) talks about this on a recent podcast. But we also do need to keep innovating. https://www.thendobetter.com/arts/2021/5/21/anton-howes-on-innovation-histroy-the-improving-mindset-and-progress-studies-podcast
Did you read David Roberts' post yesterday about new economic modeling on grid development?
https://www.volts.wtf/p/rooftop-solar-and-home-batteries
Big evidence on the techno-optimism side. Some of the most serious people in the business of forecasting for utilities think that it's actually _cheaper_, over the next fifty years, to de-carbonize the economy with a mass rollout of distributed energy, than to continue "business as usual" with maintaining and expanding the transmission grid. And that's completely leaving aside the health / environment concerns. ("What exactly is the business case for immolating your customers?")
Once you have mass-scale solar and storage, you can systematically shave down the peak level of load -- the worst case scenario that you hit maybe two or three days a year. The _whole grid_ has to be built so it won't fall apart under that peak load. Knocking that down by 10-20% _drastically_ lowers the cost of the transmission grid and distribution lines. That cost savings _way_ more than offsets the cost of the distributed resource investments. Like, by at least hundreds of billions of dollars.
And note that this argument is not vulnerable to the "but winter!" critiques from anti-renewables cranks. You don't actually need a ton of energy to do this kind of peak shaving -- we're talking about only offsetting like 3-5% of your daily energy use, just the 3-5% that happens to contribute toward peak demand. Your rooftop solar can cover that on any day of the year. Maybe not in an Alaskan winter, but whatever, it works for the land where 90% of the US population lives, and for a lot of the rest, community-level wind + storage probably works too.
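To make the shape of that peak-shaving argument concrete, here's a minimal back-of-envelope sketch in Python. Every number in it is an assumed placeholder (peak load, grid capacity cost, distributed-resource cost), not a figure from the comment or from any real study; the point is only the structure of the comparison.

```python
# Back-of-envelope sketch of the peak-shaving argument.
# Every input below is an assumed placeholder, not a figure from any real grid study.
peak_load_gw = 700.0        # assumed system peak load
peak_reduction = 0.15       # the 10-20% knockdown discussed above, taken as 15%
grid_cost_per_gw = 1.5e9    # assumed $ of transmission/distribution build-out per GW of peak
der_cost = 100e9            # assumed total cost of the distributed solar + storage

avoided_gw = peak_load_gw * peak_reduction
avoided_grid_cost = avoided_gw * grid_cost_per_gw
net_savings = avoided_grid_cost - der_cost

print(f"Avoided peak capacity: {avoided_gw:.0f} GW")
print(f"Avoided grid build-out: ${avoided_grid_cost / 1e9:.0f}B")
print(f"Net savings after paying for the distributed resources: ${net_savings / 1e9:.0f}B")
```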
Such a nice piece. Thanks.
Imho, there are three aspects. First, more people (researchers?) are actually involved in duplication efforts rather than adding to the pool of NEW knowledge/outcomes. The Wikipedia model of crowd collaboration is mostly crowd duplication.
The second point is about the seeding of NEW S-curves. Yes, whatever the discovery, it will plateau in the future. The history of science shows sharp growth (the middle part of the S-curve) when more seeding (new ideas) happens. My fear, maybe controversial, is whether over-dependence on mechanization is slowing down seeding; after all, a new idea first comes from thought experiments. The photoelectric effect was discovered by Einstein first through thought experiments, right? Seeking CERN proof of new particles at the bottom of an S-curve won't help us; the best example being the Higgs boson.
The third point is a little more controversial; actually I don't know much, I just wonder: is the intelligence/creativity of Homo sapiens at the top of the pyramid slowing down (an S-curve?)? Let's accept one truth: the growth of science has always been directly proportional to a handful of pathfinders. Whether Galileo, Newton, Darwin or Einstein, only a few super geniuses seeded the later explosive growth. The situation is unlike business leaders; it's more akin to the super geniuses of fine arts like painting, music and dance. Though the number of dancers has increased exponentially, only a few new dance forms have been created. I haven't heard of any painter better than Picasso. Has mechanization made our minds less innovative, lowered our IQ? If so, what effects will AI and robotics have on the future growth of science?
A few random thoughts.
Actually, now that I think of it, the view that thought experiments come before empirical experiments is very wrong, at least for all the most famous findings in physics.
Newton's work on mechanics, gravity and calculus obviously came from observations of how things move in real life.
Galileo got famous for being the first person to use telescopes for watching the sky, and many of his smaller findings were what led people to believe that Copernican heliocentrism was actual physical reality. And Copernicus got his model from observing the paths of the planets!
Einstein was pretty much a pure theoretical physicist, but he heavily relied on previous experiments as a seed for his ideas. Special relativity came from the results of the Michelson–Morley experiment, and his work on the photoelectric effect came from the results of Lenard's experiments.
Similarly, Planck's work started with his attempts to find a theory that could explain the findings of black-body radiation experiments. The Bohr atomic model was an attempt to explain why the Rutherford model, which was built upon the findings of scattering experiments, is stable.
Darwin got his evolutionary model from his observations during his travels. Similarly, Mendel got his model of heritability from careful experimentation and observation with peas in his garden.
If we rely on thought experiments only, without empirical observations and experimentation, we are not doing science, we are doing philosophy. And not good, modern philosophy that increases our knowledge, but old, rusty philosophy that had already been done to death by the Greeks.
Similarly, all science is done on the shoulders of giants. Newton's mechanical findings were predated by Galileo's experiments. Einstein's mathematical framework for SR was predated by Lorentz's and Maxwell's theoretical work. Newcomen's steam engine could never have been invented without dozens of previous attempts by Heron, Savery and others, as well as the contributions of thousands of people over hundreds of years that finally brought metallurgy to a level that allowed high enough pressures.
Also, it's no accident that almost the entirety of modern physics was invented in central Europe. And that isn't because speaking German or being Jewish gives you superpowers; it's because all these people knew each other, or at least of each other, and were in contact with each other. They crowdsourced the advancement of physics, one individual paper at a time, building new experiments to check previous theories and making new theories to explain the results of previous experiments. That's not duplication, that's how science works.
Searching for the existence of new fundamental particles is not any less important than trying to measure the exact speed of aether winds. Nobody expected that that experiment would result in nuclear power and GPS satellites.
> The photoelectric effect was discovered by Einstein first through thought experiments.
Just to be pedantic, but the fact that the energy of the electrons in the photoelectric effect depends on the wavelength of the light (as opposed to the intensity, which was the prediction of classical theories) was first shown by Lenard in 1902. Einstein's great contribution was to connect this result with Planck's work on black-body radiation, thereby providing evidence that quantum mechanics is not just a weird math trick that's useful in one specific situation, but a physical reality applicable to a lot of (sub-)fields.
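For readers who want the relation behind "energy depends on wavelength, not intensity", here's a small illustrative sketch of Einstein's photoelectric equation in Python; the work-function value is just an assumed example, not tied to Lenard's actual setup.

```python
# Einstein's photoelectric relation: E_max = h*c/wavelength - work_function.
# Light intensity changes how many electrons are ejected, not their maximum energy.
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron-volt

def max_kinetic_energy_ev(wavelength_nm: float, work_function_ev: float) -> float:
    photon_energy_ev = H * C / (wavelength_nm * 1e-9) / EV
    return photon_energy_ev - work_function_ev

# Illustrative only: an assumed ~2.3 eV work function; a negative result means
# no electrons are ejected at all, no matter how intense the light.
for wavelength in (250, 400, 550):
    print(wavelength, "nm ->", round(max_kinetic_energy_ev(wavelength, 2.3), 2), "eV")
```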
Fun fact: Lenard got his Nobel Prize in 1905, the same year Einstein published the photoelectric-effect work that would get him his own prize!
Not so fun fact: Lenard was a huge Nazi and one of the main forces behind the concept of "Jewish Physics" and its rejection.
I'm staking out the middle ground in this debate.
On one hand the "Great Stagnation" has never been more than a myth: technology has continued to advance rapidly throughout our lifetimes. The real underlying complaint is that since the 1980s first-worlders have seen investment and growth shift to other parts of the world.
On the other hand we have made very little progress on truly renewable energy, and we are right now in the midst of a dire collective-action-failure crisis with our still growing fossil fuel consumption and greenhouse gas emissions and no serious plan to stop that growth. Last summer's wildfires should have made clear to everyone that we have already exhausted Earth's ability to sustain our behavior without violent reactions.
It's not only techno-optimists who seem to want to wish away the very serious problems of limited energy supplies and side-effects of consumption. Most sci-fi assumes that energy in the future will somehow become free and unlimited, even if in pessimistic sci-fi it's typically hoarded by bad guys. For example here's a beautifully grim new vid mixing animation and live action to imagine a Bladerunnerish decaying city in which good guys eke out sustenance from urban farming while bad guys hover around in hovercrafts. https://www.youtube.com/watch?v=LsGZ_2RuJ2A
One area where you definitely do need more rigor is in supporting your statements on solar power and batteries. Your evidence seemed to consist only of lower panel prices relative to the past and the growing quantity of installed panels. But solar is still very expensive relative to other power sources, and the growth is entirely driven by state support, not markets. Producing solar panels is energy intensive, and much of the falling cost owes to Chinese state support through megadams or cheap brown coal. Do the numbers: compare the energy costs of production to the expected energy output over the panel's lifetime, depending on where it's located. The picture is still quite grim. The solar industry still consumes far more energy than it produces. Solar subsidies (including requirements that distributors purchase solar power at regulated prices) make the problem worse, mainly by driving installations in gray-sky areas to which solar is especially poorly suited.
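Taking up the "do the numbers" suggestion, here's a minimal sketch of that comparison in Python. All inputs (embodied energy, panel rating, capacity factors, lifetime) are assumed placeholders chosen only to show the structure of the calculation; real figures vary a lot by panel type and location, so the conclusion depends entirely on which data you plug in.

```python
# Compare the energy cost of making a panel to its expected lifetime output,
# for two assumed locations (all inputs are placeholders, not measured data).
def lifetime_energy_ratio(embodied_kwh: float, rated_kw: float,
                          capacity_factor: float, lifetime_years: float) -> float:
    annual_output_kwh = rated_kw * capacity_factor * 8760  # 8760 hours per year
    return annual_output_kwh * lifetime_years / embodied_kwh

for site, cf in [("sunny site", 0.20), ("gray-sky site", 0.10)]:
    ratio = lifetime_energy_ratio(embodied_kwh=600, rated_kw=0.4,
                                  capacity_factor=cf, lifetime_years=25)
    print(f"{site}: lifetime output is about {ratio:.0f}x the energy used to make the panel")
```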
Batteries are of course also energy-intensive products: if you compare solar alone to solar + battery, solar + battery shifts the time of power output to when it's needed, but at a very high price in terms of the system's ratio of energy output to energy consumption. In terms of minimizing greenhouse gas output, gas turbines alone beat solar + gas turbines, which beat solar + batteries (and solar alone is not an option).
>>>On one hand the "Great Stagnation" has never been more than a myth: technology has continued to advance rapidly throughout our lifetimes. <<<
It might be a myth, but there's at least some evidence to support it (mostly in the form of a slow-down in the growth of productivity). About 10-12 years ago, there was quite a bit of discussion here and there prompted by the popularity of the cable series Mad Men (don't know if you've seen it; it's great IMO). Anyway, the show depicted people in the 1960s (the series begins in the spring of 1960, as Nixon and Kennedy are getting ready to square off). A lot of people made the pretty plausible observation that the lifestyles of middle class Americans had not changed nearly as much in the ensuing 50 years (1960-2010) as they had in the preceding years (1910-1960). People in 1960 in many regards had lifestyles and enjoyed a level of technology highly similar to people a half century later. They flew on planes. They lived in single family homes. They got up. They took a hot shower. They got something out of the fridge. They made coffee. They used a toaster. They got in a car. They fought traffic. They drove to an office. They rode an elevator. They used phones. They watched TV. They listened to the radio. They went to the cinema. They shopped at supermarkets. They heated their houses with fossil fuels. They ate pizza. They wore jeans. And sometimes dresses or suits. They lived into their 70s. And so on. Pretty recognizably "modern!"
Contrast that with the change between 1910 and 1960...
Take a walk through a well-preserved Roman house. Of course the technology will be missing, but you'll be surprised how familiar it looks and feels. You might remember Don's line from that series that Americans enjoy a lifestyle that in the rest of the world only kings have. And if you're in Beijing you should know that all of the technology that the US middle class had in the '60s has arrived in China much more recently. Which gets to my point about growth shifting elsewhere feeling to first-worlders like stagnation. Their lifestyles have nevertheless changed very much with computers, cell phones, the internet. And all the tech we've had since the '60s or longer has become much cheaper and/or better. Even if these days driving a Cadillac doesn't make you feel like the king of the road.
Basically all actual scientific work these days is done by people who are ostensibly trainees -- graduate students and postdocs. Professors only have time to write grants and manage.
Since the system is set up this way, that means if you want to pump money into scientific research, that money has to be used to hire graduate students and postdocs to do the work. This means, for instance, that you can't really get one person to work on a project for more than a couple years, as they eventually have to either graduate or leave their postdoc position. And since if a project doesn't pan out, it may mean a student doesn't get their degree, or a postdoc has to leave their career forever, there's a big incentive to try to "hit singles".
This is, a priori, a very strange way of doing things. I wonder how much this has changed over the course of the 20th and 21st centuries, what kinds of effects it has had, whether other countries (Japan?) don't use trainees to do all work, and what would happen if we could set up institutions to fund permanent "individual contributor" roles for experienced scientists to do hands-on research.
(Incidentally, this structure seems comparable to certain other fields which also require a lot of education and are prestige-obsessed. Consulting, big law, and investment banking are all structured similarly. The actual work is done by a lot of student interns and overworked junior employees. After a few years you either get promoted into management or leave. I wonder whether this is coincidence or not...)
Totally agree, it's a crazy way to structure the system. It is designed that way, I imagine, because of pressure from senior academics, who benefit most from having a strong hierarchy in which a lot of people are subservient to them.
The metrics for measuring costs are a bit off when it comes to the historical side.
When Galileo was doing his thing, books were still SUPER expensive - like a car, or large appliance. They cost several months' worth of a day-laborer's wages, and only the rich could afford them. The precision optics for his telescopes were cutting-edge, and probably cost a pretty penny as well - AND he had to build the things himself, because he'd only just recently read about them being invented elsewhere. Same goes for the precision glass-blowing necessary to build his "thermoscope", and the precision metalworking for all those pendulums he studied. It's not like everyone just had a clock.
When Newton was writing Principia, books were somewhat cheaper, but he still would have had to fund his personal library all on his own. And his research was an idle vanity project, a distraction from productive efforts like managing his affairs.
Science was an elite endeavour back then because only elites could afford it. Pisans were probably HORRIFIED at Galileo's wasting a whole damned sack of potatoes by tossing them off the tower.
Anyways, I'd love to see an inflation-adjusted, per-capita series of actual science spending going as far back as possible. I'd suspect that it's increased only relatively gradually over time. The headline numbers are probably big only because we've got more scientists.
Finally, I'd also point out that "we've got more scientists" and "the low-hanging fruit has all been picked" means we've got more avenues to pursue. Two hundred years ago, Smith was lucky to be able to piece together supply and demand from early private and public data. Today, economists can spend a whole career studying the price elasticity of a single good, or collecting better labor statistics, or working out game theory. Two hundred years ago, early doctors had to rob graves. Fifty years later, they were still working out germ theory. Fifty years after that, penicillin had yet to be invented. Today, a researcher can spend a whole career studying a single antibiotic, or hunting for new ones, or teasing out the genetic complexities of an antibiotic-resistant bacterium.
If progress is slowing, it's because the amazing progress we've made has opened up entire universes of new questions, not just that those questions are harder. My Master's thesis was pretty simple - "low-hanging", if you will. I just built a device that would help a patient keep their foot muscle steadily flexed during a minutes-long MRI scan. It wasn't even that great at its job - the stupid pneumatics I used kept losing pressure under the flexing. But there's no reason to build it unless you have an MRI machine, and you know what diabetes is as a disease, and you know that diabetic neuropathy (of the foot) is a symptom that you even could or would want to diagnose and manage. The point is, every new scientific discovery poses a hundred new questions. Even with a vastly more-educated and larger population than ever, I think we're just kind of losing the battle with the exponential nature of available things to research.
Think about how many scientists in previously underdeveloped countries we can now discover. Think of all the women left out of science in the 1920s. Think of all the high school students doing genetic assays for science fair. The underlying model may be correct, but there's huge untapped potential out there.
The other problem I have with the conclusion that science is slowing down is that maybe science just isn't very parallelizable. Throwing more people at one problem (e.g., building smaller chips) doesn't speed things up much, as they all just end up working on similar problems and reproducing each other's work. So of course throwing more people at one problem will make it appear like diminishing returns. But you might actually get a very similar growth curve with fewer people, just because it takes a certain amount of time to learn what the problems are and come up with a way to fix them, and having 10 people do that independently doesn't speed it up. In computer science, most problems take a lot of effort to make them parallelize well. It's not clear that we have put that effort in at all with scientific discovery, particularly in something like chip manufacturing, where it's just many companies competing with each other and not communicating.
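The "more people working independently doesn't help much" point maps neatly onto Amdahl's law from parallel computing (my framing, not the commenter's): if only a fraction of the work can genuinely be split up, adding workers hits a hard ceiling. A minimal sketch, with the parallel fraction as an assumed stand-in rather than a measured property of any research field:

```python
# Amdahl's law: speedup from n workers when only a fraction p of the work
# can actually be done in parallel; the serial part caps total speedup at 1/(1-p).
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Assumed example: half the work parallelizes cleanly.
for n in (1, 2, 10, 100):
    print(f"{n:>3} workers -> {speedup(0.5, n):.2f}x faster")
```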
Is there any research that explains where federal R&D spending is going instead? I imagine a lot of it's going to entitlements. In the 2000s a lot went into wars, though I suppose the sequester curtailed some of that.
I'm in over my head, but that has never stopped me before.
Thought experiment: put yourself in the place of the person who, two or three millennia ago, dreamed up the mechanical helix, the waterwheel, mechanisms powered by steam, an air pump, water organ, a chain drive (in a repeating crossbow), the gimbal, etc. The guy was sure he'd become a household name and get as rich as Croesus, but ... nope.
Humanity didn't get a true steam engine until the 18th century. The entire Roman empire might have been chock-full of watermills but wasn't. So my question, with the caveat that this is the first that I've become aware of this mega-debate on stagnation, is this: how much does it matter?
That is, IF social and cultural forces (which I take to have been the reasons why the Greeks and Romans were so much better at generating ideas than capitalizing on and implementing them) are actually far more powerful and influential than a great insight or clever invention, are people asking the right questions in this stagnation debate?
It's not THAT hard to imagine a point in the future when only a significant minority of people will be willing to get vaccinated against a virus like Covid-19. Social and cultural forces! Maybe the danger is not so much that we're running out of ideas or that science is getting too expensive and cumbersome to do well. Maybe it's that everything NOT-science is failing to keep up, even regressing in some areas. The surge of religious fundamentalism in the 20th century is a prime example, but not the only one.
Running out of good ideas? No problem. Let's actually try teaching evolution in Tennessee or Arkansas high schools and see what that does for us.
One aspect of rising cost that I have always wondered about is whether getting each generation of new scientists to the forefront of the field gets progressively more expensive over time. We are not only standing on the shoulders of giants but on the shoulders of giants themselves standing on the shoulders of giants, and so on. It takes a lot of giant-climbing to reach the frontier. Educational technology may then be important in bringing the productivity of science back to its historical trend.
Perhaps a simpler question to answer: is mathematics slowing down? I don't see any particular sign of that. Mathematics is not big science; you don't get more results by throwing big research teams at a problem. It's still mostly individuals and small collaborations, and the frontier is limited by what one person can comprehend in a lifetime. There are still vast frontiers to explore, and some are quite accessible to laypersons. The modern age adds computers to the mix, and faster computers might make some types of exploration easier, but the end goal is always human-understandable results (and sometimes human-useful results).
Even if fewer discoveries are being made, there will still be those who apply those new discoveries in different ways.
Quick note: it's "et al.", not "et all."