Interesting analysis but I don't quite agree with it.
First problem - the decline of corporate R&D is overstated. These graphs show relative shares, and all that means is that governments flooded universities with money, which they spent on expanding the quantity of their output. Quality, however, is often missing. I've been in the tech industry for a long time and seen a lot of R&D happen, but the reliance on academic research is pretty minimal even in the AI space. Part of my job involved reading academic papers for a few years, but I eventually gave up because the ROI was zero. Lots of papers made enticing-sounding claims that, when examined carefully, had caveats that made them useless.
Second problem - the distinction between research and development. Very important for academics, not important at all for companies. When I think about the successful big tech projects I've seen, there was no clear line delineating the two. Experiences and results of development informed research, and research was often the same thing as development, just on more speculative features. The same people would do work that smelled like research for six months, then development of those research ideas into a product launch for another six months, then back again, with no change in job title or function. I myself have done such projects in the past.
Thirdly - the assumption that "papers published at academic conferences" is the same thing as output. Very few people in the corporate world care about publishing papers. It's just a distraction. Unless they came from academia and publishing has become a part of their identity, or they want to keep the door open to returning, papers just aren't going to happen. The only peer reviews you care about are actual reviews by your actual peers, i.e. managers, because that's what determines whether your R&D turns into promotions or bonuses. Google AI research is an odd duck in this respect in that they seem to care a lot about getting published, but that's really rare.
Obviously if you just plot by papers published then universities will look very productive and it'll seem like there's a pipeline or supply chain or something here because that's what universities are paid to do and what they optimize for. If you've actually been in real companies doing cutting edge work with tech, like I have, then it all looks like a shell game over the definitions of arbitrary words. Companies invest a ton into "R&D", just not in a way you can neatly sum and plot on a graph. Often they struggle to even measure it for their own purposes!
Finally - a nationalized energy company that can fund research? I know this is a left wing academic blog but come on. That would wreck the energy markets and deliver absolutely nothing in the way of usable research output. The west already dumps way too much money into public/academic energy R&D labs, as well as VCs pumping up private firms, and then Google also funded lots of energy R&D in the past too (see here: https://www.google.org/pdfs/google_brayton_summary.pdf ). There's very little to show for it because doing proper R&D that matters requires experience in the field, so it all gets done by oil companies, wind turbine manufacturers etc.
I mostly agree with you. Your points chime with some of the insights in Nassim Taleb's Antifragile, where he pointed out that Schumpeter had the wrong idea: the link between research and development is heavily overhyped. A lot of research has limited use, and technological innovation often depends more on risk-taking tinkerers than it does on stultifying research papers.
Second, I'm also quite skeptical about energy R&D: to me, the green energy revolution hinges on battery technology. And with all I've read on the topic, the odds don't look good. Apart from biotech, which merits legitimate excitement, everything I see is hype, hype, hype. At the very least, it seems like we are just at the beginning of several possible breakthroughs and nowhere near even halfway.
As part of a hardware startup of risk-taking tinkerers, amen.
Good luck to you.
Yep. And note that there's no reason a nationalized electricity utility would spend any money on batteries, nor have any relevant expertise (advanced manufacturing). Batteries don't change the quantity of electricity sold much, as the user still has to charge them. And massive battery farms for riding out cloudy windless days just doesn't really work, physically.
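Some rough numbers behind the "riding out cloudy, windless days" point (a sketch; the ~4,000 TWh/year U.S. consumption figure is a round assumption I'm adding, not something from the thread):

```python
# How much storage "a cloudy, windless day" implies for the whole U.S. grid.
# Round assumed figures, not from the thread:
annual_consumption_twh = 4000      # approximate U.S. electricity consumption per year
hours_to_cover = 24                # one full calm, cloudy day

avg_demand_gw = annual_consumption_twh * 1000 / 8760   # TWh/yr -> average GW
storage_needed_twh = avg_demand_gw * hours_to_cover / 1000

print(f"Average demand: ~{avg_demand_gw:.0f} GW")
print(f"Storage to ride out {hours_to_cover} h: ~{storage_needed_twh:.1f} TWh")
# ~11 TWh, i.e. thousands of GWh - orders of magnitude beyond today's grid-battery
# fleets, which is the physical-scale objection being made above.
```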
I agree with you. Honestly, so much is riding on a revolution in battery tech: smaller devices, electric vehicles, energy storage, et cetera. But the chances of it happening are so low. It's not so much of an exaggeration to say batteries are the greatest disappointment of the modern era.
The good news is that batteries have improved enormously over time. I think the sense of disenchantment that surrounds the space comes from a couple of places.
One is simply unrealistic expectations. People said, "let's convert our grids and cars to 100% renewables", which was an absurd goal, and then blamed battery makers for not delivering the impossible. But if you look at things like smartphones or Teslas, they're possible partly due to big accumulated improvements in battery tech.
Another is the frequency with which universities announce what sound like massive battery breakthroughs, which then don't seem to have any real-world impact. Again, the problem is that academic research tends to get ignored on the ground because it's not useful. The first questions for any new tech in the physical world are "how can we manufacture it at scale?" and "how can we do so at reasonable cost?". NSF grantees don't have factory or business experience, so they usually ignore these questions. Announcing breakthroughs is easy if you don't have to care about details like how to actually make them or whether anyone would buy them. Musk talks a lot about this problem - when Tesla started, they thought designing the car was the hard part. Then they realized, no, designing the factory that makes the car is the hard part. The Tesla Gigafactories are partly battery factories for that reason.
From my observations, battery performance seems to be fluky; i.e. great improvements in performance often turn out to be irreproducible, perhaps due to some undetectable variation in one batch of chemicals.
I appreciate the intelligent reply, but 'enormously' is a stretch. Lithium-ion batteries have improved by a factor of two since they were introduced. All the other theoretical alternatives have serious problems. We had batteries in 1800, before the first commercial oil well in 1859, before the commercialization of electricity, before cars. Batteries are one of the few technologies in the world whose modern incarnation won't surprise anyone who woke up from the dead two centuries ago.
I agree on the unrealistic expectations. Most people don't appreciate how the grid works. A lot of people assume it's like a passive reservoir of energy rather than a high-wire performance of matching supply and demand every minute.
Oh, it would even be nice if that academic research had practical utility. Commercial utility can take care of itself down the road like it did with photovoltaics. But it doesn't. It's just people figuring out the thermodynamic limits of different theoretical combinations, doing some calculations, and saying we've shown it's possible to have serious power density improvements if we use x and y as base materials. It's rather tiring at this point.
You actually DON'T want academic research to have a practical focus. The valuable stuff is the theoretical stuff. In fact, we need a lot more theory to make batteries more effective. That's one of the big changes starting around 1870. Before then, you could do anything by just futzing around and tinkering. After that, it became increasingly important to have some idea of what you were doing, which is why universities became increasingly important.
Batteries are much different from when I was a kid maybe 50 years ago. The kind you put in flashlights seemed to last for half an hour back then. Modern ones seem to last for days even with incandescent bulbs. When I used to cut batteries open, it was rather obvious how they worked from what I found inside. I don't have a clue with modern ones. The important stuff is in the coatings and junctions. The ones with a chip in them are beyond me.
If someone dropped by from the early 20th century, they'd be amazed to see an emerging fleet of electric cars competing with gasoline cars. They'd be amazed by our flashlight batteries, possibly as much as I am. The whole idea of a grid load balancing battery would have been science fiction. Even the dumb little backup battery I have for blackouts would have been amazing. It has an AC plug and lasts for hours.
Judging from the academic papers I've read - OK, sort of scanned but earnestly - we are all going to be amazed by what batteries can do in 20 or 50 years. There are all sorts of fascinating ideas being tried. I have no idea what technology those future batteries will use, but I am sure that we are far from some natural limit like the speed of light.
The gap between battery performance and what would be needed to simply replace fossil fuels is enormous.
As an example, the energy storage of the most powerful Tesla is equivalent to three gallons of gasoline.
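A rough check on that equivalence (a sketch; the ~100 kWh pack size and the 33.7 kWh-per-gallon figure for gasoline are assumptions, and this compares raw energy content only, ignoring the much higher efficiency of electric drivetrains):

```python
# Back-of-the-envelope: battery pack energy expressed in gallons of gasoline.
# Assumed figures, not from the thread:
PACK_KWH = 100.0              # large EV pack, roughly a top-end Tesla
GASOLINE_KWH_PER_GAL = 33.7   # energy content of a gallon of gasoline (EPA MPGe convention)

gallons_equivalent = PACK_KWH / GASOLINE_KWH_PER_GAL
print(f"{PACK_KWH:.0f} kWh ~= {gallons_equivalent:.1f} gallons of gasoline (raw energy content)")
# ~3.0 gallons; note an EV turns that energy into motion several times more
# efficiently than a gasoline car does.
```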
Where are you getting your data? I work in energy infrastructure, and batteries have gotten three times as dense (energy/volume) and ~90% cheaper per kilowatt-hour since 2010.
This CleanTechnica article has the graphs, but I can assure you it comes through in practice as well.
https://cleantechnica.com/2020/02/19/bloombergnef-lithium-ion-battery-cell-densities-have-almost-tripled-since-2010/
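To put those figures in annualized terms (a sketch; the ten-year 2010-2020 window and the exact 3x / 90% endpoints are assumptions taken from the comment above):

```python
# Implied compound annual rates from "3x denser and ~90% cheaper since 2010".
years = 10              # assumed 2010-2020 window

density_ratio = 3.0     # 3x energy density
cost_ratio = 0.10       # ~90% cheaper -> 10% of the original cost per kWh

density_cagr = density_ratio ** (1 / years) - 1
cost_cagr = cost_ratio ** (1 / years) - 1

print(f"Energy density: ~{density_cagr:.1%} per year")   # ~11.6%/yr
print(f"Cost per kWh:  ~{cost_cagr:.1%} per year")        # ~-20.6%/yr
```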
Thank you for the correction. I was under the impression the improvement was more than 2x. What metric is that for by the way - energy density? I think a lot of the battery R&D done by corporate labs went into things like reducing charge time or better lifetimes.
I've been reading a history of the lithium battery and electric car. Modern battery technology started at Ford with the first battery that actually shoved ions into a solid. There was a paper published. Ford got a patent. This was in the 1960s when Ford had a ton of money and tax laws encouraged research rather than stock buybacks. The first cobalt electrode was developed at Oxford, though the UK atomic energy research group got the credit and royalties on it in the 1970s. The guy who invented it moved from MIT Lincoln Labs when Congress started imposing rules requiring research be useful.
That's all I've read so far. Lithium batteries were brought to market in the 1990s by Japanese video camera manufacturers. They worked out the kinks in their research labs. Those batteries also went into laptop computers. It was another 10 years before they were being used in electric cars. Now they are being used in Ford pickup trucks, so Ford is getting some payback for that research they funded 50+ years ago.
The story involves Bell Lab type industrial research centers, like the one at Ford, universities, research centers, camera manufacturers and probably a lot of other players. From a corporate point of view, the time scale can be daunting. Only governments, non-profits and, perhaps, closely held private corporations can afford the necessary time scale.
Consider solar energy. The photoelectric effect was known in the 19th century. Einstein explained its quantum nature in 1905. Selenium cells were used for light detection early in the 20th century, but modern solar cells are a spin-off from integrated circuit technology. The technology was taken over the line by the Chinese government's five-year plans, which increased the manufacturing base and drove down costs. Meanwhile, laboratories around the world are experimenting with amorphous solar cells that could be much cheaper to make, multi-spectral cells that can exploit a broader range of light, direct-to-chemical-reaction cells that can desalinate water or fix nitrogen, and so on. I read papers about this stuff all the time. Which ones are going to define our future? Who the hell knows?
There are a lot of interesting research papers published in academia, and I've found many can be practical. However, there's just too large a gap between what academics do and what industry is doing. Scaling research is an underdeveloped field.
My experience was that it varies a lot by sub-field. Some are practical minded and clearly applicable but often you find the best papers are using grants from corporates anyway. Others are flooded with theory papers proposing things that may be literally impossible to implement. That's ignoring all the fields like social science where replicability is a concern!
Sure. The electrical engineering papers in IEEE are quite interesting research but often lack a direct path to a product. Social science research often has no discernible commercial value.
Hmmm. Underrated point. But I'm still skeptical about the quality of the bulk of that research.
Things to read - I don't know what you'd find convincing. Think about the big green success story of the past 15 years, though. It's the huge drop in the price of wind and solar energy. Go investigate how that was done and I think you'll find it happened due to constant iterative improvements, lots of small innovations that we don't tend to hear about, and it'll have been driven by on-site R&D efforts by manufacturers. For example, a big driver of falling solar prices is better fab tech, but the public sector doesn't really do fab research.
Re: your graph. Pretty much anything measured as "fraction of total GDP" is going to be an astronomical sum! But to repeat my point above, an assumption of your graph is that you can precisely measure R&D spend. In reality it's pretty hard to measure or classify work as research/development/other. Companies aren't that interested in making such precise allocations because it's an arbitrary distinction that rarely exists in the real world.
A big part of why Bell Labs hasn't been *visibly* recreated is that modern corporate research and development is much better integrated with actual product development than in the past. Newer companies tend not to have siloed research divisions, or if they have one it's not very important. For example, a big reason Google was able to out-execute Microsoft is that Microsoft created a university-style research silo in the form of MSR and hired a bunch of ex-academics to staff it. MSR did better than most such departments at getting things into products, but even so, their integration rate was tiny compared to what Google pulled off every day. As a consequence it's hard to figure out how much a company like Google or Apple spends on "research" because it's not a neatly defined bucket - and that's just one company; it doesn't even include startups. Go look at their earnings reports and you'll notice that they might easily just report 100% of their engineering salary spend as R&D, for example.
Solar panels were another thing developed by Bell Labs! I guess if you want to consider Bell Labs as existing due to government support then sure, but it's not actually a part of the government. Governments subsidized it but the heavy lifting here was all done by the private sector.
If you want to represent a normal sum of money use dollars. Nobody represents small amounts of money as a tiny fraction of GDP. The whole point is that GDP is enormous.
The first solar panel was invented in the late 1800s by a guy named Fritts actually.
The whole "Government or Private" argument is a bit ridiculous generally. It's both and always has been. Is the iPhone a commercial product? What makes it useful? (Touchscreen, GPS, Internet - all directly government funded)
Professor Scott Galloway had the figures in his "Welfare Queens" post that argued the U.S. government is the most successful venture capital firm in history.
https://www.profgalloway.com/welfare-queens/
Of the $190 billion in U.S. research funding in 2019, government provided the plurality at 43%, followed by business at 31%, nonprofits at 14%, and universities at 13%. Much of this is government transferring its 43% to collaborators in the other three sectors, so the work is symbiotic.
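Turning those shares into dollar amounts (a sketch; it simply applies the percentages quoted above to the $190 billion total, and the quoted shares sum to slightly over 100% due to rounding):

```python
# Dollar breakdown implied by the quoted 2019 shares of a $190B total.
total_billion = 190
shares = {"government": 0.43, "business": 0.31, "nonprofits": 0.14, "universities": 0.13}

for sector, share in shares.items():
    print(f"{sector:>12}: ~${total_billion * share:.0f}B")
print(f"shares sum to {sum(shares.values()):.0%} (rounding in the quoted figures)")
```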
Excellent, but some observations.
The transistor is not basic research like cosmology. It was based on basic research done a half century earlier by Bohr, Planck and Einstein.
Some misses. The greatest industrial research center in America for 200 years was the DuPont Experimental Station. And they very much capitalized!! Nylon. Gunpowder. Lycra. Synthetic fibers like polyester. As well as agriculture.
Sarnoff Labs... TV and radio development, but not the basic research... Marconi et al.
GE Schenectady - basic research on plastics for wire insulation led to polycarbonate and others. Electrical... well, developments, not really basic research.
Still, Bell Labs was the premier research organization of the world.
Bell Labs invented fiber optics in the late 50s, used as a super top secret sensor deep underwater for detection of submarines. They also co-invented the laser, which that scheme needed. But the basic research for both was based on the basic optics of Newton and Maxwell, and Einstein.
https://www.pbs.org/transistor/science/info/qmsemi.html
Well, the foundation of quantum mechanics was established by Bohr and Planck. You perhaps underestimate Einstein's contribution to quantum mechanics, and to the quanta of light - he stood alone in that. Today it's called an LED, a semiconductor. He also developed quantum statistics and thermodynamics with Bose.
Wigner gets credit. Heisenberg none, really. It was Einstein's work that led to and predicted stimulated emission - called the laser.
https://blogs.scientificamerican.com/observations/einstein-and-the-quantum/
I worked at AT&T Labs/Research for several years as a technical staff member, way back before I went to get my PhD in computer science. (For the uninitiated: AT&T Labs was a spinoff of Bell Labs that took a fraction of the computer science research from Bell, leaving the rest with Lucent.) Most of the people I worked with were old Bell Labs hands.
From my perspective, the main benefit of a large research lab was the huge concentration of brilliant researchers. I'm now a CS professor and love my lab, but I only have a small number of local collaborators. At AT&T you could have lunch with Peter Shor and Bjarne Stroustrup as well as a dozen other subject-matter experts. This produced an amazing number of good new ideas, which resulted in an impressive amount of research productivity. If you put a bunch of smart researchers in that environment I imagine they're going to thrive, as long as you lay the groundwork so that (1) those people work on problems that are relevant to the organization, and (2) you have a plan to deploy and monetize that work.
The real problem with AT&T is that by the time I worked there (circa 2000) it had absolutely zero ability to monetize most of the research it produced. Silicon Valley was in the grip of the first dot-com boom and firms on the West Coast were doing amazing things. AT&T was hidebound and constantly tossing good research on the floor. The few examples where they did attempt to monetize were even more embarrassing: I worked on a software project trying to sell music over the Internet, and it was obvious that we were completely outclassed in this effort (Apple eventually licensed some of the audio compression patents AT&T owned, and the rest is history). In principle we could have been a hotbed incubator for startups, but: (1) we were in New Jersey and not Silicon Valley, and (2) management didn't really know how to handle this. Even an "Intellectual Ventures"-style patent strategy might have been lucrative, but AT&T did not execute that. Thankfully.
This touches on something Noah misses. It's not just that universities do research on cosmology (there is lots of application-focused research, just not necessarily in the physics/astronomy department) or that they tend to be siloed.
The biggest difference in shifting research from corporate labs to universities is the people doing the research! You go from trained professionals to kids that don't have a clue what they're doing. I have a PhD and work in corporate R&D. I am much better at research now than I was during my PhD, where it often takes 3 years to figure out what the heck you're doing.
Yeah, but you also go from kids willing to put in 100 hrs a week for subsistence wages to adults who have an actual life they would like to support, and often wish to see their families to boot.
I know that's not what employers expect, but it seems that once people develop a lot of expertise and experience, they start to feel that way.
Some observations from someone who spent some time in academic research and industry (pharma regulatory affairs, not R&D just to be clear). A good summary of the academic/industrial nexus can be found in the symposium proceedings held at U Penn back in the early 1980s, "Partners in the Research Enterprise: University-Corporate Relations in Science and Technology." There were speakers from both sectors and it was really an all star cast with then Congressman Gore (pre-Internet invention), Kenneth Arrow, Bart Giamatti (pre-Baseball Commish), Jim Wyngaarden (NIH Director) and others. There were some very good case studies presented including an intriguing agreement between Du Pont and Notre Dame back in the late 1920s that goes to show these types of agreements are not new.
Bell Labs is a special case (and one of the things Noah did not mention was ground breaking work in computing, the 'C' programming language was developed there). Ma Bell had a monopoly and had both the research arm, Bell Labs, and the manufacturing arm, Western Electric. They could afford to sponsor a lot of basic research as they did not face the type of competition that other businesses were subject to.
Google has a huge market share with a cash flow that allows them to carry out a lot of fundamental research, such as in AI, as noted by Noah. They also do a bunch of other stuff that is not necessarily related to their core business. Along with other software companies such as Microsoft, this is perhaps the best version of the industrial research lab that Brad DeLong writes about in his recently published book, 'Slouching Towards Utopia.'
This is not to say that all corporate R&D is robust. In the sector that I worked in, biopharma, there has been a shift away from corporate R&D to direct purchase via merger and acquisition. It was more cost effective for a pharma company to purchase a competitor that might have a blockbuster drug as the cash flow from sales quickly covers the acquisition cost (Pfizer were among the first to do this big time). The same thing continues to go on with the purchase of biotech assets that have originated in academia, spun out into a company that got early stage VC funding and then subsumed into a pharma company. Since the biotech revolution began in the 1970s, many companies (I don't have a good number but it's well over 1000) have been founded but only a small number remain independent (Amgen, Gilead, and Biogen), the rest either stumble along, get acquired or go out of business.
Yes, as Noah points out the energy sector is one area that might profit from a better R&D model but it's mainly in the distribution and storage areas. Alternative energy source production seems to be going along just fine under the current system.
In sum, there are already adequate measures in place that facilitate technology transfer. Occasionally, a need to do something above and beyond comes up such as the COVID-19 pandemic that required massive amounts of money and effort to develop both a vaccine and therapeutics. The first of these was accomplished but the second, not so much. There is a continued need to look at lessons learned.
Great article as always. I personally don't think Bell Labs will make a comeback. I like to err on the realistic side and a lot of the tailwinds that enabled Bell Labs are gone.
I have only two contributions. First, is it possible that for the last thirty years we have been operating with a faulty model of innovation? These days, we glorify the young novice and the talented upcomer. What is first-principles thinking, after all, apart from a glorification of the advantages of the amateur? But it seems to me that we have confused one field (software, where that isn't a problem and could even be a boon) with everything else, where real expertise is necessary, the gap between research and development is lengthy, and the marginal cost of distribution is significant.
Of course, this ties in to venture capital too, which is uniquely suited to ICT technology and not so apposite for a lot of other industries. The world of bits and bytes has had quite a lot of innovation. The world of atoms and stuff (a funny contrast, since everything is still atoms and stuff), not so much.
The biggest R&D innovation for humanity happened two years ago (the vaccine), but the side effect was it pushed antivaxx, conspiracy fantasy, and fashoid politics into a competitive mainstream worldview.
The coronavirus vaccine was perhaps the most transformative scientific event since mass adoption of the internet (1990s) and the moon landing (1960s).
The vaccine rollout happened in a window of time after the election but before the insurrection and the Biden transition. That was the time when the vaccine was rolled out initially only to seniors and high health risk groups. Both Trump and Biden received the first of the vaccines; both were eligible because of age and national security concerns.
Operation Warp Speed was entirely under Trump's tenure. It could have been one of the few genuinely successful policy initiatives Trump could have claimed credit for, but the rollout to the general public happened in Biden's first months in office.
Before the vaccine, though, America was an utter shitshow and it would be a prelude to how slipshod our vaccination efforts turned out to be. And initially, vaccinations weren't necessarily polarized. Political party wasn't a predictor of vaccination or hesitancy. Among Republicans, age was a bigger determinant. Voters who had the lived memory of the polio scare of the 1950s were more likely to get the vaccine. Among Democrats, the lowest vaccination uptake was going to be among Blacks and Latinos. The hesitancy rate among Latinos was higher than expected.
Very true. I was deeply surprised at the reception those vaccines received. Yes, the media could have done a better job at painting a holistic story. But without that vaccine, we would still be in a full-blown pandemic. It was a remarkable achievement on all fronts and it illustrates the nadir our societies have reached: fake innovation is praised to the high heavens while real innovation is ignored or disparaged.
They had an outsized impact on R&D largely by exploiting AT&T's government-backed monopoly on communication systems. If you wanted to work on (say) fiber optic communications, there was only one place to work. If you talk to old timers they love to talk about how great it was that they got paid to sit around and have lunch with Nobel laureates, but they never seem to recognize how overall bad it was for society to have all those productive researchers locked into an organization that had no incentive to actually ship anything.
I worked for DOE-supported national labs in the 90s and early 2000s - Sandia and Lawrence Livermore. These were products of WW2. The former was managed by Bell Labs. These places were curious in part because you needed a security clearance to work there and in part because of their traditions. On one hand, when I worked at these places they were quite bureaucratic. On the other hand, scientists greatly resented government oversight of their work despite the fact that almost all the funding came from government. One would hear that once these places were run by scientists for scientists. Or that if you want nuclear weapons you need to give us whatever funding we need. I always presumed that during WW2 things could not have worked this way. Sometimes the powers that be just need to know when to get out of the way and let folks do their work.
There is a joke that went around about the 3 nuclear weapons labs -- Sandia, Livermore and Los Alamos. It says a lot. It goes like this. When DOE says jump, Sandia answers "How high, sir?" Los Alamos says "Up yours." And Livermore, in 3 months, puts together a $500 million jump management program.
It would be interesting to know how much research funding is being spent on research of persuasion, broadly. I suspect an increasing amount. It's cheaper than a physical lab of course but I see the persuasion ecosystem -- buy this, don't eat that, do this and you'll live an extra 25 years, ask your doctor about this -- ballooning in my lifetime, a good deal of it grant funded, when it comes to public health. But private food consumption persuasion is also big business and there are apps for everything. None of this will build us underwater cities or floating airports.
The US total marketing spend in 2021 was about $224 billion, or almost 1% of GDP. All marketing is essentially "funding being spent on research of persuasion".
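As a quick sanity check on that "almost 1%" figure (a sketch; the ~$23.3 trillion figure for 2021 U.S. nominal GDP is an approximation I'm adding, not something from the comment):

```python
# Marketing spend as a share of GDP, using the $224B figure quoted above.
marketing_spend_billion = 224
us_gdp_2021_billion = 23_300  # assumed approximate 2021 U.S. nominal GDP

share = marketing_spend_billion / us_gdp_2021_billion
print(f"~{share:.2%} of GDP")  # roughly 0.96%, i.e. "almost 1%"
```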
It has become glaringly obvious that the persuasion industry is perhaps the industry most destructive to humanity but still legal, a topic widely explored in science fiction.
The pharmaceutical industry additionally exploits the field of patent-law research and development, which, like persuasion, features a much better ROI than basic pharmaceutical research these days. The failure rate of drugs in the research pipeline and the increasingly enormous costs involved force the direction of actual pharmacological research toward slight but patentable modifications of an existing drug, which can then be marketed as the new, improved version of an old drug that is nearing the end of its patent life. No real improvement in clinical use, but no financial risk either.
Conversely, profits on generic drugs which are off patent are too low to allow companies any research at all.
I worked at Bell Labs from 1969 to 1973 and then at Xerox PARC from 1973 to 1991.
Both entities were derived from monopoly.
Bell Labs was part of the AT&T supply chain, which was later disassembled by anti-trust laws; and the world was in disarray from WWII.
Xerox failed to thrive due to managerial incompetence and a Corporate culture derived from the auto industry that could not convert PARC invention into business.
I worked at RCA labs for a summer in 1961. RCA failed because management made a lot of bad choices and bet the farm on them. GaAs vs Si; Capacitive Electronic Disk, and more....
Currently the US does a lot of research but no longer invests in developments that can take decades. That is now done by others outside of the US.
There is no economic basis for recreating the AT&T supply chain, of which Bell Labs was only one part.
“New” transmission lines should be a non-starter for multiple reasons. To name just a handful:
- stringing transmission lines through hundreds of miles of climate-change-induced desiccated forest when high-wind events are now common is a recipe for more wildfires;
- the amount of energy lost via transmission of electricity over long distances is a waste we can’t afford;
- improvement in battery technologies (silicon anode for 100% more energy density; Brakeflow technology to prevent fires when Li-ion batteries short-circuit) eliminates the problem of intermittency for solar and wind farms, as well as danger of fires, and electricity lost via long-distance transmission.
- thousands of miles of transmission lines is a waste of metals (copper, steel, etc.) that should be devoted to substations (wind- and solar-energy farms near major metropolitan areas).
As with the trend in many other business sectors, you move the product/service closer to areas of large populations. To do otherwise is a waste of money, personnel, metals in tight supply, etc. — all to purchase a public safety problem (fire) and more-expensive conventional energy.
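For what it's worth, the long-distance loss point can be put in rough numbers (a sketch using a deliberately simplified single-conductor model; the voltage, power level, and per-km resistance below are illustrative assumptions, not measured values):

```python
# Toy estimate of resistive (I^2 * R) losses on a long transmission line.
# Simplified single-conductor (HVDC-like) model; all parameters are assumptions.
power_w = 1e9            # 1 GW sent
voltage_v = 500e3        # 500 kV line
ohms_per_km = 0.01       # assumed effective line resistance per km
distance_km = 1000

resistance = ohms_per_km * distance_km
current = power_w / voltage_v
loss_w = current**2 * resistance
print(f"Loss over {distance_km} km: ~{loss_w/1e6:.0f} MW ({loss_w/power_w:.1%} of the power sent)")
# With these assumptions, ~4% per 1000 km; halving the distance roughly halves the loss,
# which is the arithmetic behind siting generation closer to load.
```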
>>The inflationary theory of cosmology and the invention of the transistor are both “basic research”, but their economic value is very very different.
It's important to note, though, that both sprang from Einstein's work on quantum mechanics and relativity, as well as the work of Volta and Tesla and others. We just don't get to have transistors without a bunch of people actually doing the more "useless" kind of basic research. So, if we're building a model of economic value of innovation, we have to not only factor in the proximate invention of a technology, but the more distal innovations as well.
That said, the distal innovations are still probably a relatively low proportion of the "Most-Optimal Gross National Investment Portfolio for Innovation" that we're essentially trying to formulate here. But the vast majority of academic research money isn't going into wacky topics like the evolution of duck genitalia; no, it's going to worthwhile projects.
IOW, Noah, I'm saying we don't really have a maldistribution problem within *topics* of basic research, we have a maldistribution between the more innovation-distal academic research *institutions* and the more valuable innovation-proximal private research *institutions*. And that maldistribution isn't driven by the federal government overallocating to the former and ignoring the latter, it's driven by the private sphere's abject failure to keep investing in their institutions. Which itself is, obviously, a result of an overly financialized economy and badly-counterproductive-to-the-point-of-bordering-on-evil philosophies like shareholder value. Moreover, even during the recent tech boom, all that ZIRP-fueled VC money was notoriously *not* going to your deep-innovation super-startups. It's clear that the private sector can't be trusted to invest in innovation. Like, ever.
I recognize that shareholder value and overfinancialization are complex problems that simply can't be solved with some handwaving and my own pretty rhetoric, but I think there's value in at least diagnosing the problem correctly. At the end of the day, I think we still agree that it's unlikely Bell would be replicated. But I think where we diverge is that I don't think we should necessarily focus on defining all innovation-proximal sectors as public utilities and then creating utility monopolies to serve them as the "One Cool Trick" that will magically make the next Bell for every industry. In fact, it kind of sounds like a fast-track to dystopia.
We're probably better served by working on the hard problems like shareholder value and overfinancialization, and then championing public policies that will simply incubate and empower deep-innovation super-startups to come up with the next innovative business model that, when combined with the right confluence of other circumstances, *will* create another generation of Bells.
Interesting piece. I would argue that another approach might be what the Obama administration tried to do through the SBA: the Regional Innovation Clusters. The idea was to link regional universities and anchor large firms with the regional small and medium-size business ecosystem around a specific set of technologies (e.g., polymers, drones, wood products, logistics) in which it had a comparative (often historical) advantage. This would improve the pipeline of talent, funnel basic research into the market, and facilitate collaborations and client acquisitions (often at different stages of the supply chain).
The evaluations done of this initiative were quite promising and this wasn't a costly endeavor, relative to other actions the Federal government can take (and has taken).
Thanks Noah. I started my career at Bell Labs and worked at Xerox PARC as a grad student. They were great places in their day. I remember one morning at PARC I was struggling with the coffee machine and a nice man came over and showed me how to use it. It turned out he was one of the top physicists in the world.
I gifted a subscription to a very smart friend. His comment was, "He [that's you] makes complex issues easy to understand."
An orthogonal argument for why the U.S. will likely never see a return of large, centralized for-profit research divisions is that they do not generate a sufficient ROI to shareholders. As cited above, the vast majority of the profits generated by the invention of the transistor did not return to the Bell system, but to downstream companies that successfully monetized the invention. Similarly with Xerox PARC, where Alan Kay invented the modern windowed system. It is a business-case issue: the Bell system was in the business of telephone calls, not transistors or the eventual development of microprocessors. Xerox was in the business of selling copiers, not mini or microcomputers. A further example is Kodak, which, in the business of selling film, couldn’t successfully monetize digital even though it was technologically ahead of the Japanese firms that did manufacture the cameras.
Which leads to a different question about the technological ecosystem. Would these trajectories have been different if the U.S. had a coherent national industrial policy, such as Japan’s MITI, in the latter half of the 20th century?
Many of my friends’ dads growing up were Bell Labs scientists; others worked for big pharma companies HQ’d in the same area of NJ.
Research funding for universities certainly made a huge difference and provided a career option for scientists interested in research.
However, it was the shift toward shareholder value and lower interest rates and inflation starting in the 1980s that really drove the change. Lots of scientists and R&D depts got laid off. Shareholders didn’t want to pay for expensive, risky bets. I worked at a chemical company where we had a 15 percent IRR threshold for projects back when inflation was 5 percent and interest rates were 8 percent. In the 90s, as rates and inflation fell even lower, our IRR threshold was bumped up closer to 20 percent. Completely nonsensical from a CAPM point of view unless we wanted to take massive risk (then again, equity index investors think they will get 15 percent returns in perpetuity while the risk-free rate is 2 percent - all part of the same insanity). Of course, my company didn’t want to take massive investment risks. Instead we invested less or went after the low-hanging fruit of cost cuts and outsourcing (and later, share repurchases).
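To make the CAPM point concrete (a sketch; the 1980s figures come from the comment above, while the ~2.5% inflation assumed for the 90s comparison is my own illustrative assumption):

```python
# Real (inflation-adjusted) hurdle rates implied by the nominal IRR thresholds,
# via the Fisher relation: (1 + nominal) = (1 + real) * (1 + inflation).
def real_rate(nominal, inflation):
    return (1 + nominal) / (1 + inflation) - 1

hurdle_1980s = real_rate(0.15, 0.05)    # 15% IRR threshold, 5% inflation
hurdle_1990s = real_rate(0.20, 0.025)   # 20% IRR threshold, ~2.5% inflation (assumed)

print(f"1980s real hurdle: ~{hurdle_1980s:.1%}")  # ~9.5%
print(f"1990s real hurdle: ~{hurdle_1990s:.1%}")  # ~17.1%
# The real hurdle nearly doubled even as risk-free rates fell - the opposite of
# what a CAPM-style cost of capital would suggest, which is the commenter's point.
```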
Fortunately, VC and PE in some fields have sponsored innovation. In pharma, the big payoff is still a new drug. In tech, it is largely figuring out how to steal and monetize personal information and sell more ads.
Google has invested a tonne, though - most of it wasted and wasteful. University research is mostly a waste as well and a fairly corrupt/incestuous system to boot.
Also remember that in the 1980s Japan was in fashion - Japan was about continual improvement and leveraging/implementing others' IP rather than creating from scratch.
Interesting analysis but I don't quite agree with it.
First problem - the decline of corporate R&D is over-stated. These graphs are showing relative shares, but all that means is that governments flooded universities with money which then spent it on expanding their quantity of output. Quality, however, is often missing. I've been in the tech industry for a long time and seen a lot of R&D happen, but the reliance on academic research is pretty minimal even in the AI space. Part of my job involved reading academic papers for a few years, but I eventually gave up because the ROI was zero. Lots of papers that made enticing sounding claims but when examined carefully had caveats that made them useless.
Second problem - the distinction between research and development. Very important for academics, not important at all for companies. When I think about the successful big tech projects I've seen, there was no clear line delineating the two. Experiences and results of development informed research, and research was often the same thing as development, just on more speculative features. The same people would do work that smelled like research for six months, then development of those research ideas into a product launch for another six months, then back again, with no change in job title or function. I myself have done such projects in the past.
Thirdly - the assumption that "papers published at academic conferences" is the same thing as output. Very few people in the corporate world care about publishing papers. It's just a distraction. Unless they came from academia and publishing has become a part of their identity, or they want to keep the door opening to returning, papers just aren't going to happen. The only peer reviews you care about are actual reviews by your actual peers, i.e. managers, because that's what determines if your R&D turns into promotions or bonuses. Google AI research is really an odd duck in this respect in that they seem to care a lot about getting published but that's really rare.
Obviously if you just plot by papers published then universities will look very productive and it'll seem like there's a pipeline or supply chain or something here because that's what universities are paid to do and what they optimize for. If you've actually been in real companies doing cutting edge work with tech, like I have, then it all looks like a shell game over the definitions of arbitrary words. Companies invest a ton into "R&D", just not in a way you can neatly sum and plot on a graph. Often they struggle to even measure it for their own purposes!
Finally - a nationalized energy company that can fund research? I know this is a left wing academic blog but come on. That would wreck the energy markets and deliver absolutely nothing in the way of usable research output. The west already dumps way too much money into public/academic energy R&D labs, as well as VCs pumping up private firms, and then Google also funded lots of energy R&D in the past too (see here: https://www.google.org/pdfs/google_brayton_summary.pdf ). There's very little to show for it because doing proper R&D that matters requires experience in the field, so it all gets done by oil companies, wind turbine manufacturers etc.
I mostly agree with you. Your points chime with some of the insights in Nassim Taleb's antifragile where he pointed out that Schumpeter had the wrong idea: the link between research and development is heavily overhyped. A lot of research has limited use and technological innovation often depends more on risk taking tinkerers than it does on stultifying research papers.
Two, I'm also quite skeptical about energy R&D: to me, the green energy revolution hinges on battery technology. And with all I've read on the topic, the odds don't look good. Apart from biotech, which merits legitimate excitement, everything I see is hype, hype, hype. At the very least, it seems like we are just at the beginning of several possible breakthroughs and nowhere near even halfway.
As part of a hardware startup of risk-taking tinkerers, amen.
Good luck to you.
Yep. And note that there's no reason a nationalized electricity utility would spend any money on batteries, nor have any relevant expertise (advanced manufacturing). Batteries don't change the quantity of electricity sold much, as the user still has to charge them. And massive battery farms for riding out cloudy windless days just doesn't really work, physically.
I agree with you. Honestly, so much is riding on a revolution in battery tech: smaller devices, electric vehicles, energy storage, et cetera. But the chances of it happening are so low. It's not so much of an exaggeration to say batteries are the greatest disappointment of the modern era.
The good news is that batteries have improved enormously over time. I think the sense of disenchantment that surrounds the space comes from a couple of places.
One is just one of unrealistic expectations. People said, "let's convert our grids and cars to 100% renewables" which was an absurd goal, and then blamed battery makers for not delivering the impossible. But if you look at things like smartphones or Teslas, they're possible partly due to big accumulated improvements in battery tech.
Another is the frequency with which universities announce what sound like massive battery breakthroughs, which then don't seem to have any real world impact. Again the problem is that academic research tends to get ignored on the ground because it's not useful. First questions for any new tech in the physical world are "how can we manufacture it at scale?" and "how can we do so at reasonable cost?". NSF grantees don't have factory or business experience so usually ignore these questions. Announcing breakthroughs is easy if you don't have to care about details like how to actually make them or whether anyone would buy them. Musk talks a lot about this problem - when Tesla started they thought designing the car was the hard part. Then they realized, no, designing the factory that makes the car is the hard part. The Tesla Gigafactories are partly battery factories for that reason.
From my observations, battery performance seems to be fluky; i.e. great improvements in performance often turn out to be irreproducible, perhaps due to some undetectable variation in one batch of chemicals.
I appreciate the intelligent reply, but 'Enormously' is a stretch. Lithium ion batteries have improved by a factor of two since they were introduced. All the other theoretical alternatives have serious problems. We had batteries in 1800 before we had the first commercial exploration of oil in 1859, before the commercialization of electricity, before cars. Batteries are one of the few technologies in the world whose modern incarnation won't surprise anyone who woke up from the dead two centuries ago.
I agree on the unrealistic expectations. Most people don't appreciate how the grid works. A lot of people assume it's like a passive reservoir of energy rather than an high-wire performance to match supply and demand every minute.
Oh, it would even be nice if that academic research had practical utility. Commercial utility can take care of itself down the road like it did with photovoltaics. But it doesn't. It's just people figuring out the thermodynamic limits of different theoretical combinations, doing some calculations, and saying we've shown it's possible to have serious power density improvements if we use x and y as base materials. It's rather tiring at this point.
You actually DON'T want academic research to have a practical focus. The valuable stuff is the theoretic stuff. In fact, we need a lot more theory to make batteries more effective. That's one of the big changes starting around 1870. Before then, you could do anything by just futzing around and tinkering. After then, it became increasingly important to have some idea of what you were doing which is why universities became increasingly important.
Batteries are much different from when I was a kid maybe 50 years ago. The kind you put in flashlights seemed to last for half an hour back then. Modern ones seem to last for days even with incandescent bulbs. When I used to cut batteries open, it was rather obvious how they worked from what I found inside. I don't have a clue with modern ones. The important stuff is in the coatings and junctions. The ones with a chip in them are beyond me.
If someone dropped by from the early 20th century, they'd be amazed to see an emerging fleet of electric cars competing with gasoline cars. They'd be amazed by our flashlight batteries, possibly as much as I am. The whole idea of a grid load balancing battery would have been science fiction. Even the dumb little backup battery I have for blackouts would have been amazing. It has an AC plug and lasts for hours.
Judging from the academic papers I've read - OK, sort of scanned but earnestly - we are all going to be amazed by what batteries can do in 20 or 50 years. There are all sorts of fascinating ideas being tried. I have no idea what technology those future batteries will use, but I am sure that we are far from some natural limit like the speed of light.
The gap between battery performance and what would be needed to simply replace fossil fuels is enormous.
As an example, the energy storage of the most powerful Tesla is equivalent to three gallons of gasoline.
Where are you getting your data? I work in energy infrastructure, and batteries have gotten three times as dense (energy/volume) and ~90% cheaper per kilowatt-hour since 2010.
This CleanTechnica article has the graphs, but I can assure you it comes through in practice as well.
https://cleantechnica.com/2020/02/19/bloombergnef-lithium-ion-battery-cell-densities-have-almost-tripled-since-2010/
Thank you for the correction. I was under the impression the improvement was more than 2x. What metric is that for by the way - energy density? I think a lot of the battery R&D done by corporate labs went into things like reducing charge time or better lifetimes.
I've been reading a history of the lithium battery and electric car. Modern battery technology started at Ford with the first battery that actually shoved ions into a solid. There was a paper published. Ford got a patent. This was in the 1960s when Ford had a ton of money and tax laws encouraged research rather than stock buybacks. The first cobalt electrode was developed at Oxford, though the UK atomic energy research group got the credit and royalties on it in the 1970s. The guy who invented it moved from MIT Lincoln Labs when Congress started imposing rules requiring research be useful.
That's all I've read so far. Lithium batteries were brought to market in the 1990s by Japanese video camera manufacturers. They worked out the kinks in their research labs. Those batteries also went into laptop computers. It was another 10 years before they were being used in electric cars. Now they are being used in Ford pickup trucks, so Ford is getting some payback for that research they funded 50+ years ago.
The story involves Bell Lab type industrial research centers, like the one at Ford, universities, research centers, camera manufacturers and probably a lot of other players. From a corporate point of view, the time scale can be daunting. Only governments, non-profits and, perhaps, closely held private corporations can afford the necessary time scale.
Consider solar energy. The photoelectric effect was known in the 19th century. Einstein explained its quantum effects in 1905. Selenium cells were used for light detection early in the 20th century, but modern solar cells are a spin off from integrated circuit technology. The technology was taken over the line by Chinese government's five year plans that increased the manufacturing base and drove down costs. Meanwhile, laboratories around the world are experimenting with amorphous solar power cells that could be much cheaper to make, multi-spectral cells that can exploit a broader range of light, direct to chemical reaction cells that can desalinate water or fix nitrogen and so on. I read papers about this stuff all the time. Which ones are going to define our future? Who the hell knows?
There are a lot of interesting research papers published in academia, and I've found many can be practical. However, there's just too large a gap between what academics do and what industry is doing. Scaling research is an underdeveloped field.
My experience was that it varies a lot by sub-field. Some are practical minded and clearly applicable but often you find the best papers are using grants from corporates anyway. Others are flooded with theory papers proposing things that may be literally impossible to implement. That's ignoring all the fields like social science where replicability is a concern!
Sure. The electrical engineering papers in IEEE are quite interesting research but often lack a direct path to a product. Social science research often has no discernable commercial value.
Hmmm. Underrated point. But I'm still skeptical about the quality of the bulk of that research.
Things to read - I don't know what you'd find convincing. Think about the big green success story of the past 15 years though. It's the huge drop in price of wind and solar energy. Go investigate how that was done and I think you'll find it happened due to constant iterative improvements, lots of small innovations that we don't tend to hear about, and it'll have been driven by on-site R&D efforts by manufacturers. For example a big driven of falling solar prices is better fab tech but the public sector doesn't really do fab research.
Re: your graph. Pretty much anything measured as "fraction of total GDP" is going to be an astronomical sum! But to repeat my point above, an assumption of your graph is that you can precisely measure R&D spend. In reality it's pretty hard to measure or classify work as research/development/other. Companies aren't that interested in making such precise allocations because it's an arbitrary distinction that rarely exists in the real world.
A big part of why Bell Labs hasn't been *visibly* recreated is that modern corp research and development is much better integrated with actual product development than in the past. Newer companies tend not to have siloed research divisions, or if they have one it's not very important. For example, a big reason Google was able to out-execute Microsoft is that Microsoft created a university-style research silo in the form of MSR and hired a bunch of ex-academics to staff it. MSR did better than most such departments at getting things into products but even so, their integration rate was tiny compared to what Google pulled off every day. As a consequence it's hard to figure out how much a company like Google or Apple spends on "research" because it's not a neatly defined bucket, and that's just one company, doesn't even include startups. Go look at their earnings reports and you'll notice that they might easily just report 100% of their engineering salaries spend on R&D, for example.
Solar panels were another thing developed by Bell Labs! I guess if you want to consider Bell Labs as existing due to government support then sure, but it's not actually a part of the government. Governments subsidized it but the heavy lifting here was all done by the private sector.
If you want to represent a normal sum of money use dollars. Nobody represents small amounts of money as a tiny fraction of GDP. The whole point is that GDP is enormous.
The first solar panel was invented in the late 1800s by a guy named Fritts actually.
The whole "Government or Private" argument is a bit ridiculous generally. It's both and always has been. Is the iPhone a commercial product? What makes it useful? (Touchscreen, GPS, Internet - all directly government funded)
Professor Scott Galloway had the figures in his "Welfare Queens" post that argued the U.S. government is the most successful venture capital firm in history.
https://www.profgalloway.com/welfare-queens/
The $190 billion in U.S. research funding in 2019 was apportioned by government making the plurality of funding, 43%, followed by business, 31%, nonprofits, 14%, and universities, 13%. Much of this is government transferring its 43% to collaborators in the other three sectors, so the work is symbiotic.
Excellent but some observations.
The transistor is not basic research like Cosmology. It was based on basic research done a half century earlier by Bohr, Plank and Einstein.
Some misses. The greatest industrial research center in America for 200 years was the Dupont Experimental station. And they very much capitalized!! Nylon. Gunpowder. Lycra. Synthetic fibers like polyester. As well as Agriculture.
Sarnoff labs... TV and Radio development but not the basic research... Marconi et al
GE Schenectady- plastics basic research for wire insulation led to polycarbonate and others. Electrical..well developments not really basic
Still Bell Labs was the premiere research organization of the world.
Bell labs invented fiber optics. In the late 50s. Used as a super top secret sensor deep underwater for detection of submarines. The laser also needed in that scheme they coinvented. But the basic research for both was based on basic optics of Newton and Maxwell, and Einstein.
https://www.pbs.org/transistor/science/info/qmsemi.html
Well, the foundation of quantum mechanics was established by Bohr and Plank. You perhaps underestimate Einstein contribution to quantum mechanics, and the quanta of light. He stood alone in that. It's called an LED ..a semiconductor today. He also developed quanta statistics and thermodynamics with Bose.
Wigner gets credit. Heisenberg none really. It was Einsteins work that led to and predicted stimulated emissions. Called the laser.
https://blogs.scientificamerican.com/observations/einstein-and-the-quantum/
I worked at AT&T Labs/Research for several years as a technical staff member, way back before I went to get my PhD in computer science. (For the uninitiated: AT&T Labs was a spinoff of Bell Labs that took a fraction of the computer science research from Bell, leaving the rest with Lucent.) Most of the people I worked with were old Bell Labs hands.
From my perspective, the main benefit of a large research lab was the huge concentration of brilliant researchers. I'm now a CS professor and love my lab, but I only have a small number of local collaborators. At AT&T you could have lunch with Peter Shor and Bjarne Stroustrup as well as a dozen other subject-matter experts. This produced an amazing number of good new ideas, which resulted in an impressive amount of research productivity. If you put a bunch of smart researchers in that environment I imagine they're going to thrive, as long as you lay the groundwork for (1) those people to work on problems that are relevant to the organization, and (2) you have a plan to deploy and monetize that work.
The real problem with AT&T is that by the time I worked there (circa 2000) it had absolutely zero ability to monetize most of the research it produced. Silicon Valley was in the grips of the first dot-com boom and firms on the West Coast were doing amazing things. AT&T was hidebound and constantly tossing good research on the floor. The few examples where they did attempt to monetize were even more embarrassing: I worked on a software project trying to sell music over the Internet, and it was obvious that we were completely outclassed in this effort (Apple eventually licensed some of the audio compression patents AT&T owned, and the rest is history). In principle we could have been a hotbed incubator for startups, but: (1) we were in New Jersey and not Silicon Valley, and (2) management didn't really know how to handle this. Even an "Intellectual Ventures"-style patent strategy might have been lucrative, but AT&T did not execute that. Thankfully.
This touches on something Noah misses. It's not just that universities do research on cosmology (there is lots of application-focused research, just not necessarily in the physics/astronomy department) or that they tend to be siloed.
The biggest difference in shifting research from corporate labs to universities is the people doing the research! You go from trained professionals to kids who don't have a clue what they're doing. I have a PhD and work in corporate R&D. I am much better at research now than I was during my PhD, where it often takes 3 years to figure out what the heck you're doing.
Yeah, but you also go from kids willing to put in 100 hrs a week for subsistence wages to adults who have an actual life they would like to support, and often wish to see their families to boot.
I know that's not what employers expect, but it seems that once people develop a lot of expertise and experience, they start to feel that way.
Some observations from someone who spent some time in academic research and industry (pharma regulatory affairs, not R&D just to be clear). A good summary of the academic/industrial nexus can be found in the symposium proceedings held at U Penn back in the early 1980s, "Partners in the Research Enterprise: University-Corporate Relations in Science and Technology." There were speakers from both sectors and it was really an all star cast with then Congressman Gore (pre-Internet invention), Kenneth Arrow, Bart Giamatti (pre-Baseball Commish), Jim Wyngaarden (NIH Director) and others. There were some very good case studies presented including an intriguing agreement between Du Pont and Notre Dame back in the late 1920s that goes to show these types of agreements are not new.
Bell Labs is a special case (and one of the things Noah did not mention was its groundbreaking work in computing; the 'C' programming language was developed there). Ma Bell had a monopoly and had both the research arm, Bell Labs, and the manufacturing arm, Western Electric. They could afford to sponsor a lot of basic research as they did not face the type of competition that other businesses were subject to.
Google has a huge market share with a cash flow that allows them to carry out a lot of fundamental research such as in AI as noted by Noah. They also do a bunch of other stuff that is not necessarily related to their core business. Along with other software companies such as Microsoft, this is perhaps the best version of the industrial research lab that Brad De Long writes about in his recently published book, 'Slouching Towards Utopia.'
This is not to say that all corporate R&D is robust. In the sector that I worked in, biopharma, there has been a shift away from corporate R&D to direct purchase via merger and acquisition. It was more cost effective for a pharma company to purchase a competitor that might have a blockbuster drug as the cash flow from sales quickly covers the acquisition cost (Pfizer were among the first to do this big time). The same thing continues to go on with the purchase of biotech assets that have originated in academia, spun out into a company that got early stage VC funding and then subsumed into a pharma company. Since the biotech revolution began in the 1970s, many companies (I don't have a good number but it's well over 1000) have been founded but only a small number remain independent (Amgen, Gilead, and Biogen), the rest either stumble along, get acquired or go out of business.
Yes, as Noah points out the energy sector is one area that might profit from a better R&D model but it's mainly in the distribution and storage areas. Alternative energy source production seems to be going along just fine under the current system.
In sum, there are already adequate measures in place that facilitate technology transfer. Occasionally, a need to do something above and beyond comes up such as the COVID-19 pandemic that required massive amounts of money and effort to develop both a vaccine and therapeutics. The first of these was accomplished but the second, not so much. There is a continued need to look at lessons learned.
Great article as always. I personally don't think Bell Labs will make a comeback. I like to err on the realistic side and a lot of the tailwinds that enabled Bell Labs are gone.
I have only two contributions. First, is it possible that for the last thirty years we have been operating with a faulty model of innovation? These days, we glorify the young novice and the talented upcomer. What is first-principles thinking, after all, apart from a glorification of the advantages of the amateur? But it seems to me that we have confused one field (software, where that isn't a problem and could even be a boon) with everything else, where real expertise is necessary, the gap between research and development is lengthy, and the marginal cost of distribution is significant.
Of course, this ties in to venture capital too, which is uniquely suited to ICT and not so apposite for a lot of other industries. The world of bits and bytes has had quite a lot of innovation. The world of atoms and stuff (a funny contrast, since everything is still atoms and stuff), not so much.
The biggest R&D innovation for humanity happened two years ago (the vaccine), but the side effect was it pushed antivaxx, conspiracy fantasy, and fashoid politics into a competitive mainstream worldview.
The coronavirus vaccine was perhaps the most transformative scientific event since mass adoption of the internet (1990s) and the moon landing (1960s).
The bizarre flipflop of political sides during the event was funny.
Before vaccine rollout:
Left: "You can't safely develop a vaccine in that short a time. I sure as hell won't try it."
Right: "We can do it! Donald Trump can make it happen!"
After vaccine rollout:
Right: "You can't safely develop a vaccine in that short a time. I sure as hell won't try it."
Left: "We did it! Science made it happen!"
It's more complicated than that, though.
The vaccine rollout happened in a window of time after the election but before the insurrection and the Biden transition. That was when the vaccine was initially rolled out only to seniors and high-health-risk groups. Both Trump and Biden received the first of the vaccines; both were eligible because of age and national security concerns.
Operation Warp Speed was entirely under Trump's tenure. It could have been one of the few genuinely successful policy initiatives Trump could have claimed credit for, but the rollout to the general public happened in Biden's first months in office.
Before the vaccine, though, America was an utter shitshow and it would be a prelude to how slipshod our vaccination efforts turned out to be. And initially, vaccinations weren't necessarily polarized. Political party wasn't a predictor of vaccination or hesitancy. Among Republicans, age was a bigger determinant. Voters who had the lived memory of the polio scare of the 1950s were more likely to get the vaccine. Among Democrats, the lowest vaccination uptake was going to be among Blacks and Latinos. The hesitancy rate among Latinos was higher than expected.
Very true. I was deeply surprised at the reception towards those vaccines. Yes, the media could have done a better job at painting a holistic story. But without that vaccine, we are still in a full-blown pandemic. It was a remarkable achievement on all fronts and it illustrates the nadir our societies have reached: fake innovation is praised to the high heavens while real innovation is ignored or disparaged.
Bell Labs kind of sucked.
They had an outsized impact on R&D largely by exploiting AT&T's government-backed monopoly on communication systems. If you wanted to work on (say) fiber optic communications, there was only one place to work. If you talk to old timers they love to talk about how great it was that they got paid to sit around and have lunch with Nobel laureates, but they never seem to recognize how overall bad it was for society to have all those productive researchers locked into an organization that had no incentive to actually ship anything.
I worked for DOE-supported national labs in the 90s and early 2000s -- Sandia and Lawrence Livermore. These were products of WW2. The former was managed by Bell Labs. These places were curious in part because you needed a security clearance to work there and in part because of their traditions. On one hand, when I worked at these places they were quite bureaucratic. On the other hand, scientists greatly resented government oversight of their work despite the fact that almost all the funding came from government. One would hear that once these places were run by scientists, for scientists. Or that if you want nuclear weapons, you need to give us whatever funding we need. I always presumed that during WW2 things could not have worked this way. Sometimes the powers that be just need to know when to get out of the way and let folks do their work.
There is a joke that went around about the 3 nuclear weapons labs -- Sandia, Livermore and Los Alamos. It says a lot. It goes like this: when DOE says jump, Sandia answers "How high, sir?", Los Alamos says "Up yours," and Livermore in 3 months puts together a $500 million jump management program.
It would be interesting to know how much research funding is being spent on research of persuasion, broadly. I suspect an increasing amount. It's cheaper than a physical lab of course but I see the persuasion ecosystem -- buy this, don't eat that, do this and you'll live an extra 25 years, ask your doctor about this -- ballooning in my lifetime, a good deal of it grant funded, when it comes to public health. But private food consumption persuasion is also big business and there are apps for everything. None of this will build us underwater cities or floating airports.
The US total marketing spend in 2021 was about 224 Billion USD, or almost 1% of GDP. All marketing is essentially "funding being spent on research of persuasion".
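That "almost 1%" checks out; here is a quick sanity check, assuming 2021 U.S. GDP of roughly $23 trillion (an assumed figure, not one cited in the thread):

```python
# Sanity check of "almost 1% of GDP", assuming ~$23T for 2021 U.S. GDP.
marketing_spend = 224e9  # USD, cited 2021 U.S. total marketing spend
us_gdp_2021 = 23e12      # USD, assumed approximate 2021 U.S. GDP

print(f"Marketing spend as share of GDP: {marketing_spend / us_gdp_2021:.2%}")
# -> roughly 0.97%
```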
It has become glaringly obvious that the persuasion industry is perhaps the industry most destructive to humanity but still legal, a topic widely explored in science fiction.
The pharmaceutical industry additionally exploits the field of patent-law research and development, which, like persuasion, features a much better ROI than basic pharmaceutical research these days. The failure rate of drugs in the research pipeline and the increasingly enormous costs involved force the direction of actual pharmacological research toward slight but patentable modifications to an existing drug, which can then be marketed as the new, improved version of an old drug nearing the end of its patent life. No real improvement in clinical use, but no financial risk either.
Conversely, profits on generic drugs which are off patent are too low to allow companies any research at all.
Standard Oil was perhaps the original Energy Bell.
I worked at Bell Labs from 1969 to 1973 and at Xerox PARC from 1973 to 1991.
Both entities were derived from monopoly.
Bell Labs was part of the AT&T supply chain, which was disassembled by antitrust laws, and the world was in disarray from WWII.
Xerox failed to thrive due to managerial incompetence and a corporate culture, derived from the auto industry, that could not convert PARC inventions into business.
I worked at RCA Labs for a summer in 1961. RCA failed because management made a lot of bad choices and bet the farm on them: GaAs vs. Si, the Capacitance Electronic Disc, and more.
Currently the US does a lot of research but no longer invests in developments that can take decades. That is now done by others outside of the US.
There is no economic basis for recreating the AT&T supply chain, of which Bell Labs was only one part.
“New” transmission lines should be a non-starter for multiple reasons. To name just a handful:
- stringing transmission lines across hundreds of miles of climate-change-desiccated forest, when high-wind events are now common, is a recipe for more wildfires.
- the amount of energy lost via transmission of electricity over long distances is a waste we can’t afford.
- improvement in battery technologies (silicon anode for 100% more energy density; Brakeflow technology to prevent fires when Li-ion batteries short-circuit) eliminates the problem of intermittency for solar and wind farms, as well as danger of fires, and electricity lost via long-distance transmission.
- thousands of miles of transmission lines is a waste of metals (copper, steel, etc.) that should be devoted to substations (wind- and solar-energy farms near major metropolitan areas).
As with the trend in many other business sectors, you move the product/service closer to areas of large populations. To do otherwise is a waste of money, personnel, metals in tight supply, etc. — all to purchase a public safety problem (fire) and more-expensive conventional energy.
>>The inflationary theory of cosmology and the invention of the transistor are both “basic research”, but their economic value is very very different.
It's important to note, though, that both sprang from Einstein's work on quantum mechanics and relativity, as well as the work of Volta and Tesla and others. We just don't get to have transistors without a bunch of people actually doing the more "useless" kind of basic research. So, if we're building a model of economic value of innovation, we have to not only factor in the proximate invention of a technology, but the more distal innovations as well.
That said, the distal innovations are still probably a relatively low proportion of the "Most-Optimal Gross National Investment Portfolio for Innovation" that we're essentially trying to formulate here. But the vast majority of academic research money isn't going into wacky topics like the evolution of duck genitalia; no, it's going to worthwhile projects.
IOW, Noah, I'm saying we don't really have a maldistribution problem within *topics* of basic research, we have a maldistribution between the more innovation-distal academic research *institutions* and the more valuable innovation-proximal private research *institutions*. And that maldistribution isn't driven by the federal government overallocating to the former and ignoring the latter, it's driven by the private sphere's abject failure to keep investing in their institutions. Which itself is, obviously, a result of an overly financialized economy and badly-counterproductive-to-the-point-of-bordering-on-evil philosophies like shareholder value. Moreover, even during the recent tech boom, all that ZIRP-fueled VC money was notoriously *not* going to your deep-innovation super-startups. It's clear that the private sector can't be trusted to invest in innovation. Like, ever.
I recognize that shareholder value and overfinancialization are complex problems that simply can't be solved with some handwaving and my own pretty rhetoric, but I think there's value in at least diagnosing the problem correctly. At the end of the day, I think we still agree that it's unlikely Bell would be replicated. But I think where we diverge is that I don't think we should necessarily focus on defining all innovation-proximal sectors as public utilities and then creating utility monopolies to serve them as the "One Cool Trick" that will magically make the next Bell for every industry. In fact, it kind of sounds like a fast-track to dystopia.
We're probably better served by working on the hard problems like shareholder value and overfinancialization, and then championing public policies that will simply incubate and empower deep-innovation super-startups to come up with the next innovative business model that, when combined with the right confluence of other circumstances, *will* create another generation of Bells.
Interesting piece. I would argue that another approach might be what the Obama administration tried to do through the SBA: the Regional Innovation Clusters. The idea was to link regional universities and anchor large firms with the regional small and medium-size business ecosystem around a specific set of technologies (e.g., polymers, drones, wood products, logistics) in which it had a comparative (often historical) advantage. This would improve the pipeline of talent, funnel basic research into the market, and facilitate collaborations and client acquisitions (often at different stages of the supply chain).
The evaluations done of this initiative were quite promising and this wasn't a costly endeavor, relative to other actions the Federal government can take (and has taken).
Thanks Noah. I started my career at Bell Labs and worked at Xerox PARC as a grad student. They were great places in their day. I remember one morning at PARC I was struggling with the coffee machine and a nice man came over and showed me how to use it. It turned out he was one of the top physicists in the world.
I gifted a subscription to a very smart friend. His comment was, "He [that's you] makes complex issues easy to understand. "
An orthogonal argument for why the U.S. will likely never see a return of large, centralized for-profit research divisions is that they do not generate a sufficient ROI to shareholders. As cited above, the vast majority of the profits generated by the invention of the transistor did not return to the Bell system, but to downstream companies that successfully monetized the inventions. Similarly with Xerox PARC, where Alan Kay invented the modern windowed system. It is a business-case issue: the Bell system was in the business of telephone calls, not transistors or the eventual development of microprocessors. Xerox was in the business of selling copiers, not mini- or microcomputers. A further example is Kodak, which, in the business of selling film, couldn’t successfully monetize digital even though it was technologically ahead of Japan, which did manufacture the cameras.
Which leads to a different question about the technological ecosystem. Would these trajectories have been different if the U.S. had a coherent national industrial policy, such as Japan’s MITI, in the latter half of the 20th century?
Many of my friends’ dads growing up were Bell Labs scientists; others worked for big pharma companies HQ’d in the same area of NJ.
Research funding for universities certainly made a huge difference and provided a career option for scientists interested in research.
However, it was the move toward shareholder value and lower interest rates and inflation starting in the 1980s that really drove the shift. Lots of scientists and R&D depts got laid off. Shareholders didn’t want to pay for expensive, risky bets. I worked at a chemical company where we had a 15 percent IRR threshold for projects back when inflation was 5 percent and interest rates were 8 percent. In the 90s, as rates and inflation fell even lower, our IRR threshold was bumped up closer to 20 percent. Completely nonsensical from a CAPM point of view unless we wanted to take massive risk (then again, equity index investors think they will get 15 percent returns in perpetuity while the risk-free rate is 2 percent - all part of the same insanity). Of course, my company didn’t want to take massive investment risks. Instead we invested less or went after the low-hanging fruits of cost cuts and outsourcing (and later, share repurchases).
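To make the "nonsensical from a CAPM point of view" remark concrete, here is a minimal sketch using the commenter's own numbers plus an assumed beta of 1.0 and equity risk premium of 5% (the "late 1990s" rate and inflation values are illustrative assumptions, not figures from the comment):

```python
# Sketch: CAPM-required returns fell as the risk-free rate fell,
# yet the company's IRR hurdle (and especially the real hurdle) rose.
def capm_required_return(risk_free, beta=1.0, equity_risk_premium=0.05):
    # Assumed beta and equity risk premium; both are illustrative.
    return risk_free + beta * equity_risk_premium

def real_rate(nominal, inflation):
    return (1 + nominal) / (1 + inflation) - 1

scenarios = [
    # (label, risk-free rate, inflation, IRR hurdle actually used)
    ("1980s", 0.08, 0.05, 0.15),
    ("late 1990s", 0.05, 0.02, 0.20),  # rates/inflation here are assumed
]

for label, rf, infl, hurdle in scenarios:
    capm = capm_required_return(rf)
    print(f"{label}: CAPM says ~{capm:.0%}, hurdle used {hurdle:.0%}, "
          f"real hurdle ~{real_rate(hurdle, infl):.1%}")
# 1980s:      CAPM ~13%, hurdle 15%, real hurdle ~9.5%
# late 1990s: CAPM ~10%, hurdle 20%, real hurdle ~17.6%
```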
Fortunately, VC and PE in some fields have sponsored innovation. In pharma, the big payoff is still a new drug. In tech, it is largely figuring out how to steal and monetize personal information and sell more ads.
Google has invested a tonne, though - most of it wasted and wasteful. University research is mostly a waste as well and a fairly corrupt/incestuous system to boot.
Also remember that in the 1980s Japan was in fashion - Japan was about continual improvement and leveraging/implementing others' IP rather than creating from scratch.