Why do “Business Development Companies” exist? Why don't businesses just get their loans from banks? I'll guess it's because BDCs will make loans that banks won't. But if that's it, why do banks make loans to companies whose own lending is, by the banks' standards, too risky?
BDCs aren't really riskier than anything else; they're really just a tax wrapper. For a big Limited Partner (“LP”), e.g. an insurance company, pension fund, or SWF, depending on your tax status and jurisdiction, a BDC might be the most tax-efficient structure for accessing the private debt market. There are a few other structures out there, but the effect is the same: the LP gives money (in a fund structure, BDC or otherwise) to the General Partner (“GP”, e.g. Apollo, Ares, Blackstone, and a bunch of others), and the GP goes and issues debt to data centers, small or large businesses, real estate, etc.
The one thing I think Noah didn’t really point out is that the above structures are mostly financed with equity (as opposed to 2008, when the loans were financed with bank balance-sheet leverage), or at least show us some statistics to that effect. I haven’t seen statistics on this in a while (so I might be mistaken), but post-Dodd-Frank we made it really expensive for banks to hold debt on their balance sheets, which is what has driven the growth of the private credit market, and despite that growth, leverage in our overall financial system has come down.
Is it financed with equity, or is it just that the banks don't end up holding much of the paper (and the risk)?
As I understand it, unlike in 2008, banks aren't holding the risk this time; it's mostly insurance companies, pension funds, and other non-bank institutional investors. So instead of bank balance-sheet problems, we'd end up with insurance company and pension fund balance-sheet problems.
From a macro/systemic standpoint, that means it plays out differently than if banks were also failing. This time the culprits would likely be counterparty risk, unfunded pensions, failures of state guarantee funds, insurers cutting back underwriting (especially for higher-risk lines/markets), etc.
I just barely understand this stuff, so maybe I've got it all wrong.
The pension funds that hold this have 50%-to-100%-ish equity backing the loans. It’s hard to see a situation (unlike with banks) where they would be forced to fire-sale assets.
Banks still do originate debt and then syndicate it out to pension funds, insurance companies, etc. But I believe (I haven't seen the stats in a while) there is a lot more equity in the system overall than in 2008, when bank balance sheets were ~30x levered; now they are about half that or less.
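To put rough numbers on why those leverage ratios matter (purely illustrative figures, not sourced): the asset-value decline that wipes out a lender's equity is just one over its leverage ratio, which is why 30x vs. 15x vs. 2x is such a big deal.

```python
# Why leverage ratios matter: the fractional fall in asset value that
# exhausts equity is 1 / leverage. All figures below are illustrative.

def wipeout_loss(leverage_ratio):
    """Fractional asset-value decline that wipes out all equity."""
    return 1.0 / leverage_ratio

print(f"{wipeout_loss(30):.1%}")  # ~30x-levered 2008-era bank: 3.3%
print(f"{wipeout_loss(15):.1%}")  # ~15x-levered bank today: 6.7%
print(f"{wipeout_loss(2):.1%}")   # fund with 50% equity backing: 50.0%
```

So a fund financed half with equity can absorb a 50% markdown before its backers are wiped out, versus about a 3% markdown for a 2008-style bank.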
Insurance companies went through their own Dodd-Frank-esque regulation as well (Solvency II), which made them quite a bit safer. And bear in mind that insurance companies are much better places to do this kind of lending than banks, because the duration of their liabilities is much longer (think of long-term life insurance policies and annuities) than that of banks, which have extremely short-duration liabilities (like SVB).
Your typical life insurance company has 8-10% in its Alts bucket, less for P&C. Maybe half of that 8-10% is private debt, with the other half being private equity. The remaining ~90% is investment-grade bonds. Insurance regulation isn’t perfect, but for the most part the balance sheets are in good shape.
I should say the insurance regulation that most gives me heartburn is the move to domicile, or reinsure, liabilities in Bermuda. Bermuda is good, but it has somewhat more lenient treatment of asset-backed securities, including CLOs. The US and Europe give reciprocity to Bermuda regulation, so you see a lot of folks moving there. It maybe juices ROE for the insurance companies by ~100bps or so.
What will happen to the debt portion of this AI 'boom' when one of these tech giants has a major breakthrough that gives it clear market dominance? It seems like a repeat of the dot-com boom, where only three titans remained standing on the battlefield.
Is the point, as Noah indicates, that if most of the AI funding is non-debt, then only the share prices of the losers will likely take a hit, but that if they go more heavily into borrowing, victory for one will severely damage the others AND the credit market?
Is it moral hazard? If the downside scenario involves systemic risk to the economy (because everyone else is doing it), do they calculate that they will be bailed out to some extent if things go wrong?
The other question is whether, as time goes on, the applications of generative AI are sufficiently compelling to generate the revenue to pay back all this capital expenditure.
Clearly, there will be revenue-generating applications. I'm not yet convinced they are anywhere near big enough in the medium term to justify the trillions being spent.
I think it could parallel the telecom boom, where the raw infrastructure (e.g. AI engines) becomes somewhat commoditized and low-margin because all the engines have to compete hard for the applications that ride on top of them.
It's also unclear how much of AI use/revenue will come from people directly interfacing with AIs versus third-party applications linking to AI engines through APIs.
The thing about the telecom boom is that everybody at the time thought it was the telecom companies that would get big, when ultimately they ended up as commoditized infrastructure providers to the firms that really made the big bucks (today's big tech).
So the question is whether we could see a similar development this time, too. Big tech spending big, but others reaping the rewards. Seems still a remote possibility right now, but not impossible.
>Seems still a remote possibility right now, but not impossible.
Allow me to play devil's advocate and argue that commoditization is actually the default for LLMs.
It's already easy to write your app using a tool like litellm or OpenRouter so that switching between different LLM providers doesn't require any code rewrite.
Take this a little further: imagine that as the industry matures, companies will *automatically* switch their preferred LLM provider on a weekly or even daily basis, depending on which provider offers acceptable performance at the lowest cost. (Note that evals let you auto-benchmark what constitutes "acceptable performance", practically for free.)
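As a toy sketch of what that automated switching might look like (the provider names, prices, and eval scores below are all made up), the core logic is just "cheapest provider that clears the quality bar":

```python
# Toy sketch of cost-based provider selection. Provider names, prices,
# and eval scores are hypothetical; in practice the scores would come
# from re-running your own eval suite against each provider.

def pick_provider(providers, min_score):
    """providers: dict of name -> (price_per_mtok, eval_score).
    Returns the cheapest provider whose eval score clears min_score."""
    acceptable = {n: price for n, (price, score) in providers.items()
                  if score >= min_score}
    if not acceptable:
        raise ValueError("no provider meets the quality bar")
    return min(acceptable, key=acceptable.get)

providers = {
    "provider_a": (15.00, 0.92),  # premium model
    "provider_b": (0.60, 0.88),   # cheap model, nearly as good
    "provider_c": (0.25, 0.71),   # cheapest, fails the bar
}

print(pick_provider(providers, min_score=0.85))  # -> provider_b
```

The actual calls would then go through a router like litellm or OpenRouter, so flipping the winner each week is a one-line config change rather than a code rewrite.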
Along *certain* dimensions, I am tempted to argue that there has *never* been an industry as easily commoditized as the LLM industry. Switching costs could become absurdly low. At least for real-life physical commodities, quality evaluations can be nontrivial, and it's cheaper to deal with the commodity provider whose physical location is closest to yours!
This argument could fail if certain LLM providers have "secret sauce", in the sense that their LLM can do tricks which competing LLMs cannot. But such "secret sauce" hasn't proven particularly durable thus far. And how many queries will actually require the "secret sauce" at any given time? In the long run, I expect to see companies automatically evaluate query complexity using tools like Flesch-Kincaid readability, then send the query to the cheap "commoditized" LLM vs the expensive "secret sauce" LLM depending on assessed complexity. And the net effect of progress in the field, over time, will be to shift more and more queries over to the cheap "commoditized" LLM.
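Here's a rough sketch of that kind of complexity-based routing (the threshold and model labels are hypothetical, and the syllable counter is a crude vowel-group heuristic rather than a real dictionary lookup, so the grades are approximate):

```python
# Route queries by estimated reading complexity, using the
# Flesch-Kincaid grade-level formula:
#   0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
# Model names and the threshold are illustrative only.

import re

def count_syllables(word):
    # Crude heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59

def route(query, threshold=10.0):
    return "frontier-model" if fk_grade(query) > threshold else "commodity-model"

print(route("What time is it?"))  # simple query -> commodity-model
```

Real systems would likely use a small classifier model rather than a readability score, but the economics are the same: progress keeps pushing more traffic to the cheap tier.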
So yeah, I think there's a solid case that this plays out very similarly to telecom. AI investors could easily lose their shirts.
Look, it's hard to know, but it feels like we're still at the stage of the personal computer revolution of the early 1980s where people in their garage were building things in a few weekends, except that there is no competitive moat in a bit of prompt engineering and perhaps some model fine-tuning.
The massive fortunes in tech have been built on vendor lock-in and network effects, and there haven't been any obvious examples of either thus far.
The whole AI boom is built on vendor lock-in. That's why Nvidia's market cap is now $4.2 trillion: they have some competition, but it's difficult to switch. TSMC has a near-monopoly on the manufacturing of AI hardware, and many of its suppliers also have monopolies. Three companies (Amazon, Google, Microsoft) own almost all the GPUs that companies use to run their models. And since so many companies are already locked into the Microsoft ecosystem, it's easier for them to use Microsoft's AI on all their Microsoft Word, Excel, PowerPoint, SharePoint, & Teams data.
Yes, but Nvidia are designing and selling shovels, TSMC are contract shovel manufacturers, and Amazon, Google and Microsoft are buying the shovels. What’s not clear is whether anyone is finding enough gold to justify all this expenditure on shovels in the first place.
I think it’s current, not credit, that will hamstring the AI data center buildout. When residential electrical rates spike and/or utilities are forced into inducing rolling blackouts because of data center demand, push will come to shove. And this is to say nothing of annually wasting billions of gallons of fresh water. Not a good scenario for future years of climate change-induced record heatwaves and extreme/extended droughts.
AI is one thing, but we live on a physical planet. We simply can’t afford to exponentially increase electricity and fresh-water consumption. The Big Tech behemoths and small business and the public are on a collision course. If and when push comes to shove, who pays what for electricity, and who gets to keep the lights on and air-conditioner running?
China overbuilt data centers, which it now hopes to turn into vendors. When the world’s attention is focused on the real estate crisis in China, it’s rolling out incredible solar power systems at scale.
Well, many of the data centers will use their own nuclear power plants (including restarting TMI in PA), and luckily the amount of power that can be extracted from uranium or thorium is so great that we are nowhere near stressing the planetary resources. Solar power doesn’t require continuous use of planetary resources either.
As for fresh water, that is not a requirement for data centers. They require water for cooling, but reclaimed water works fine, and in any case the cooling merely evaporates the water into vapor; it doesn’t use it up or make it dirty.
Water that has evaporated is lost to the cooling system and lost for other uses (agriculture, domestic or industrial). It may come back to earth as rain, elsewhere on land or, more likely, in the ocean. But it is "consumed" so far as use by humans is concerned.
As I understand it, data centers in areas with water scarcity can/are using closed loop systems that don't rely on evaporation.
The water issue is more regulatory than anything else. Solutions exist and they're not even that expensive. OTOH, if the DC operator isn't forced to, they'll go with cheaper evaporative cooling solution.
That said, water for cooling thermal power plants is another matter.
I just read an article about Texas data centers causing water restrictions for residents. The water is a huge issue, and they are building data centers in a lot of drought-prone areas in the South. Sorry, I can’t remember where it was I read it.
Which continues to baffle me, because it’s much cheaper to build fiber infrastructure to carry the network to where the power is than it is to build electrical infrastructure (let alone pay the ridiculous cost of new nuclear) to carry the power to where the data centers are currently being built. There are a few issues, like latency, but they shouldn’t be driving this kind of development.
The water use by the data centers is actually pretty small. I’ve seen projections that in about 5 years it might be comparable to the amount of water that humans drink. But that’s still pretty small compared to household use, which is small compared to industrial use, which is tiny compared to agricultural use.
It’s worth paying attention to 1-2% increases of water use, but it’s not worth getting angry about, since it can easily be fixed by raising prices so that whatever uses aren’t valuable get curtailed to send more water to the more valuable uses.
We've heard the 'scarcity is really a pricing problem' argument before - applied to what was then called peak oil. The argument fails when applied to a public good necessary for biological life. Who gets to decide which uses aren't valuable? In a market economy, those with ability to spend. Then the poor won't be able to afford water, or food. In that logic, it is then their lives that are not valuable.
If water were to skyrocket in price, by a factor of 5, poor people would still be able to afford water, though it might pinch some other parts of their budget a bit.
It's not that big of a deal if poor people who choose to live in the desert can't afford to use thousands of gallons per week watering their lawn. If they want to have a lawn, they can move away from the desert. We also don't need to grow alfalfa in the desert. You can buy your feed for pigs & cows from parts of the country that get enough rain. None of this is a crisis.
First, it’s generally not a good idea to think you know how someone feels about an issue. Confusing, deliberately or not, disappointment and skepticism with “anger” is a rhetorical prop. Second, why would anybody trust projections of water usage and waste from biased sources? For example, when Google wanted to build a server farm in my state, it required that its power-consumption figures be kept from the public. Increase water rates? As if poor people don’t already have enough trouble paying utility bills. Some areas near data centers have already lost access to potable water. xAI’s diesel generators are making the air unbreathable in some poor neighborhoods.
You didn’t identify your source for projections. I cite my sources that use empirical data. I’m skeptical of “projections.” I don’t have time to do somebody else’s due diligence. There is no shortage of studies and white papers. I’m out of time to spend on vague projections. Best of luck.
With the railroads, it was two or three times. With the telecom companies it was once. The tech companies haven’t done it yet, so maybe it will be once.
This is the knee-jerk reaction, as if deploying more nuclear power (and nuclear proliferation) is the magic wand that will solve the energy issue. Nuclear history is only a bit more than 75 years old, and there have been plenty of accidents and irresponsible storage of nuclear waste. Hanford’s buried nuclear waste is leaching toward the Columbia River with nothing to stop it. If I used 75 years as a control to extrapolate the guaranteed safety of nuclear waste (half-life of 225,000 years), I’d get laughed out of the lab. No, wait. Because it’s nuclear waste, it gets a pass. The Earth’s crust, as we know, is fragile and under tremendous pressure at tectonic-plate subduction zones. The Earth annually experiences 5,000 earthquakes. Please show me a map and timeline covering 225,000 years indicating where earthquakes will occur and their intensity. Perhaps the nuclear power industry has Nostradamus on the payroll. Look at how many nuclear power plants are built right on the “Pacific Ring of Fire.” Fukushima is one of many. The power plant near San Luis Obispo has had numerous safety problems and sits near the San Andreas Fault, one of the fastest-moving geological faults in the world. So, when it came up for renewal of its operating license, after testimony documenting its history of close calls and safety violations, what was its response? It applied for another 25-year operating license.
Very smart people want to rush headlong into building more nuclear power plants. The big-tech behemoths of Silicon Valley want to build modular nuclear power units on-site at data centers. There is no way in hell these companies should be entrusted with nuclear materials. Creating multiple soft targets for future terrorists is insane.
We’re still struggling to clean up past nuclear-waste mistakes. Last week, Fukushima had to stop pumping “treated” nuclear waste into the ocean because of a tsunami warning. Fukushima is still “cleaning up” after the previous tsunami. Pumping “treated” nuclear waste into the ocean is a good example of the cynical environmental-lab phrase, “the solution to pollution is dilution”:
“ . . . the Department of Energy began cleaning up the site in 1996. But the process has dragged on well past its initially projected completion date. Officials now say that cleanup activities will be complete by 2065.”
<insert power source> history is only a bit more than 75 years, and there have been plenty of accidents and irresponsible storage of <insert power source> waste.
Seismologists also study the Ring of Fire, as more than 80% of earthquakes with a magnitude of 8.0 or higher have occurred there. And other scientists approve of building nuclear power plants on the Pacific Ring of Fire. Again, very smart people deciding what the margin of safety should be for generations to come.
If the power grid is unable to supply sufficient electricity, which it currently cannot, will the AI productivity promises fail and the loans come due without the ability to pay them off? By hamstringing the development of renewable energy sources, which can come online much faster than gas-fired electrical plants, Trump may have planted the seeds of the next economic collapse.
There was an Odd Lots podcast episode on private credit a while back (maybe last summer) and the thing I took away from it was that private credit is crazy complicated and there seems to be a lot of gamesmanship in the dealmaking around things like seniority, covenants, ability to restructure, etc. It struck me that analyzing a deal seemed to require as many lawyers as finance people (maybe more lawyers).
I came away from that episode viewing private credit as sort of the credit-market cousin of SPACs, except instead of retail, this time the marks are the ultra-rich and institutional investors.
$150 billion of banking industry exposure to Private Credit Funds is a fraction of the assets on bank balance sheets. Not of a size, even if it all were uncollectible, to cause a banking crisis. But given Private Credit’s role in providing new term credit to small and medium-sized businesses, and the banks’ slow retreat from the same, the health of private credit is important to credit intermediation away from the commanding heights of the US domestic economy. Definitely worth watching for strains in that sector.
I've read arguments that LLMs are the first step on the road to AGI and investing in them heavily will get us to it soon. I've also read arguments that LLMs are probably a dead end that will plateau in usefulness soon. Between those two ends of the spectrum there are more moderate arguments that it will be an important technology, but not lead to AGI.
I am not learned enough on this topic to have a strong opinion about which argument is closer to being correct. It seems like if the first one is right, the data centers won't cause a crash, but if the second one is, they almost certainly will. Even if the first argument is right and LLMs do lead to AGI, there could still be a bubble and a crash if it takes somewhat longer than expected. It certainly seems like a risky investment, although I suppose that if it pays off, it will really pay off.
I don’t think LLMs will directly lead to AGI. Even though they are very powerful now, at the end of the day they are still just predicting the next token in a sequence. This idea of “next-token prediction” has been around for a long time. Transformers made it work at scale, but the core idea didn’t really change. LLMs can look like they are thinking or planning, but that’s mostly because they’ve seen a lot of text and learned patterns from it. They don’t actually have goals, or understand what they’re saying, or think ahead like humans do. They don’t build a model of the world in their head or reason about consequences.
One can argue that humans are also “prediction machines,” and I get that point because our brains are always trying to guess what’s coming next. But I feel like human thinking is still very different. We can plan, adapt, and understand things deeply, not just copy patterns from data. LLMs are getting more advanced with tools, memory, and retrieval, so maybe they will get closer to AGI over time. But I don’t think next-token prediction alone is enough for true intelligence. It’s an important part, but not the full picture.
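For what it's worth, "next-token prediction" in its most stripped-down form looks something like this toy bigram model; real LLMs replace the frequency table with a transformer trained on a huge corpus, but the training objective has the same shape (given context, predict the next token):

```python
# Toy illustration of next-token prediction: a bigram model that
# predicts the most frequent follower of the previous word. This is a
# deliberately tiny caricature of what LLMs do at vastly larger scale.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, what tends to come next.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (follows "the" twice in the corpus)
```

Nothing in this table "understands" cats or mats, which is roughly the intuition behind the skeptical view; the open question is whether scale plus extra training layers changes that in kind or only in degree.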
The current chatbot assistants only have an LLM as one layer. The next word prediction functions to give it the ability to generate sentences on any topic in any context. But it also has a separate layer of training that tells it to focus on sentences that humans find helpful in an assistant (which is why everyone paid so much more attention in December 2022, when ChatGPT was released, than they did in June 2020, when GPT 3 was released). And then in the past year they’ve started a whole new layer of training where the systems practice using their words to find solutions to reasoning problems - they’re no longer just predicting what a person would say, but are using words in patterns that have been found to effectively lead to solutions to problems.
I agree with you that modern language models are not just doing "next word prediction" anymore. In the paper, they show that Claude plans ahead when writing poetry. For example, it chooses the rhyme word, like "rabbit", first, then builds the sentence to reach that word. Also, when doing math like 36 + 59, Claude uses different paths together: one to guess the answer roughly, and another to get the exact digits. It’s not just copying training data; it’s really solving the task in its own way. But still, I don’t think this is AGI-like human intelligence. These models don’t have real goals, no experience of the real world, and no true understanding. Even when it looks like the model is planning or thinking, it is just very smart pattern prediction, trained on huge text datasets. It is not thinking like humans, with memory, emotions, or awareness.
At the same time, this becomes a bit philosophical. If humans are also just learning and repeating complex patterns from life, then maybe the line between machine and human intelligence is not so clear. But for now, I believe there is still a big difference. These models are powerful and impressive, but they are not real minds or conscious systems.
Yeah, it's absolutely not AGI, and I don't mean to claim that it's the same as human intelligence. All I mean to say is that it's using several strategies, and learning how to use them effectively towards its goals, rather than just copying human text. I suspect that at least some of the things going on in human minds work roughly similarly to what's going on in each of the types of systems that are in the chatbots at this point. I also conjecture that human intelligence is just a bunch more of these types of things - nothing super-special that couldn't be achieved over a few decades of these sorts of developments.
But I also don't think that working like humans is going to be sufficient for what people are imagining out of AGI - I don't think humans are truly general intelligences either, and I suspect that general intelligence is actually impossible, and that all intelligences are just specialized in different ways.
If interested, Gary Marcus has some very good analysis/content related to limitations of LLMs/GenAI, and the need for other AI techniques to reach AGI : https://substack.com/@garymarcus/posts
You should actually look at the numbers on that site. It looks like the biggest user there is Google, at about 5 billion gallons of water per year.
For comparison, a corn farm uses about 1,000 acre-feet of water per square mile. An acre-foot is about 300,000 gallons. So Google is using about as much water as 15 square miles of corn.
That’s obviously significant, though nowhere near the water use of a city like Los Angeles or Chicago; it’s closer to the household use of a city of a bit over 100,000 people. Either way, it’s still pretty small on the scale of agricultural users of water, which is where most water goes.
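If it helps, the arithmetic above roughly checks out (using the standard figure of about 325,851 gallons per acre-foot rather than the rounded 300,000; the 5-billion-gallon and 1,000-acre-feet figures are the ones cited in the comment):

```python
# Sanity check on the corn-farm comparison. Input figures are the
# rounded ones cited above, not independently verified.

GALLONS_PER_ACRE_FOOT = 325_851           # standard conversion
ACRE_FEET_PER_SQ_MILE_CORN = 1_000        # figure cited above

data_center_gal_per_year = 5_000_000_000  # ~5 billion gallons/year

corn_equiv_sq_miles = data_center_gal_per_year / (
    ACRE_FEET_PER_SQ_MILE_CORN * GALLONS_PER_ACRE_FOOT
)
print(round(corn_equiv_sq_miles, 1))  # -> 15.3 square miles
```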
Machine learning has been used for a very long time in science and drug discovery.
I’m not sure the chatbots are a step up from bespoke applications, but you are right: now everyone can have access to AI on the cheap rather than having to fund and develop their own app.
a) Bank prudential regulators, who ought to be looking at systemic risk (how much of bank assets are tied to one sector, AI) and potentially say "no".
b) The Fed, which should be ready (unlike in 2008) to "do what it takes" to keep a financial crisis in AI lending from pushing inflation off (or, at the moment, further above) target, as came close to happening in 2020, and which should make sure everyone knows that it is going to do so.
The graph shows lots of data-center spending by Microsoft, Meta, Amazon, and Google, and all of these are working on LLMs except Microsoft, which has a deal with OpenAI but is hosting for other companies as well as OpenAI.
Where is the spending by others, such as SoftBank, who claim to be ready to spend half a trillion dollars partnering with OpenAI on Stargate? What about xAI, which built the largest and quickest data centers in Memphis and wants to expand greatly? Tesla is also building data centers for its driving and robot AI. Isn’t Oracle spending too? And Apple must be building centers for its private AI cloud.
I wonder how much spending is also hidden. A mysterious group called Blue is trying to get a data center approved in my area, and the local government doesn’t really know who they represent, and won’t until the power and (reclaimed) water supplies are approved.
I was expecting your next post to be about the BLS and how we can be expecting record grain harvests this year…
Expect a number of "DeepSeek" moments, that is, software innovations which radically change the need for hardware resources.
The other question is whether, as time goes on, the applications of generative AI are sufficiently compelling to generate the revenue to pay back all this capital expenditure.
Clearly, there will be revenue-generating applications. I'm not yet convinced they are anywhere near big enough in the medium term to justify the trillions being spent.
I think it could parallel the telecom boom where the raw infrastructure (eg AI engines) becomes somewhat commoditized and low margin from all the engines having to compete hard for applications to pick them to ride on top of.
It's unclear how much of AI use/revenue will be people directly interfacing w/AIs versus how much will be third party applications linking to AI engines through APIs
The thing about the telecom boom is that everybody at the time thought it's the telecom companies that will big - when ultimately, they ended up as commotitized infrastructure providers to those firms who really made the big buck (today's big tech).
So the question is if we could see a similar development this time, too. Big tech spending big, but others reaping the rewards. Seems still a remote possibility right now, but not impossible.
>Seems still a remote possibility right now, but not impossible.
Allow me to play devil's advocate and argue that commoditization is actually the default for LLMs.
It's already easy to write your app using a tool like litellm or OpenRouter so that switching between different LLM providers doesn't require any code rewrite.
Take this a little further: imagine that as the industry matures, companies will *automatically* switch their preferred LLM provider on a weekly or even daily basis, depending on which provider delivers acceptable performance at the lowest cost. (Note that evals let you auto-benchmark what constitutes "acceptable performance", practically for free.)
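To make the idea concrete, here's a minimal sketch of that "pick the cheapest acceptable provider" logic. The provider names, prices, and eval scores are all made up for illustration; in practice the scores would come from automated evals and the call itself would go through a routing layer like litellm or OpenRouter.

```python
# Hypothetical routing table: provider -> (cost per 1M tokens in USD,
# latest eval score out of 100). All values are invented for illustration.
PROVIDERS = {
    "provider_a": (15.00, 92),
    "provider_b": (3.00, 88),
    "provider_c": (0.50, 71),
}

def pick_provider(min_score: float) -> str:
    """Return the cheapest provider whose eval score clears the quality bar."""
    acceptable = {
        name: cost
        for name, (cost, score) in PROVIDERS.items()
        if score >= min_score
    }
    if not acceptable:
        raise ValueError("no provider meets the quality bar")
    # min() over a dict keyed on cost gives the cheapest acceptable name
    return min(acceptable, key=acceptable.get)
```

With a quality bar of 85, the router skips the premium provider and takes the cheaper one that still clears the bar; re-running this daily against fresh eval results is the whole "automatic switching" story.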
Along *certain* dimensions, I am tempted to argue that there has *never* been an industry as easily commoditized as the LLM industry. Switching costs could become absurdly low. At least for real-life physical commodities, quality evaluations can be nontrivial, and it's cheaper to deal with the commodity provider whose physical location is closest to yours!
This argument could fail if certain LLM providers have "secret sauce", in the sense that their LLM can do tricks which competing LLMs cannot. But such "secret sauce" hasn't proven particularly durable thus far. And how many queries will actually require the "secret sauce" at any given time? In the long run, I expect to see companies automatically evaluate query complexity using tools like Flesch-Kincaid readability, then send the query to the cheap "commoditized" LLM vs the expensive "secret sauce" LLM depending on assessed complexity. And the net effect of progress in the field, over time, will be to shift more and more queries over to the cheap "commoditized" LLM.
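The complexity-based routing described above can also be sketched. This is a toy, assuming a crude vowel-group syllable counter for the Flesch-Kincaid grade (a real system would use a proper readability library and richer signals), and the model names are placeholders, not real endpoints:

```python
import re

def crude_fk_grade(text: str) -> float:
    """Very rough Flesch-Kincaid grade level.

    Syllables are approximated by counting vowel groups, which is
    inaccurate but adequate for a routing toy.
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(
        max(1, len(re.findall(r"[aeiouyAEIOUY]+", w))) for w in words
    )
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59

def route(query: str, threshold: float = 8.0) -> str:
    """Send simple-looking queries to the cheap commodity model,
    hard-looking ones to the expensive "secret sauce" model."""
    if crude_fk_grade(query) < threshold:
        return "cheap-commodity-llm"
    return "premium-llm"
```

A plain sentence routes cheap; jargon-dense text routes premium. The commenter's point is that as the cheap tier improves, the threshold effectively rises and more traffic drains to the commodity side.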
So yeah, I think there's a solid case that this plays out very similar to telecom. AI investors could easily lose their shirts.
Look, it's hard to know, but it feels like we're still at the stage of the personal computer revolution of the early 1980s where people in their garage were building things in a few weekends, except that there is no competitive moat in a bit of prompt engineering and perhaps some model fine-tuning.
The massive fortunes in tech have been built on vendor lock-in and network effects, and there haven't been any obvious examples of either thus far.
The whole AI boom is built on vendor lock-in. That's why Nvidia's market cap is now $4.2 trillion: they have some competition, but it's difficult to switch. TSMC has a monopoly on the manufacturing of AI hardware, and many of their suppliers also have monopolies. There are three companies (Amazon, Google, Microsoft) that own almost all the GPUs that companies use to run their models. Since so many companies are already locked into the Microsoft ecosystem, it's easier for them to use Microsoft's AI on all their Microsoft Word, Excel, PowerPoint, SharePoint, and Teams data.
Yes, but Nvidia are designing and selling shovels, TSMC are contract shovel manufacturers, and Amazon, Google and Microsoft are buying the shovels. What’s not clear is whether anyone is finding enough gold to justify all this expenditure on shovels in the first place.
I think it’s current, not credit, that will hamstring the AI data center buildout. When residential electrical rates spike and/or utilities are forced into inducing rolling blackouts because of data center demand, push will come to shove. And this is to say nothing of annually wasting billions of gallons of fresh water. Not a good scenario for future years of climate change-induced record heatwaves and extreme/extended droughts.
AI is one thing, but we live on a physical planet. We simply can’t afford to exponentially increase electricity and fresh-water consumption. The Big Tech behemoths and small business and the public are on a collision course. If and when push comes to shove, who pays what for electricity, and who gets to keep the lights on and air-conditioner running?
China overbuilt data centers, which it now hopes to turn into vendors. When the world’s attention is focused on the real estate crisis in China, it’s rolling out incredible solar power systems at scale.
Well, many of the data centers will use their own nuclear power plants (including restarting TMI in PA), and luckily the amount of power that can be extracted from uranium or thorium is so great that we are nowhere near stressing the planetary resources. Also, solar power doesn’t require continuous use of planetary resources either.
As for fresh water, that is not a requirement for data centers, they require water for cooling, but reclaimed water works fine, and in any case the cooling merely evaporates the water into vapor, it doesn’t use it up or make it dirty.
Water that has evaporated is lost to the cooling system and lost for other uses (agriculture, domestic or industrial). It may come back to earth as rain, elsewhere on land or, more likely, in the ocean. But it is "consumed" so far as use by humans is concerned.
As I understand it, data centers in areas with water scarcity can/are using closed loop systems that don't rely on evaporation.
The water issue is more regulatory than anything else. Solutions exist and they're not even that expensive. OTOH, if the DC operator isn't forced to, they'll go with cheaper evaporative cooling solution.
That said, water for cooling thermal power plants is another matter.
I just read an article about Texas data centers causing water restrictions for residents. The water is a huge issue, and they are building them in a lot of drought-prone areas in the South. Sorry, I can’t remember where it was I read it.
Solar at this scale requires a tremendous amount of land.
Luckily, there's a huge amount of land in the desert Southwest.
Which would require a tremendous amount of transmission infrastructure to get where they’re building data centers.
Which continues to baffle me, because it’s much cheaper to build fiber infrastructure to carry the network where the power is than it is to build electrical infrastructure (let alone the ridiculous cost of new nuclear) to carry the power to where the data centers are currently being built. There are a few issues like latency but they shouldn’t be driving this kind of development.
The water use by the data centers is actually pretty small. I’ve seen projections that in about 5 years it might be comparable to the amount of water that humans drink. But that’s still pretty small compared to household use, which is small compared to industrial use, which is tiny compared to agricultural use.
It’s worth paying attention to 1-2% increases of water use, but it’s not worth getting angry about, since it can easily be fixed by raising prices so that whatever uses aren’t valuable get curtailed to send more water to the more valuable uses.
We've heard the 'scarcity is really a pricing problem' argument before - applied to what was then called peak oil. The argument fails when applied to a public good necessary for biological life. Who gets to decide which uses aren't valuable? In a market economy, those with ability to spend. Then the poor won't be able to afford water, or food. In that logic, it is then their lives that are not valuable.
If water were to skyrocket in price, by a factor of 5, poor people would still be able to afford water, though it might pinch some other parts of their budget a bit.
It's not that big of a deal if poor people who choose to live in the desert can't afford to use thousands of gallons per week watering their lawn. If they want to have a lawn, they can move away from the desert. We also don't need to grow alfalfa in the desert. You can buy your feed for pigs & cows from parts of the country that get enough rain. None of this is a crisis.
First, it’s generally not a good idea to think you know how someone feels about an issue. Confusing, deliberately or not, disappointment and skepticism with “angry” is a rhetorical prop. Second, why would anybody trust projections of water usage and waste from biased sources? For example, when Google wanted to build a server farm in my state, it required that its power consumption not be disclosed to the public. Increase water rates? As if poor people don’t already have enough trouble paying utility bills. Some areas near data centers have already lost access to potable water. xAI’s diesel generators are making the air unbreathable in some poor neighborhoods.
Have you found any projections higher than the one I mentioned? Why would you think the one I mentioned is a biased source?
You didn’t identify your source for projections. I cite my sources that use empirical data. I’m skeptical of “projections.” I don’t have time to do somebody else’s due diligence. There is no shortage of studies and white papers. I’m out of time to spend on vague projections. Best of luck.
How many times can the same actors cause untold economic destruction before we, I don’t know, rein them in?
With the railroads, it was two or three times. With the telecom companies it was once. The tech companies haven’t done it yet, so maybe it will be once.
Well, there was Windows Vista…
The "they" here is just humans. This has happened again and again throughout history.
Shades of 2008: Noah brings the dismal to the gold rush.
This is the knee-jerk reaction, as if deploying more nuclear power/nuclear proliferation is the magic wand that will solve the energy issue. Nuclear history is only a bit more than 75 years, and there have been plenty of accidents and irresponsible storage of nuclear waste. Hanford’s buried nuclear waste is leaching toward the Columbia River with nothing to stop it. If I used 75 years as a control to extrapolate the guaranteed safety of nuclear waste (half-life of 225,000 years), I’d get laughed out of the lab. No, wait. Because it’s nuclear waste, it gets a pass. The Earth’s crust, as we know, is fragile and under tremendous pressure from tectonic-plate subduction zones. The Earth annually experiences 5,000 earthquakes. Please show me a map and timeline of 225,000 years indicating where earthquakes will occur and their intensity. Perhaps the nuclear power industry has Nostradamus on the payroll. Look at how many nuclear power plants are built right on the “Pacific Ring of Fire.” Fukushima is one of many. The power plant near San Luis Obispo has had numerous safety problems and sits on the San Andreas Fault, one of the fastest-moving geological faults in the world. So, when it came up for renewal of its operating license, and after testimony documenting its history of close calls and safety violations, what was its response? It applied for another 25-year operating license.
Very smart people want to rush headlong into building more nuclear power plants. The big-tech behemoths of Silicon Valley want to build modular nuclear power units on-site at data centers. There is no way in hell these companies should be entrusted with nuclear materials. Creating multiple soft targets for future terrorists is insane.
We’re still struggling to clean up past nuclear-waste mistakes. Last week, Fukushima had to stop pumping “treated” nuclear waste into the ocean because of a tsunami warning. Fukushima is still “cleaning up” after the previous tsunami. Pumping “treated” nuclear waste into the ocean is a good example of the cynical environmental-lab phrase “the solution to pollution is dilution”:
“ . . . the Department of Energy began cleaning up the site in 1996. But the process has dragged on well past its initially projected completion date. Officials now say that cleanup activities will be complete by 2065.”
https://www.nytimes.com/2025/08/01/science/radioactive-wasps-nuclear-savannah-river.html
<insert power source> history is only a bit more than 75 years, and there have been plenty of accidents and irresponsible storage of <insert power source> waste.
"the guaranteed safety of nuclear waste (half life of 225,000 years), I’d get laughed out of the lab. "
To what exactly are you referring? The 7 long lived fission products?
https://pubs.geoscienceworld.org/ssa/bssa/article-abstract/112/5/2689/615140/Origin-of-the-Palos-Verdes-Restraining-Bend-and
https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2022GL099220
Imagine that “leaving the next generation an option”:
https://law-in-action.com/2014/07/02/nuclear-power-taiwan/
Seismologists also study the Ring of Fire, as more than 80% of earthquakes with a magnitude of 8.0 or higher have occurred there. And other scientists approve of building nuclear power plants on the Pacific Ring of Fire. Again, very smart people deciding what the margin of safety should be for generations to come.
https://www.icanw.org/hanford_s_dirty_secret_and_it_s_not_56_million_gallons_of_nuclear_waste
If the power grid is unable to supply sufficient electricity, which currently it cannot, will the AI productivity promises fail and the loans come due without the ability to pay them off? By hamstringing the development of renewable energy sources, which can come online much faster than gas-fired electrical plants, Trump may have planted the seeds of the next economic collapse.
There was an Odd Lots podcast episode on private credit a while back (maybe last summer) and the thing I took away from it was that private credit is crazy complicated and there seems to be a lot of gamesmanship in the dealmaking around things like seniority, covenants, ability to restructure, etc. It struck me that analyzing a deal seemed to require as many lawyers as finance people (maybe more lawyers).
I came away from that episode viewing private credit as sort of the credit-market cousin of SPACs, except instead of retail, this time the marks are the ultra-rich and institutional investors.
$150 billion of banking industry exposure to private credit funds is a fraction of the assets on bank balance sheets. Not of a size, even if it all were uncollectible, to cause a banking crisis. But... given private credit’s role in providing new term credit to small and medium-sized businesses, and the banks’ slow retreat from same, the health of private credit is important to credit intermediation away from the commanding heights of the US domestic economy. Definitely worth watching for strains in that sector.
I've read arguments that LLMs are the first step on the road to AGI and investing in them heavily will get us to it soon. I've also read arguments that LLMs are probably a dead end that will plateau in usefulness soon. Between those two ends of the spectrum there are more moderate arguments that it will be an important technology, but not lead to AGI.
I am not learned enough on this topic to have a strong opinion on which argument is closer to being correct. It seems like if the first one is right the data centers won't cause a crash, but if the second one is right, they almost certainly will. Even if the first argument is right and LLMs do lead to AGI, there could still be a bubble and a crash if it takes somewhat longer than expected. It certainly seems like a risky investment, although I suppose that if it pays off, it will really pay off.
I don’t think LLMs will directly lead to AGI. Even though they are very powerful now, at the end of the day they are still just predicting the next token in a sequence. This idea of “next-token prediction” has been around for a long time. Transformers made it work at scale, but the core idea didn’t really change. LLMs can look like they are thinking or planning, but that’s mostly because they’ve seen a lot of text and learned patterns from it. They don’t actually have goals, or understand what they’re saying, or think ahead like humans do. They don’t build a model of the world in their head or reason about consequences.
One can argue that humans are also “prediction machines,” and I get that point because our brains are always trying to guess what’s coming next. But I feel like human thinking is still very different. We can plan, adapt, and understand things deeply, not just copy patterns from data. LLMs are getting more advanced with tools, memory, and retrieval, so maybe they will get closer to AGI over time. But I don’t think next-token prediction alone is enough for true intelligence. It’s an important part, but not the full picture.
The current chatbot assistants only have an LLM as one layer. The next word prediction functions to give it the ability to generate sentences on any topic in any context. But it also has a separate layer of training that tells it to focus on sentences that humans find helpful in an assistant (which is why everyone paid so much more attention in December 2022, when ChatGPT was released, than they did in June 2020, when GPT 3 was released). And then in the past year they’ve started a whole new layer of training where the systems practice using their words to find solutions to reasoning problems - they’re no longer just predicting what a person would say, but are using words in patterns that have been found to effectively lead to solutions to problems.
Here is an interesting article about how LLMs "think": https://www.anthropic.com/research/tracing-thoughts-language-model
I agree with you that modern language models are not just doing "next word prediction" anymore. In the paper, they show that Claude plans ahead when writing poetry. For example, it chooses the rhyme word, like "rabbit", first, then builds the sentence to reach that word. Also, when doing math like 36 + 59, Claude uses different paths together: one to guess the answer roughly, and another to get the exact digits. It’s not just copying training data; it’s really solving the task in its own way. But still, I don’t think this is AGI or human-like intelligence. These models don’t have real goals, no experience from the real world, and no true understanding. Even when it looks like the model is planning or thinking, it is just very smart pattern prediction, trained on huge text datasets. It is not thinking like humans with memory, emotions, or awareness.
At the same time, this becomes a bit philosophical. If humans are also just learning and repeating complex patterns from life, then maybe the line between machine and human intelligence is not so clear. But for now, I believe there is still a big difference. These models are powerful and impressive, but they are not real minds or conscious systems.
Yeah, it's absolutely not AGI, and I don't mean to claim that it's the same as human intelligence. All I mean to say is that it's using several strategies, and learning how to use them effectively towards its goals, rather than just copying human text. I suspect that at least some of the things going on in human minds work roughly similarly to what's going on in each of the types of systems that are in the chatbots at this point. I also conjecture that human intelligence is just a bunch more of these types of things - nothing super-special that couldn't be achieved over a few decades of these sorts of developments.
But I also don't think that working like humans is going to be sufficient for what people are imagining out of AGI - I don't think humans are truly general intelligences either, and I suspect that general intelligence is actually impossible, and that all intelligences are just specialized in different ways.
If interested, Gary Marcus has some very good analysis/content related to limitations of LLMs/GenAI, and the need for other AI techniques to reach AGI : https://substack.com/@garymarcus/posts
Even in the moderate scenarios where LLMs are useful, will they, as Noah asks, actually turn a profit? Revenue is the easy part.
The claim that data centers don’t waste inordinate amounts of water doesn’t, um, hold water:
https://dgtlinfra.com/data-center-water-usage/
You should actually look at the numbers on that site. It looks like the biggest user there is Google, with about 5 billion gallons of water per year.
For comparison, a corn farm uses about 1,000 acre-feet of water per square mile. An acre-foot is about 300,000 gallons. So Google is using about as much water as 17 square miles of corn.
That’s obviously significant - I think it’s about comparable to the water use of a city like Los Angeles or Chicago. But it’s still pretty small on the scale of agricultural users of water, which is where most water goes.
That's obviously not significant because there are ~150,000 square miles of corn grown in the US.
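The back-of-envelope comparison in this sub-thread can be checked directly, using the round numbers the commenters cite (1,000 acre-feet per square mile of corn, ~300,000 gallons per acre-foot, ~5 billion gallons/year for the largest data-center user, ~150,000 square miles of US corn):

```python
# Round figures taken from the thread, not authoritative measurements.
GAL_PER_ACRE_FOOT = 300_000
ACRE_FEET_PER_SQMI_CORN = 1_000
DATACENTER_GAL_PER_YEAR = 5_000_000_000
US_CORN_SQMI = 150_000

# Water used by one square mile of corn per year: 3e8 gallons
gal_per_sqmi_corn = GAL_PER_ACRE_FOOT * ACRE_FEET_PER_SQMI_CORN

# Corn-equivalent footprint of the biggest data-center user: ~16.7 sq mi
equiv_corn_sqmi = DATACENTER_GAL_PER_YEAR / gal_per_sqmi_corn

# Share of total US corn acreage: roughly one hundredth of a percent
share_of_us_corn = equiv_corn_sqmi / US_CORN_SQMI
```

So both sides of the exchange are arithmetically consistent: roughly 17 square miles of corn is a real amount of water, and it is also a rounding error against national agricultural use.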
Quibble with the claim about eventually using all the railroads: there were a TON of railroads that were redundant, went nowhere, or were never finished.
https://en.wikipedia.org/wiki/List_of_unused_railways#United_States
the interurbans in the 1900s were also a huge bubble, but did less to tank the economy, so we don't talk about them as much
This seems to be mostly an ego thing- the hunt for AGI.
All of this spending is not necessary to pilfer personal data and sell ads.
On the other hand, think what Mag 7 (ex NVDA) margins will be like when they stop wasting so much money
On the other hand, I've seen scientists claim they're now able to do a hundred and fifty years' worth of research in a month with AI.
And I will believe their claims when I see 150 years of scientific research get done.
Machine learning has been used for a very long time in science and drug discovery.
I’m not sure the chatbots are a step up from bespoke applications, but you are right: now everyone can have access to AI on the cheap rather than having to fund and develop their own app.
Well worth thinking about specifically:
a) Bank prudential regulators, who ought to be looking at systemic risk and at how much of bank assets are tied to one sector (AI), and potentially say "no"
b) The Fed, which should be ready (unlike in 2008) to "do what it takes" to keep a financial crisis from AI lending from knocking inflation off target (it is currently above it), as such a shock came close to doing in 2020, and should make sure everyone knows that it is going to do so.
The graph shows lots of data center spending by Microsoft, Meta, Amazon, and Google; all of these are working on their own LLMs except Microsoft, which has a deal with OpenAI but is hosting for other companies as well as OpenAI.
Where is the spending by others, such as SoftBank, which claims to be ready to spend half a trillion dollars partnering with OpenAI on Stargate? What about xAI, which built the largest and quickest data centers in Memphis and wants to expand greatly? Tesla is also building data centers for its driving and robot AI. Isn’t Oracle spending also, and Apple must be building centers for its private AI cloud.
I wonder how much spending is also hidden. A mysterious group called Blue is trying to get a data center approved in my area, and the local government doesn’t really know who they represent and won’t until the power and (reclaimed) water supplies are approved.