In the following "Odd Lots" podcast episode, financial analyst and MIT fellow Paul Kedrosky argues that the AI boom is something historically unique and uniquely dangerous: a "meta-bubble" that combines the riskiest elements of every major financial crisis into a single, unprecedented event.
Beyond the story of the multi-bubble (which is probably a better term than "meta-bubble," to avoid confusion with the company):
One of the podcast co-hosts, Tracy Alloway, also brought up the issue of how private credit used to be called shadow banking:
Kedrosky points out that the entire shadow banking industry amounts to roughly $1.7 trillion.
The episode also sheds light on the depreciation of AI chips. Why does this matter? Those following Dr. Michael Burry, of Big Short fame, will know that he has deregistered his Scion Asset Management after two important announcements. First, he said he is shorting Palantir and Nvidia. Second, he raised the alarm about changes in depreciation policies at the big tech firms (see his tweet here), which he argues have overstated earnings. However, look to point number 3 in this post to get Kedrosky's take.
The other piece of context is to understand how much leverage is now linked to the AI Boom/Bubble:
“…what's increasingly happening is the problems they're solving are really mundane. And so it's things like: I'm trying to onboard a bunch of new suppliers right now, the people have weird zip codes and they sometimes don't match up. I have a dude in the back who fixes that; I’d rather have someone who could do it faster so I could onboard a lot more suppliers. It turns out these small language models are really good at that, these micro models like IBM's Granite and whatever else, but those things require a fraction of the training and are very cheap…”
Podcast Key Takeaways
1. It's Not Just a Tech Bubble; It's a "Multi-Bubble"
Paul Kedrosky's central thesis is that the current AI boom is not just another technology bubble; it's a "meta-bubble" (see the comments above about why I think it should be called the multi-bubble). He argues that for the first time in history, all the key ingredients of every major historical bubble have been combined into a single event, creating a situation of unparalleled risk.
Kedrosky identifies four core components that are simultaneously at play:
• A Real Estate Component: Data centers, the physical heart of the AI buildout, are a unique asset class sitting at the intersection of industrial spending and speculative real estate. This brings the property speculation element of past crises directly into the tech boom.
• A Powerful Technology Story: The narrative around AI is one of the most compelling technology stories ever told, comparable in scope to foundational shifts like rural electrification. This powerful story fuels investment and speculation on a massive scale.
• Loose Credit: The financing of the boom is being supercharged by loose credit, with a crucial distinction from past cycles: private credit has now largely supplanted traditional commercial banks as the primary lenders in this specific buildout.
• A Government Backstop: An "existential competition" narrative, framing the AI race as a critical national security issue between the US and China, has created a sense of a limitless, government-endorsed spending imperative. Nations around the world are pursuing "sovereign AI," suggesting capital is no object.
2. The Financing Looks Frighteningly Familiar; It Was Used by Enron
The financial engineering behind the AI boom rhymes with the complex and opaque structures central to the 2008 financial crisis. Even cash-rich tech giants are increasingly using Special Purpose Vehicles (SPVs), a move designed to keep massive amounts of debt off their balance sheets. The motivation, according to Kedrosky, is to avoid upsetting shareholders about diluting earnings per share to fund these colossal projects. The Byzantine complexity of these SPV structures, he notes, looks like the "forest with all the spiderwebs".
This structure incentivizes a dangerous blending process. To make the data center asset more attractive as a financial instrument, sponsors combine stable, low-yield tenants like hyperscalers with "flightier tenants" who pay much higher rates. This blending improves the overall yield, making it easier to securitize and sell to investors.
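To make the blending mechanics concrete, here is a minimal Python sketch of the weighted-average yield calculation involved. The lease shares and rates below are hypothetical numbers chosen purely for illustration; they are not figures from the episode.

# Illustrative sketch of tenant "blending" in a data-center securitization.
# All rates and lease shares are hypothetical, chosen only to show the
# mechanics Kedrosky describes, not figures from the episode.

def blended_yield(tenants):
    """Weighted-average yield across tenants, weighted by share of leased capacity."""
    return sum(share * rate for share, rate in tenants)

# A stable hyperscaler anchor tenant at a low rate...
anchor_only = [(1.00, 0.055)]                 # 100% of capacity at 5.5%

# ...versus a mix that adds "flightier" tenants paying much higher rates.
blended = [(0.70, 0.055),                     # 70% hyperscaler at 5.5%
           (0.30, 0.110)]                     # 30% riskier tenants at 11.0%

print(f"Anchor-only yield: {blended_yield(anchor_only):.2%}")   # 5.50%
print(f"Blended yield:     {blended_yield(blended):.2%}")       # 7.15%
# The headline yield improves, but the credit quality backing it has changed.

The headline number looks better to a buyer of the securitized instrument, even though the cash flows behind it are now partly dependent on the riskier tenants.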
For details on Meta’s and x.ai’s use of SPVs, see this article. And for a refresher on how Enron used SPVs to hide its debt from investors, check out this article.
3. The Assets Have a Short Expiration Date
A critical flaw in the AI financial structure is a dangerous "temporal mismatch" between long-term debt and short-lived assets. This risk is being actively obscured by accounting maneuvers. Kedrosky points out that around four years ago, tech companies extended the depreciation schedules for data center assets. This was done, however, just as the AI buildout began relying on GPUs with dramatically shorter lifespans.
Yet these short-lived GPUs are the core collateral for loans stretching out 30 years. This creates an "unprecedented temporal mismatch" and a constant, significant refinancing risk that will come to a head in the coming years when a massive wave of these debts comes due.
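A rough, back-of-the-envelope Python sketch shows why the schedule extension matters for reported earnings. The capex figure and useful lives below are assumptions for illustration only, not numbers from the podcast or any company filing.

# Sketch of how stretching a depreciation schedule flatters earnings.
# The dollar amounts and useful lives are hypothetical, for illustration only.

capex = 10_000_000_000          # assume $10B of GPUs and servers

def annual_straight_line(cost, useful_life_years):
    """Straight-line depreciation expense per year."""
    return cost / useful_life_years

short_life = annual_straight_line(capex, 3)   # closer to a GPU's realistic useful life
long_life = annual_straight_line(capex, 6)    # an extended schedule

print(f"Annual expense, 3-year life: ${short_life:,.0f}")
print(f"Annual expense, 6-year life: ${long_life:,.0f}")
print(f"Earnings boost from extension: ${short_life - long_life:,.0f} per year")

# Meanwhile the debt financing the asset can run for decades:
debt_term_years = 30
print(f"Asset replacement cycles inside one 30-year loan, at ~3-year useful life: "
      f"{debt_term_years // 3}")

Halving the annual depreciation charge flatters current earnings, while the mismatch between a roughly three-year asset and a thirty-year liability is what creates the recurring refinancing risk Kedrosky describes.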
4. The Business Models Run on "Negative Unit Economics"
Before diving into the flawed economics, Kedrosky offers a crucial disclaimer: "AI is an incredibly important technology. What we're talking about is how it's funded." The problem is that the core products are fundamentally unprofitable. Unlike traditional software, where fixed costs are spread across more users, the costs for large language models (LLMs) rise more or less linearly with use. This leads to what is termed "negative unit economics."
"...a fancy way of saying that we lose money on every sale and try to make it up on volume..."
When confronted with this reality, the justification for the massive capital expenditure shifts to what Kedrosky calls "faith-based argumentation about AGI." He cites a recent investment bank call where analysts justified the spend using a top-down model. First, they calculated the "global TAM for human labor," then simply assumed AI would capture 10% of it. Kedrosky points out that such a number is hard to pin down with any precision.
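Reduced to arithmetic, that top-down argument looks something like the sketch below. The labor-market figure is a rough placeholder I have assumed, and its squishiness is exactly the weakness Kedrosky highlights.

# The "faith-based" top-down model Kedrosky describes, reduced to its arithmetic.
# The labor-market figure below is an assumed placeholder; the podcast does not
# pin it down, which is the point: the answer is whatever you choose to assume.

global_labor_tam = 40e12        # assume ~$40 trillion of annual human labor value
assumed_capture = 0.10          # the model simply asserts AI captures 10%

implied_ai_revenue = global_labor_tam * assumed_capture
print(f"Implied AI revenue: ${implied_ai_revenue / 1e12:.1f} trillion per year")

# Halve or double either assumption and the "justified" capex swings by trillions,
# which is why Kedrosky calls this faith-based rather than forecastable.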
5. We're Betting Trillions on Potentially Inefficient Technology
A counter-intuitive risk is that the entire technological path the US is on may be a bloated, inefficient dead end. The current American strategy focuses on building ever-larger, computationally intensive models. This stands in stark contrast to China's "distillation" or "train the trainer" approach, where large models are used to train smaller, highly efficient ones. (See the IBM Granite example quoted in the intro for an illustration of this observation, and the sketch at the end of this section for the basic mechanics.)
This suggests huge efficiency gains are possible. Kedrosky notes that the transformer models underlying today's LLMs went from the lab to market faster than almost any technology in history, and as a result, they are "wildly inefficient and full of crap."
The implication is profound. If massive efficiency gains are achievable, as China's approach suggests, it means that the current forecasts for future data center demand are likely "completely misforecasting the likely future arc of demand for compute." The entire financial model is based on a technological path that may already be obsolete.
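For readers who want to see what "distillation" actually involves, here is a minimal NumPy sketch of the core idea: a small student model is trained to match a large teacher's softened output distribution. The temperature, logits, and loss choice are illustrative assumptions, not a description of any specific lab's pipeline.

# Minimal sketch of the "distillation" idea: a large teacher's soft predictions
# become the training target for a much smaller student. Toy-sized and illustrative.

import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, optionally softened by a temperature."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    p = softmax(teacher_logits, temperature)   # teacher's soft targets
    q = softmax(student_logits, temperature)   # student's current predictions
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

teacher = np.array([4.0, 1.0, 0.5])            # confident large-model logits
student = np.array([2.0, 1.5, 1.0])            # smaller model, still learning
print(f"Distillation loss: {distillation_loss(teacher, student):.4f}")
# Minimizing this loss over many examples transfers the teacher's behavior into
# a model that is far cheaper to train and run, the efficiency gain Kedrosky flags.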
Closing thoughts
Many contend that we are in an AI bubble, and it's hard to argue against that. Across past waves of technology investment, whether the dotcom bubble of the 1990s, the radio bubble of the 1920s, or the railway bubble of the 1840s, there is a consistent pattern of investors engaging in a euphoric rush to capture a "powerful technology story". The key challenge will be containing the downstream effects of the bubble bursting. We have seen how the clean-up from the 2008 financial crisis was still "in progress" when COVID hit. Inflation is still running high, an after-effect of that last crisis. How much room is left for further maneuvering? Unfortunately, we will have to wait and see how things turn out.