I’ve been listening to Better Offline, where tech journalist Ed Zitron takes a harsh view of the techno-optimism of the AI industry, arguing that the industry’s fundamentals simply don’t add up.
In a recent three-part episode, Zitron lays out how the generative AI market is a “deeply unstable” phenomenon, “built on vibes and blind faith,” and heading towards an “inevitable collapse”.
This assessment is largely corroborated by my own Gemini Deep Research, which concludes with “high confidence” that the current valuation of the AI sector “exhibits characteristics consistent with an asset bubble”.
The market’s singular focus on GPU sales and compelling “AI narratives” over tangible, profitable use cases creates a precarious foundation.
The likelihood of this “bubble” undergoing a “significant correction or ‘pop'” is assessed as “moderately high to high” within the next 12-24 months, driven by unsustainable burn rates, a pervasive lack of clear monetisation paths, and potential shifts in hyperscaler capital expenditure (CapEx) strategies.
Here’s a detailed look at why this bubble is believed to exist:
Extreme Market Concentration and Reliance on NVIDIA

The US stock market’s stability is highly vulnerable due to its reliance on NVIDIA and the “Magnificent Seven” (NVIDIA, Microsoft, Alphabet, Apple, Meta, Tesla, and Amazon). These seven companies collectively account for approximately 33% to 35% of the total value of US stocks. NVIDIA’s market value alone accounts for about 19% of the Magnificent Seven and roughly 7.1% to 9% of the entire US stock market, making its influence outsized.
NVIDIA’s soaring stock value is directly tied to its continued revenue growth, with significant year-over-year increases in data centre revenue. Crucially, more than 42% of NVIDIA’s revenue originates from just five of the Magnificent Seven companies (Microsoft, Amazon, Meta, Alphabet, and Tesla) continually buying more GPUs. This creates a “feedback loop” where hyperscalers invest massively in AI infrastructure, driven by the perceived necessity to lead, which in turn fuels NVIDIA’s revenue and stock price, reinforcing the “AI boom” narrative. The concern is not NVIDIA’s business itself but that a “deceleration in its growth or a shift in hyperscaler purchasing patterns” could trigger a significant market re-pricing.
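The concentration figures above can be sanity-checked with simple arithmetic. The percentages below are the post’s own estimates, not official market data:

```python
# Back-of-envelope check on the concentration figures quoted above
# (percentages are the post's estimates, not official data).
mag7_share_of_us = 0.34        # Magnificent Seven: ~33-35% of US market cap
nvda_share_of_mag7 = 0.19      # NVIDIA: ~19% of the Magnificent Seven

nvda_share_of_us = mag7_share_of_us * nvda_share_of_mag7
print(f"NVIDIA's implied share of the US market: {nvda_share_of_us:.1%}")
# ~6.5%, consistent with the low end of the 7.1-9% range quoted.
```

The multiplication comes out near the bottom of the quoted range, which suggests the 9% figure reflects a larger Magnificent Seven share or a later snapshot of NVIDIA’s valuation.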
The Profitability Paradox: Massive Investment, Minimal Return

Despite “colossal capital expenditures” by major tech companies, their AI initiatives yield “minimal to no profit”. The Magnificent Seven collectively planned to spend an “insane” sum of over half a trillion dollars ($560 billion) across 2024 and 2025 on CapEx, overwhelmingly directed towards generative AI.
However, much of the reported “AI revenue” appears to be internal transfers at cost, general cloud growth, or bundled services where AI is a feature rather than the primary revenue generator.
Individual company examples include:

* Microsoft generated about $3 billion in “real” AI revenue (excluding OpenAI’s at-cost spend) in 2025, against $80 billion in CapEx.
* Amazon is estimated to make only $5 billion in AI revenue in 2025 on $105 billion in CapEx.
* Meta is “simply burning cash” on generative AI, with no clear product monetisation, despite expectations of $2-3 billion in GenAI-driven revenue in 2025. Nearly all of its revenue (99%) still comes from advertising, with AI serving as an embedded feature.
* Tesla does not appear to generate direct revenue from generative AI. Elon Musk’s separate AI venture, xAI, reportedly burns $1 billion per month while generating only $100 million in annualised revenue.
* Apple has taken an “asset-light approach to AI” with Apple Intelligence, which is dismissed as “ineffective” and not a major revenue driver, despite $11 billion in CapEx.
This collectively suggests a “distinct lack of clear, profitable, and substantial direct AI revenue streams” for the Magnificent Seven.
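Put numerically, the per-company figures above imply tiny AI-revenue-to-CapEx ratios. A quick sketch using the post’s estimates (all USD billions, none audited):

```python
# 2025 estimated "real" AI revenue vs. planned CapEx, USD billions,
# using the post's figures (estimates, not audited numbers).
figures = {
    "Microsoft": (3, 80),
    "Amazon": (5, 105),
}

for name, (ai_rev, capex) in figures.items():
    ratio = ai_rev / capex
    print(f"{name}: ${ai_rev}B AI revenue on ${capex}B CapEx -> {ratio:.1%}")
```

Both ratios land under 5%, i.e. gross AI revenue (not profit) covering only a sliver of a single year’s capital outlay.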
Leading AI Startups are Deeply Unprofitable

The financial models of key AI startups like OpenAI and Anthropic further highlight the paradox, as both companies “lose billions of dollars a year”. For instance, OpenAI projected $12.7 billion in revenue for 2025 but reported an approximate $5 billion loss on $3.7 billion in revenue in 2024, with expenses including $3 billion for model training and $2 billion for running models. Anthropic anticipates a cash burn of $3 billion for 2025 despite reaching $4 billion in annualised revenue. These figures largely corroborate claims of substantial losses and reliance on continuous capital infusion.
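The OpenAI figures quoted above make the unit economics concrete. Using the post’s reported 2024 numbers (USD billions):

```python
# OpenAI's reported 2024 economics, per the post (USD billions).
revenue = 3.7
loss = 5.0
training = 3.0    # model training spend
inference = 2.0   # cost of running models

print(f"Loss per dollar of revenue: ${loss / revenue:.2f}")
print(f"Training + inference alone: ${training + inference:.1f}B, "
      f"{(training + inference) / revenue:.0%} of revenue")
```

On these numbers the two compute line items alone exceed total revenue, before salaries, sales, or anything else.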
The use of “annualised revenue” (ARR) is criticised as misleading. While standard in SaaS, it obscures actual profitability and volatility, especially given high churn risk. Companies like Cursor, Perplexity, and Glean illustrate these challenges.
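The ARR criticism comes down to simple arithmetic: the metric multiplies the most recent month of recurring revenue by twelve, so one strong month inflates the headline figure and churn is invisible until it happens. A hypothetical illustration (the revenue series is invented, not any real company’s):

```python
def arr(latest_month_revenue_m: float) -> float:
    """ARR as commonly quoted: latest month's recurring revenue x 12."""
    return latest_month_revenue_m * 12

# Hypothetical startup, $M per month: a viral spike, then churn.
monthly = [1.0, 1.2, 4.0, 1.5]

print(f"ARR quoted at the spike: ${arr(max(monthly)):.0f}M")
print(f"Annualised average month: ${arr(sum(monthly) / len(monthly)):.1f}M")
```

The spike-month ARR headline is more than double what the average month supports, which is exactly the gap the critique points at.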
This dynamic represents a “Subprime AI Crisis”: companies provide services at a loss to win customers, then raise prices or impose “wildly onerous rates”.
Generative AI is a Feature, Not Infrastructure

A core argument is that generative AI is “not infrastructure” and fundamentally differs from the developmental path of Amazon Web Services (AWS). AWS emerged from Amazon’s own necessity to manage massive web traffic and deliver software, eventually offering its surplus capacity as a service to others, meeting a “proven external market for this utility”. AWS became reliably profitable, driven by clear demand.
In contrast, generative AI appears to be a “supply-driven model,” where powerful models are developed, and then companies actively seek compelling use cases. It “feels more like a feature of cloud infrastructure rather than the infrastructure itself”. Its use cases are generally limited to tasks like chatbots, summarisation, content generation, and coding assistance. This positioning, coupled with the inherent similarity of core LLM capabilities across models, leads to “rapid commoditization,” making it “exceedingly difficult to build a sustainable, profitable business”. It’s “near impossible” to build a “moat on top of LLMs,” as the valuable intellectual property remains with the model developers.
The “Agent” Fallacy and Misleading Capabilities

The term “AI agent” is described as one of the “most egregious acts of fraud,” as companies often market advanced chatbots as autonomous agents capable of replacing human jobs. Salesforce’s Agentforce, for instance, is labelled a “goddamn chatbot program”.
Dependency on Unproven Entities

The AI boom relies heavily on companies with limited experience or unstable financial footing. OpenAI’s future expansion is heavily dependent on partners like CoreWeave and Crusoe, neither of which appears to have built a single AI data centre before.
SoftBank’s Strain and Funding Challenges

SoftBank’s immense financial commitments to OpenAI and the Stargate data centre project (estimated to be between $52 billion and $62 billion) are putting it in “dire straits”.
Rebuttals to Optimists
Zitron directly addresses and dismisses several common arguments made by AI optimists:
“Amazon Web Services (AWS) also lost money initially, so AI will too.” This is one of the “most annoying and consistent responses”. The rebuttal is that AWS’s trajectory was fundamentally different: it grew out of Amazon’s own proven internal need and met clear external demand, whereas generative AI is a supply-driven bet still searching for its use cases.
“The cost of inference is coming down.” Zitron asserts there is “no proof of this statement”. While the cost of tokens may be decreasing, this is “not the cost of inference going down”. Larger, more “reasoning heavy” models like Claude Opus 4 often cost more to run. The price model developers charge is also not equivalent to the true cost of inference. Companies struggle with “massive spikes in costs that come from their power users,” making budgeting difficult.
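The power-user problem can be illustrated with a toy flat-rate subscription model. All numbers here are assumptions for illustration, not the actual economics of any provider:

```python
# Toy model of flat-rate pricing vs. usage-based serving cost.
# Both constants are assumptions, not any provider's real figures.
FLAT_PRICE = 20.0      # $/month subscription
COST_PER_MTOK = 2.0    # provider's true cost per million tokens

def monthly_margin(tokens_millions: float) -> float:
    """Margin on one subscriber under flat-rate pricing."""
    return FLAT_PRICE - tokens_millions * COST_PER_MTOK

for usage in [1, 5, 150]:  # million tokens/month; the last is a power user
    print(f"{usage:>4}M tokens -> margin ${monthly_margin(usage):+,.0f}")
```

A single heavy user wipes out the margin from many light ones, which is why usage spikes make flat-rate AI products so hard to budget.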
“ASICs (Application-Specific Integrated Circuits) will reduce costs.” The feasibility and impact of ASICs are questioned.
“The government will bail them out or fund them.” Zitron dismisses this as an unrealistic “doomer philosophy”.
“AI agents will eventually work and replace jobs.” This is heavily criticised as a “blatant fucking lie” and “manipulative attempt to boost stock valuations”.
“AGI (Artificial General Intelligence) or the singularity is coming.” These terms are seen as “manipulative” and used to “obfuscate the actual abilities of large language models”. The concept of AGI is considered “fictional,” with even Meta’s chief AI scientist stating it won’t come from simply scaling up LLMs. Stories about models “lying, cheating, and stealing” are intentionally deceptive, implying a non-existent autonomy.
“Companies are seeing growth from AI.” This is often “hand-waving to avoid telling you how much money these services are actually making them”. If they were making good money, they “wouldn’t shut the fuck up about it”. Much of the reported “AI revenue” is “internal transfers at cost, general cloud growth, or bundled services where AI is a feature rather than the primary revenue generator”.
In essence, the AI bubble is described as an “unsustainable investment,” lacking profitability, built on a “fragile interdependence on a few key players and their hardware purchases,” and fuelled by “speculative narratives” rather than tangible, profitable applications.
submitted by /u/Odballl