
I’ve been listening to Better Offline, where tech journalist Ed Zitron takes a harsh view of the AI industry’s techno-optimism, arguing that the fundamentals don’t add up by any stretch.

In a recent three-part episode, Zitron lays out how the generative AI market is a “deeply unstable” phenomenon, “built on vibes and blind faith”, and heading towards an “inevitable collapse”.

This assessment is largely corroborated by my own Gemini Deep Research report, which concludes with “high confidence” that the current valuation of the AI sector “exhibits characteristics consistent with an asset bubble”.

The market’s singular focus on GPU sales and compelling “AI narratives” over tangible, profitable use cases creates a precarious foundation.

The likelihood of this “bubble” undergoing a “significant correction or ‘pop'” is assessed as “moderately high to high” within the next 12-24 months, driven by unsustainable burn rates, a pervasive lack of clear monetisation paths, and potential shifts in hyperscaler capital expenditure (CapEx) strategies.

Here’s a detailed look at why this bubble is believed to exist:

  • Extreme Market Concentration and Reliance on NVIDIA. The US stock market’s stability is highly vulnerable due to its reliance on NVIDIA and the “Magnificent Seven” (NVIDIA, Microsoft, Alphabet, Apple, Meta, Tesla, and Amazon). These seven companies collectively account for approximately 33% to 35% of the total value of US stocks. NVIDIA’s market value alone accounts for about 19% of the Magnificent Seven and roughly 7.1% to 9% of the entire US stock market, making its influence outsized.
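
    As a quick sanity check, multiplying the report’s own estimates together roughly reproduces the quoted range (a minimal sketch; the percentages are the report’s figures, not market data I’ve verified):

    ```python
    # Back-of-the-envelope check of the concentration figures quoted above.
    # All inputs are the report's estimates, not verified market data.
    mag7_share_of_market = 0.34   # Magnificent Seven ~33-35% of US stocks (midpoint)
    nvidia_share_of_mag7 = 0.19   # NVIDIA ~19% of the Magnificent Seven

    # Implied NVIDIA share of the whole US market:
    nvidia_share_of_market = mag7_share_of_market * nvidia_share_of_mag7
    print(f"Implied NVIDIA share of US market: {nvidia_share_of_market:.1%}")
    # => ~6.5%, just below the quoted 7.1-9% range, which suggests the
    #    rounded inputs are conservative or were taken at different dates.
    ```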

    NVIDIA’s soaring stock value is directly tied to its continued revenue growth, with significant year-over-year increases in data centre revenue. Crucially, more than 42% of NVIDIA’s revenue originates from just five of the Magnificent Seven companies (Microsoft, Amazon, Meta, Alphabet, and Tesla) continually buying more GPUs. This creates a “feedback loop” where hyperscalers invest massively in AI infrastructure, driven by the perceived necessity to lead, which in turn fuels NVIDIA’s revenue and stock price, reinforcing the “AI boom” narrative. The concern is not NVIDIA’s existence but that a “deceleration in its growth or a shift in hyperscaler purchasing patterns” could trigger a significant market re-pricing.

  • The Profitability Paradox: Massive Investment, Minimal Return. Despite “colossal capital expenditures” by major tech companies, their AI initiatives yield “minimal to no profit”. The Magnificent Seven collectively planned to spend an “insane” sum of over half a trillion dollars ($560 billion) between 2024 and 2025 on CapEx, overwhelmingly directed towards generative AI.

    However, the reported “AI revenue” often appears to be:

    • At-cost internal transfers. For example, a significant portion of Microsoft’s reported $13 billion annualised AI revenue comes from OpenAI’s spending on Azure cloud at “heavily discounted, near-cost rates”.
    • Inflated by non-AI components. Google’s estimated $7.7 billion in AI revenue likely includes non-AI components, such as subscriptions bundled with cloud storage.
    • General cloud growth attributed to AI, rather than direct AI product revenue.
    • Annualised projections from deeply unprofitable operations.

    Individual company examples include:

    • Microsoft generated about $3 billion in “real” AI revenue (excluding OpenAI’s at-cost spend) in 2025, against $80 billion in CapEx.
    • Amazon is estimated to make only $5 billion in AI revenue in 2025 on $105 billion in CapEx.
    • Meta is “simply burning cash” on generative AI, with no clear product monetisation, despite expectations of $2-3 billion in GenAI-driven revenue in 2025. Most of its revenue (99%) still comes from advertising, with AI serving as an embedded feature.
    • Tesla does not appear to generate direct revenue from generative AI. Elon Musk’s separate AI company, xAI, reportedly burns $1 billion per month while generating only $100 million in annualised revenue.
    • Apple has taken an “asset-light approach to AI” with Apple Intelligence, which is dismissed as “ineffective” and not a major revenue driver, despite $11 billion in CapEx.

    This collectively suggests a “distinct lack of clear, profitable, and substantial direct AI revenue streams” for the Magnificent Seven.
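
    Tabulating the figures above makes the mismatch concrete (a minimal sketch using the report’s estimates; Meta’s CapEx isn’t broken out above, so it is skipped):

    ```python
    # AI revenue vs. CapEx for 2025, using the figures quoted above
    # (all numbers are the report's estimates, in billions of USD).
    figures = {
        "Microsoft": {"ai_revenue": 3,   "capex": 80},
        "Amazon":    {"ai_revenue": 5,   "capex": 105},
        "Meta":      {"ai_revenue": 2.5, "capex": None},  # CapEx not broken out above
        "Apple":     {"ai_revenue": 0,   "capex": 11},    # "not a major revenue driver"
    }

    for company, f in figures.items():
        if f["capex"]:  # skip entries without a quoted CapEx figure
            ratio = f["ai_revenue"] / f["capex"]
            print(f"{company}: ${f['ai_revenue']}B AI revenue on ${f['capex']}B CapEx ({ratio:.0%})")
    # Microsoft ~4%, Amazon ~5%, Apple 0%: single-digit returns on spend.
    ```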

  • Leading AI Startups are Deeply Unprofitable. The financial models of key AI startups like OpenAI and Anthropic further highlight the paradox, as both companies “lose billions of dollars a year”. For instance, OpenAI projected $12.7 billion in revenue for 2025 but reported an approximate $5 billion loss on $3.7 billion in revenue in 2024, with expenses including $3 billion for model training and $2 billion for running models. Anthropic anticipates a cash burn of $3 billion for 2025 despite reaching $4 billion in annualised revenue. These figures largely corroborate claims of substantial losses and reliance on continuous capital infusion.
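
    The 2024 OpenAI figures above imply a cost structure that can be worked out directly (a minimal sketch; all inputs are the report’s estimates):

    ```python
    # Implied 2024 cost structure for OpenAI from the figures quoted above
    # (all numbers are the report's estimates, in billions of USD).
    revenue = 3.7
    loss = 5.0
    training = 3.0
    inference = 2.0

    total_costs = revenue + loss  # costs must exceed revenue by the size of the loss
    other_costs = total_costs - training - inference
    print(f"Implied total costs: ${total_costs:.1f}B")                  # ~$8.7B
    print(f"Compute (training + inference): ${training + inference:.1f}B")  # $5.0B
    print(f"Everything else (staff, sales, etc.): ~${other_costs:.1f}B")    # ~$3.7B
    ```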

    The use of “annualised revenue” (ARR) is criticised as misleading. While the metric is standard in SaaS, it obscures actual profitability and volatility, especially given high churn risk (see the sketch after this list). Companies like Cursor, Perplexity, and Glean illustrate these challenges:

    • Cursor’s rapid growth to $500 million ARR was a “mirage,” achieved by “selling a product at a massive loss”. This led to “opaque terms of service” and “dramatically restricting access” for users, causing significant backlash.
    • Perplexity, a consumer AI company, lost $68 million on $34 million revenue in 2024, spending 167% of its revenue on compute services.
    • Glean, an enterprise search company, reached $100 million ARR but seemed to show stagnant growth in subsequent months, suggesting a “continued need for cash” and raising questions about underlying profitability.

    This dynamic represents a “Subprime AI Crisis,” where companies provide services at a loss, then raise prices or introduce “wildly onerous rates”.
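
    A minimal sketch of why an annualised figure can flatter a business whose growth stalls (the monthly numbers here are hypothetical, purely for illustration):

    ```python
    # "Annualised revenue" is usually just a recent month's revenue multiplied
    # by 12, so one strong (or subsidised) month annualises into a headline
    # figure that churn or stalled growth can quickly erase.
    # Monthly revenue figures below are hypothetical, for illustration only.
    monthly_revenue = [20, 25, 32, 41, 41, 40]  # $M; growth stalls after month 4

    arr_at_peak = monthly_revenue[3] * 12    # the figure quoted at the growth peak
    actual_half_year = sum(monthly_revenue)  # what was actually earned

    print(f"Headline ARR at peak: ${arr_at_peak}M")           # $492M
    print(f"Actual six-month revenue: ${actual_half_year}M")  # $199M
    ```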

  • Generative AI is a Feature, Not Infrastructure. A core argument is that generative AI is “not infrastructure” and fundamentally differs from the developmental path of Amazon Web Services (AWS). AWS emerged from Amazon’s own necessity to manage massive web traffic and deliver software, eventually offering its surplus capacity as a service to others, meeting a “proven external market for this utility”. AWS became reliably profitable, driven by clear demand.

    In contrast, generative AI appears to be a “supply-driven model,” where powerful models are developed, and then companies actively seek compelling use cases. It “feels more like a feature of cloud infrastructure rather than the infrastructure itself”. Its use cases are generally limited to tasks like chatbots, summarisation, content generation, and coding assistance. This positioning, coupled with the inherent similarity of core LLM capabilities across models, leads to “rapid commoditization,” making it “exceedingly difficult to build a sustainable, profitable business”. It’s “near impossible” to build a “moat on top of LLMs,” as the valuable intellectual property remains with the model developers.

  • The “Agent” Fallacy and Misleading Capabilities. The term “AI agent” is described as one of the “most egregious acts of fraud”, as companies often market advanced chatbots as autonomous agents capable of replacing human jobs. Salesforce’s Agentforce, for instance, is labelled a “goddamn chatbot program”.

    • Current “agents” are largely advanced chatbots with “limited autonomy and inconsistent performance on complex tasks”. Studies show current LLM agents achieve only modest success rates, typically around 58% in single-step tasks, degrading to approximately 35% in multi-step settings, consistent with failures compounding across steps (see the sketch after this list).
    • OpenAI’s own demo of a ChatGPT agent for planning a wedding or a baseball itinerary took 21-23 minutes and produced confusing results, even in a pre-prepared demonstration.
    • Terms like “AGI” (Artificial General Intelligence) and “singularity” are criticised as manipulative attempts to suggest LLMs can create conscious intelligence. Even Meta’s chief AI scientist believes AGI won’t result from merely scaling up LLMs.
    • Stories about AI models “lying, cheating, and stealing” are often intentionally deceptive, implying autonomy when models are likely prompted to take these actions. This consistent use of inflated terminology contributes to an “expectation-reality gap” that fuels market hype.
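
    If each step of a multi-step task succeeds independently at the single-step rate quoted above, success decays geometrically with task length, which lines up with the multi-step figure (a minimal sketch; independence is an assumption, since real failure modes are correlated):

    ```python
    # If each step of an agent task succeeds independently with probability p,
    # an n-step task succeeds with probability p**n. Independence is assumed
    # here purely for illustration.
    p = 0.58  # single-step success rate quoted above

    for n in [1, 2, 3, 5, 10]:
        print(f"{n}-step task: {p**n:.1%} success")
    # 1 -> 58.0%, 2 -> 33.6% (close to the ~35% multi-step figure quoted),
    # 3 -> 19.5%, 5 -> 6.6%, 10 -> 0.4%
    ```
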
  • Dependency on Unproven Entities. The AI boom relies heavily on companies with limited experience or unstable financial footing. OpenAI’s future expansion is heavily dependent on partners like CoreWeave and Crusoe, neither of which appears to have built a single AI data centre before.

    • CoreWeave’s expansion is “entirely driven by OpenAI,” and its financial health hinges on OpenAI fulfilling its massive $12 billion, five-year contract. CoreWeave’s debt payments could “balloon to over $2.4 billion a year by the end of 2025,” far outstripping its cash reserves (see the comparison after this list).
    • Crusoe, a former cryptocurrency mining company, is tasked with building 1.2 gigawatts of data centre capacity for OpenAI at the Stargate project, despite no prior experience in AI data centres. The Stargate project itself is reportedly behind schedule.
    • Core Scientific, CoreWeave’s data centre developer, was bankrupt last year and has no experience building AI data centres; its operations are based on Bitcoin mining infrastructure that needs to be “bulldozed” for AI compute.
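
    A minimal sketch comparing CoreWeave’s quoted contract revenue to its quoted debt service (both figures are from the reporting above):

    ```python
    # CoreWeave: quoted OpenAI contract revenue vs. quoted debt service
    # (both figures are the report's estimates, in billions of USD).
    openai_contract_total = 12.0
    contract_years = 5
    annual_contract_revenue = openai_contract_total / contract_years  # $2.4B/yr

    annual_debt_service = 2.4  # "over $2.4 billion a year by the end of 2025"

    print(f"Annual OpenAI contract revenue: ${annual_contract_revenue:.1f}B")
    print(f"Annual debt service:            ${annual_debt_service:.1f}B")
    # On these figures alone, the entire OpenAI contract would be consumed
    # by debt payments, before any operating costs.
    ```
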
  • SoftBank’s Strain and Funding Challenges. SoftBank’s immense financial commitments to OpenAI and the Stargate data centre project (estimated to be between $52 billion and $62 billion) are putting it in “dire straits”.

    • SoftBank had to borrow all of the initial $10 billion for OpenAI’s $40 billion funding round.
    • Its financial condition will “likely deteriorate” due to the OpenAI investment, potentially leading to a credit rating downgrade.
    • OpenAI needs to convert to a for-profit entity by December 2025 or lose $10 billion of its funding, a process considered “extremely difficult and extremely unlikely”.
    • OpenAI’s costs are projected to surpass $320 billion between 2025 and 2030, requiring “at least $40 billion every single year” in funding. It’s unrealistic to expect SoftBank or other benefactors to provide such “infinite resources” indefinitely.
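
    The projected costs above imply a funding run-rate that can be checked directly (a minimal sketch using the report’s figures):

    ```python
    # Run-rate implied by the projections quoted above (report's estimates).
    total_costs_2025_2030 = 320  # $B, projected costs over 2025-2030
    years = 6                    # 2025 through 2030 inclusive

    avg_per_year = total_costs_2025_2030 / years
    print(f"Average cost per year: ${avg_per_year:.0f}B")  # ~$53B
    # Even the "at least $40 billion every single year" framing above is
    # therefore a floor, not the average, before counting any revenue.
    ```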

Rebuttals to Optimists

Zitron directly addresses and dismisses several common arguments made by AI optimists:

  • “Amazon Web Services (AWS) also lost money initially, so AI will too.” This is one of the “most annoying and consistent responses”. The rebuttal is that AWS’s trajectory was fundamentally different:

    • Necessity-driven: AWS was an “outgrowth of Amazon’s own infrastructure,” built out of necessity to support its rapidly expanding e-commerce business. It solved a clear, proven internal need before becoming an external service.
    • Cost-effective Scaling: AWS leveraged “surplus capacity Amazon already owned,” making its initial direct costs “minuscule”. It made an existing practice (running web applications) “better and scaled it”.
    • Clear Demand: There was an established, proven demand for web applications, and AWS made it cheaper and more flexible to run them.
    • Profitability Path: Amazon.com became profitable in 2003, and AWS itself reached break-even by 2009 and consistent profitability by 2015. Its capital expenditures were a “fraction of the cost” of current AI spending.
    • Generative AI is supply-driven: Unlike AWS, generative AI is a “supply-driven model,” with powerful models developed first, and then companies actively seeking profitable use cases. It functions more as a “feature” than a foundational infrastructure.
  • “The cost of inference is coming down.” Zitron asserts there is “no proof of this statement”. While the cost of tokens may be decreasing, this is “not the cost of inference going down”. Larger, more “reasoning-heavy” models like Claude Opus 4 often cost more to run. The price model developers charge is also not equivalent to the true cost of inference. Companies struggle with “massive spikes in costs that come from their power users,” making budgeting difficult.
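
    A minimal sketch of how total inference bills can rise even as per-token prices fall, if “reasoning” models emit many more tokens per request (all numbers hypothetical, purely for illustration):

    ```python
    # Falling per-token prices do not imply falling inference bills if token
    # volume grows faster. Hypothetical numbers for illustration only.
    old_price_per_mtok = 10.0  # $ per million tokens
    new_price_per_mtok = 5.0   # the per-token price halves...

    old_tokens_mtok = 2.0      # ...but a "reasoning" model emits far more
    new_tokens_mtok = 10.0     # tokens (e.g. long chains of thought) per workload

    old_cost = old_price_per_mtok * old_tokens_mtok
    new_cost = new_price_per_mtok * new_tokens_mtok
    print(f"Old cost per workload: ${old_cost:.0f}")  # $20
    print(f"New cost per workload: ${new_cost:.0f}")  # $50: up 2.5x despite cheaper tokens
    ```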

  • “ASICs (Application-Specific Integrated Circuits) will reduce costs.” The feasibility and impact of ASICs are questioned:

    • Timing: It’s unclear when these chips will be ready (e.g., OpenAI and Broadcom aiming for 2026).
    • Production Challenges: Producing high-performance silicon requires booking capacity with a limited number of foundries (Samsung, TSMC) well in advance, and production runs can take weeks. Microsoft, for instance, has reportedly “failed to create a workable reliable ASIC”.
    • Infrastructure & Retrofitting: These chips require “far more powerful cooling and server infrastructure” and would necessitate retrofitting entire data centres. This is a “lot of money” and time.
    • Impact on NVIDIA: Even if successful, it “still fucks up the AI trade because NVIDIA still needs to sell GPUs”.
  • “The government will bail them out or fund them.” Zitron dismisses this as an unrealistic “doomer philosophy”.

    • Insufficient Funds: Government contracts, such as a $200 million Department of Defense award, are simply not enough to “plug” the multi-billion dollar losses of companies like OpenAI. OpenAI needs “like $10 billion of free money every year” to become profitable.
    • Nature of the Bubble: Unlike the 2008 financial crisis where bailouts plugged holes in failed banks, the AI trade is based on the “continued and continually increasing sale and use of GPUs”. There’s “no plugging that hole” if demand for GPUs slows, as companies are “losing money the second they’re installed”.
  • “AI agents will eventually work and replace jobs.” This is heavily criticised as a “blatant fucking lie” and “manipulative attempt to boost stock valuations”.

    • Chatbot Functionality: Agentforce from Salesforce is merely a “chatbot program”.
    • Low Success Rates: Research shows “agents in general only achieve around 58 percent success rate on single step tasks” and a “depressing 35% of the time” for multi-step tasks.
    • Lack of Autonomy: These products are “not autonomous agents” and lack true intelligence. They can make lists or trigger events via APIs, but they don’t “take actual actions”, as LLMs are incapable of doing so.
    • Negative Impact on Productivity: A study found that AI coding tools, despite developer beliefs, actually made engineers 19% slower.
    • Ethical Concerns: The excitement about AI replacing workers is viewed as “gross” and reporters are urged to “review their biases”. The creation of “conscious intelligence” without personhood is likened to creating a “new kind of slave”.
  • “AGI (Artificial General Intelligence) or the singularity is coming.” These terms are seen as “manipulative” and used to “obfuscate the actual abilities of large language models”. The concept of AGI is considered “fictional,” with even Meta’s chief AI scientist stating it won’t come from simply scaling up LLMs. Stories about models “lying, cheating, and stealing” are intentionally deceptive, implying a non-existent autonomy.

  • “Companies are seeing growth from AI.” This is often “hand-waving to avoid telling you how much money these services are actually making them”. If they were making good money, they “wouldn’t shut the fuck up about it”. Much of the reported “AI revenue” is “internal transfers at cost, general cloud growth, or bundled services where AI is a feature rather than the primary revenue generator”.

In essence, the AI bubble is described as an “unsustainable investment,” lacking profitability, built on a “fragile interdependence on a few key players and their hardware purchases,” and fuelled by “speculative narratives” rather than tangible, profitable applications.

submitted by /u/Odballl