‘HaaS’: AI Hype Economy Decay?


‘HaaS’ Badge image, Courtesy of Stephen Klein, 2025.

‘HYPE-AS-A-SERVICE (HaaS)’: The AI Economy’s Circular Conundrum of Profits, Predictions, and Synthetic Perils

Generative AI’s Self-Consuming Paradox

The generative AI revolution is eating itself—quite literally. And it’s doing so with a rather voracious appetite.

While I was scrolling through my LinkedIn feed yesterday, I stumbled upon two rather illuminating posts from Stephen Klein, Co-Founder & CEO of Curiouser.AI and Berkeley instructor. His insights into the current state of the AI economy are, to put it mildly, sobering. Allow me to share what I learned before my morning coffee could properly kick in.

Teaching the Internet to Eat Itself

In the beginning, AI models learned from us humans. They consumed the internet, digesting our collective knowledge without asking for permission first. By some estimates, the industry sidestepped approximately $1.7 billion in licensing costs by simply scraping data instead of properly licensing content.¹

But this free lunch is coming to an end. With lawsuits piling up faster than my unread emails, AI companies are pivoting to a rather peculiar solution: using their own AI to generate training data for newer models.

It’s a form of technological cannibalism that’s frighteningly efficient. It’s fast, scalable, and neatly avoids those pesky legal entanglements. But it opens Pandora’s box to something far more troubling.

The mechanism is elegantly simple in its danger. When AI models train on content generated by earlier models, they enter a recursive loop. No company needs to explicitly ask permission to train on another’s outputs—they just scrape public data. If synthetic AI content appears anywhere online—blogs, forums, code repositories, Substacks, Reddit—it inadvertently becomes training material for the next generation.

Even with the best filtering techniques, detecting synthetic data reliably at scale is like finding a specific needle in a stack of nearly identical needles. Meanwhile, synthetic data is flooding the internet faster than humans can create authentic content:

  • Humans create slowly
  • AI generates millions of new pages daily
  • Competition ironically accelerates this cannibalism

The consequences? A 2023 study found that recursive training leads to irreversible performance decay over successive generations, a phenomenon the authors call “model collapse.”² Models that once reflected the world will soon only reflect themselves: a narrowing mirror that eventually shows nothing at all.
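The recursive loop can be illustrated with a toy simulation (my own sketch, not from Klein’s posts, and a drastic simplification of real model training): “train” a model by fitting a mean and spread to data, “generate” synthetic data from that fit, refit on the samples, and repeat. Each refit loses a little information, and the losses compound.

```python
import random
import statistics

def fit(samples):
    """'Train' a model: estimate the mean and spread of the data."""
    return statistics.mean(samples), statistics.pstdev(samples)

def generate(mean, stdev, n, rng):
    """'Generate' synthetic data by sampling the fitted model."""
    return [rng.gauss(mean, stdev) for _ in range(n)]

rng = random.Random(42)

# Generation 0: authentic "human" data from the true distribution.
data = generate(0.0, 1.0, n=50, rng=rng)

history = []
# Each generation trains only on the previous generation's outputs.
for _ in range(500):
    mean, stdev = fit(data)
    history.append(stdev)
    data = generate(mean, stdev, n=50, rng=rng)

# Each refit loses a little variance on average, and the loss
# compounds: the learned distribution narrows over the generations.
print(f"spread: generation 0 = {history[0]:.3f}, "
      f"final generation = {history[-1]:.2e}")
```

Real model collapse is subtler than a shrinking Gaussian, but the mechanism is the same: estimation error that would wash out against fresh human data instead feeds straight into the next generation.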

The $100 Billion Illusion

Now, let’s follow the money, shall we? By 2026, the GenAI market is projected to surpass $100 billion. Every boardroom wants in, every headline demands urgency.

And yet…

The economics tell a different story. GenAI vendors like OpenAI and Anthropic are hemorrhaging billions annually with no clear path to profitability. OpenAI reportedly lost approximately $5 billion in 2024, with daily operational costs for ChatGPT of around $700,000.³ Meanwhile, Anthropic is burning through $2.7 billion of cash in 2024 while hoping to generate $3.7 billion in revenue.⁴

The Fortune 500 monetization narrative isn’t much better:

  • Google reported strong Q1 earnings in 2025, citing AI, but gains came primarily from enhanced ads, not new products.⁵
  • Microsoft claims a $3.70 ROI for every $1 spent on AI, but the study was commissioned by Microsoft itself, with no named Fortune 500 case studies.⁶
  • Most “productivity gains” remain anecdotal, measured in hours saved rather than dollars earned.

So far, the clearest enterprise result of GenAI appears to be headcount reductions. Not quite the utopian vision we were promised, is it?
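To see why “hours saved” is a slippery metric, here is a back-of-envelope sketch. The seat price, hours saved, and hourly rate below are purely hypothetical assumptions of mine, not figures from the Microsoft-commissioned study:

```python
# Hypothetical break-even sketch for one GenAI seat license.
# All numbers are illustrative assumptions, not vendor data.
seat_cost_per_month = 30.0    # assumed license price, $/user/month
hours_saved_per_month = 4.0   # assumed productivity gain, hours/user
loaded_hourly_rate = 60.0     # assumed fully loaded cost of an hour

value = hours_saved_per_month * loaded_hourly_rate  # $240 of "value"
roi_per_dollar = value / seat_cost_per_month        # $8.00 per $1

print(f"Implied return: ${roi_per_dollar:.2f} per $1 spent")

# The catch the post points at: this "value" is hours saved, not
# dollars earned. It only becomes real money if the freed hours are
# redeployed into revenue-generating work (or headcount is cut).
```

Almost any plausible inputs produce an impressive-looking multiple, which is exactly why hours-saved ROI claims deserve skepticism until someone shows the dollars.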

Sharpening Minds in the genAI* Age

Critical Thinking: The AI-Proof Skill

How Might We ensure our minds stay sharp and discerning when faced with a tsunami of AI-generated content flooding our screens and earbuds every second?

*genAI, or GAI, is a subfield of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data.

Two examples below illustrate the proliferation of genAI and of ‘HYPE-AS-A-SERVICE’ (HaaS).

Read this instructive LinkedIn post on ‘HaaS’ by Stephen Klein (April 2025); I’m thrilled to repost his ‘HaaS’ badge here.


‘HaaS’, example 1:
What’s the use of not-me avatars?

Here are my ‘not-me’ avatars (four automatically generated versions), created while testing the Flux AI model on Krea.ai.


– Huh? Blast, these versions of ‘me’ look rather better than the original!

‘HaaS’, example 2:
What’s the use of genAI video podcasts?

Introducing Yann LeCun’s mathematical critique of AI learning and, more generally, of genAI.

Who’s Actually Profiting?

If you’re looking for the real winners in this economy, look no further than consultants and self-proclaimed experts:

  • Accenture: $900 million in GenAI revenue in 2024, with $3 billion in bookings.⁷
  • McKinsey: No public revenue numbers, but one of the loudest voices proclaiming “AI now or never.”
  • LinkedIn is awash with “GPT-powered” webinars, $2,000 cohort programs, and slide decks promising to future-proof your strategy.

This isn’t innovation—it’s monetized anxiety. Are we using GenAI to solve real problems, or just optimizing slide decks and charging for the privilege?

Teaser Video ‘Dual podcast‘: an open dialogue?

I created this video teaser with HeyGen’s new ‘Dual podcast’ GenAI function.
It is proof, if proof were needed, that socially unintelligent talking robots (however polished the GenAI), even when given texts to learn and recite, end up demonstrating all too evidently the very thing their scripted discourse tries to explain and proclaim: the need for a new, more socially intelligent approach to human-level AI. (Watch the teaser: 4:53 min.)

aiPod 018 – Teaser Video Podcast produced in HeyGen. (4:53min)

Interact with Aiko & Blaise Converse!

Screenshot: NotebookLM > Studio > Audio Overview > Interactive mode (Beta)

Get your personal login link to this Google NotebookLM:
aiPod 018 – AI Personas: Limitations and Divergence from Reality

Email: jerome-bertrand (at) kinokast.eu

MindMap

MindMap, aiPod 018 – AI Personas Limitations and Divergence from Reality

AI Learning Limitations: Why Our Digital Assistants Still Don’t “Get” Us

Curious about the gap between human and artificial intelligence? Our aiPods examine why AI personas like our hosts Aiko and Blaise (personas of my own design) struggle to truly understand social cognition and human reasoning.

Two Learning Approaches, Same Fundamental Questions

Explore our latest episodes that tackle AI’s cognitive limitations:

Both episodes critically examine why traditional AI learning methods—supervised learning, reinforcement learning, and even the much-hyped large language models with their sequential prediction approach—fall short of human cognitive capabilities.

The UX Research Problem

Our discussions reveal how AI-generated personas in UX research present significant accuracy issues. These digital stand-ins simply cannot represent real users authentically, hampered by inherent biases and a fundamental lack of real-world understanding.

Missing: World Models

What’s the missing piece? “World models”—the mental frameworks that help humans grasp common sense and physical dynamics. In aiPod 018, Aiko and Blaise converse about promising alternatives like Energy-Based Models and Joint Embedding Predictive Architectures that might build more robust world models in AI systems.
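The energy-based idea Aiko and Blaise discuss can be sketched in a few lines of Python (a toy illustration of mine, not LeCun’s actual architecture): instead of emitting a single feed-forward prediction, the model scores how compatible each candidate output is with the input and picks the lowest-energy one.

```python
def energy(x, y):
    """Toy energy function: low when y is compatible with x.
    Here 'compatible' just means y is close to 2*x -- a stand-in
    for whatever world model the system has learned."""
    return (y - 2 * x) ** 2

def predict(x, candidates):
    """Inference = search for the minimum-energy candidate,
    rather than a single feed-forward guess."""
    return min(candidates, key=lambda y: energy(x, y))

candidates = [c / 10 for c in range(-100, 101)]  # -10.0 .. 10.0
print(predict(3.0, candidates))  # → 6.0, the lowest-energy candidate
```

In a real Energy-Based Model the energy function is learned and inference is done by optimization rather than enumeration, but the inference-as-search framing, rather than one-shot sequential prediction, is the point.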

The Social Nature of Intelligence

Perhaps most importantly, our hosts explore how human reasoning is inherently social—and what this means for developing truly collaborative AI. They emphasize that ethical considerations and thoughtful constraints aren’t limitations but essential ingredients for beneficial AI development.

The verdict? Despite technological advances, real user data remains irreplaceable in UX research, and new paradigms in AI development—especially for fields like e-learning—must carefully consider both technical approaches and ethical implications.

A Moment of Reflection

I’m cautiously optimistic about GenAI’s long-term potential. The technology itself is genuinely revolutionary. But we need to separate the signal from the noise, the substance from the hype, the human from the machine.

As Klein aptly puts it: “The trick with technology is to avoid spreading darkness at the speed of light.”

Perhaps the most valuable AI won’t be the one that replaces human intelligence, but the one that genuinely augments it—helping us make better decisions rather than making decisions for us.

In the meantime, I’ll continue watching this space with equal parts fascination and skepticism. And maybe, just maybe, I’ll think twice before asking an AI to generate my next blog post.


Footnotes:

¹ Internal cost analysis based on industry licensing benchmarks and token usage; see also:

  • Reddit–Google API deal ($60M/year)
  • Authors Guild lawsuit against OpenAI
  • New York Times lawsuit against OpenAI

² Shumailov et al. (2023) — “The Curse of Recursion: Training on Generated Data Creates Model Collapse.”

³ Business Insider & LessWrong: OpenAI’s reported loss and cost structure.

⁴ The Information: Anthropic cash burn and revenue goals.

⁵ Business Insider: “Alphabet’s Q1 2025 earnings driven by Search and YouTube ad revenue, with nods to AI integration.”

⁶ Microsoft Blog: “IDC white paper commissioned by Microsoft claims $3.70 ROI per $1 invested in GenAI.”

⁷ Reuters: Accenture’s 2024 GenAI revenue and bookings.


About the author

Bonjour! Hello! I’m Jerome Bertrand, a French-born UX and AI designer, educator, and photographer based in the Netherlands.
I am the founder and runner of kinokast.eu.
Through my blogs, I share educational journeys across the design world while offering my two cents of coaching insights on personal development, best design practices, innovative methodologies, and critical thinking. I hope they are worth revisiting regularly.
You can listen to my AI-driven podcasts and join and interact with the aiPods that I research, curate, and organise around carefully selected topics. My Bio