The AI Bubble: When Will It Burst?
Everyone’s using AI to generate content these days – blog posts, videos, podcasts – but honestly, who’s really interested in what the bot has to say? If you’re like me, you’re probably scanning more and reading less, instinctively filtering out anything that feels AI-generated. Even reading this, you might be skimming to decide if it’s worth your time. And that’s the problem with the AI bubble: it’s been fed by novelty and hype, not by meaningful contribution. So, when will this bubble burst?
The allure of AI is understandable. It promises to produce endless content quickly, catering to companies that see journalism and creativity as an obstacle to their ethically dubious dealings, or simply as too expensive. But what’s often missed in this excitement is that AI doesn’t generate new thoughts, ideas, or perspectives. Instead, it’s built on historical data, trained on patterns of human expression, recycling what already exists without ever offering anything original. And because it lacks that creative spark, AI content often comes across as generic or even inaccurate – the so-called “hallucinations” these tools are known for, which might more honestly be called “lies”.
Hallucinations and Why No One Notices
Second only to its environmental toll, the most concerning aspect of AI is its tendency to hallucinate – a soft term for when AI generates factually incorrect or entirely fabricated information. These hallucinations range from subtly misleading details to full-scale errors that can go unchecked. The most troubling part? Hardly anyone notices. As users, we’ve become so conditioned to accept content at face value, especially when it’s presented with an air of authority, that we often fail to question its veracity.
The problem is compounded by the speed and volume of AI-generated content flooding our screens. We are overwhelmed with information, and verifying every claim or detail isn’t just challenging – it’s nearly impossible. So, hallucinations slip through, unchallenged and unnoticed, spreading misinformation under the guise of convenience. The fact that AI models “speak” with such confidence only reinforces the issue; readers are more likely to assume accuracy when information is delivered with certainty, even when it’s blatantly incorrect.
But what happens when more people start paying attention? When the collective awareness of these hallucinations grows, the public may begin to question the value of AI content more critically. The trust propped up by novelty and blind faith in technology could start to erode, revealing the AI bubble for what it truly is: a fragile construct built on recycled data and half-truths. This realisation could be the tipping point. The more people understand that much of what AI generates can’t be trusted without human oversight, the closer we come to the moment when the AI bubble finally bursts.
The Content Farm Crisis
For content farms, AI is a dream come true. Where they once had to hire writers, AI now lets them churn out massive quantities of text, much of which ends up as low-quality filler. It’s like a firehose of mediocrity aimed at the internet, creating digital noise rather than value. This becomes particularly dangerous with emerging AI podcast technology, where AI “conversationalises” articles, potentially distorting the original author’s intent. So not only are these tools skimming the surface of real content, they’re also muddling it for listeners, giving rise to a sort of AI-filtered distortion of reality.
And the risks go further. Combined with deepfake technology, this “AI content” can be weaponised to mislead and scam, blending fabricated audio and video into believable but false narratives. These tools are perfect for anyone wanting to cut costs or spread misinformation – and AI makes it easier than ever to do just that.
The Environmental Toll of AI
For all its supposed efficiencies, AI is anything but efficient. If AI were a country, it would rank as the fifth-largest consumer of energy in the world. Training large language models demands an astronomical amount of electricity and water, putting massive stress on our environment. Ironically, big tech companies – the ones that once boasted about their sustainability – have recently started backsliding on those goals as they invest more in fossil fuels to meet AI’s voracious demands. AI has even breathed life back into the fossil fuel industry, prompting companies to pump more money into coal and gas to power their ever-expanding server farms.
Read more on the environmental toll of AI here: Can AI Really Help the Environment Enough to Outweigh Its Own Impact?
The AI Obsession: When Will We Snap Out of It?
So why do we remain so captivated by AI? Part of it is the novelty factor, the allure of the “cutting edge”. Much of it, though, is marketing hype promoting AI as a futuristic miracle that will save us all. But what if, in reality, we’re just producing a vast ocean of unoriginal content that nobody really wants or needs? There is potential for AI to be a force for good, but that’s rarely its focus right now. Instead, we’re caught in a loop, churning out increasingly meaningless content and consuming vast resources to do it.
The AI bubble won’t last forever. As people grow tired of low-quality, repetitive, and often misleading content, the reality of AI’s limitations will become harder to ignore. Just as with every tech trend that overpromises, there will come a tipping point – and for AI, that burst is long overdue. When it happens, there’s hope that the innovative, creative, and thought-provoking elements of human ingenuity – the parts that AI can never replicate – will continue to thrive. May we rediscover the value of authentic, human-driven expression and realise that it’s the human element of creation that truly makes it meaningful.