The Environmental Cost of Large Language Models: Is It Worth It?

The development of large language models (LLMs) has undoubtedly changed how we process and interact with information, but this progress comes at an immense environmental cost. From carbon emissions to water usage, the question lingers: are the benefits of LLMs worth their environmental burden? When we weigh their limitations against their resource intensity, the balance is hard to justify.


The Cost of Training LLMs

Training LLMs is a resource-intensive process. These models require vast datasets and immense computational power to achieve their capabilities. Each training run involves thousands of high-powered servers operating for weeks or months on end, consuming enormous amounts of electricity. The result? A substantial carbon footprint, which some estimates have compared to the annual emissions of small nations.

Cooling these server farms further compounds the issue. To prevent overheating, data centres consume staggering quantities of water—millions of litres annually—diverting resources from local communities and ecosystems. This environmental toll escalates as companies race to refine their models for better accuracy.

More on AI and water usage to come.


Recycling Data, Not Creating Knowledge

Despite their vast training efforts, LLMs don’t generate new knowledge. Instead, they recycle existing data, pulling from what’s already been written or said. Their responses, while sometimes insightful, are constrained by the limitations of their training data.

This recycling effect isn’t without consequence. The more these models are trained, the better they become at mimicking human conversation and thought—but they remain fundamentally flawed. Hallucinations (when an LLM produces false or nonsensical information) and inaccuracies are a persistent issue. No amount of additional training can eliminate these errors because the long tail of possible prompts is infinite.


The Economic Driver Behind Rapid Advancement

The rapid development of LLMs is driven by profit rather than necessity. Tech companies have discovered that rehashing existing content with AI can be immensely lucrative. From automated content creation to advanced customer support tools, the applications of LLMs generate billions in revenue.

This profitability creates a cycle: demand for better models fuels more frequent training, which in turn exacerbates the environmental cost. While companies publicly commit to sustainability, many are quietly ramping up their reliance on fossil fuels to meet the energy demands of LLM training.


Broken Sustainability Promises

Tech giants once championed sustainability as a core value, but recent actions suggest otherwise. Reports indicate that some companies are backtracking on renewable energy commitments, turning to fossil fuels such as coal and natural gas to power their growing AI operations.

These practices clash with the promises of a greener future. The contradiction is stark: the very companies that pledged to combat climate change are now contributing significantly to its acceleration.


The Cycle of Consumption

LLMs don’t just recycle content—they enable a broader cycle of consumption. By repackaging existing knowledge, these models feed our insatiable appetite for instant answers. This convenience comes at a cost: the more we rely on AI-generated content, the less incentive there is to create genuinely new knowledge.

This pattern reinforces itself. As LLMs churn out recycled content, the demand for them grows, driving further investment, training, and environmental strain. Meanwhile, the emphasis on profitability ensures that these models are optimised for maximum return rather than minimal impact.


Large language models hold promise, but their environmental cost demands a critical re-evaluation of their role in our society. Are they tools of genuine progress, or are they merely profit-driven engines accelerating environmental harm? Until the industry can balance innovation with sustainability, the true value of LLMs will remain in question.
