A recent Ecohustler article on energy-efficient AI focused, sensibly and practically, on a tactical move: smarter infrastructure. Decentralised, edge-based approaches reduce long-distance data movement and avoid hyperscale over-provisioning, cutting energy use dramatically while remaining commercially viable. That’s an important proof point for environmentalists: sustainability and competitiveness don’t have to be in conflict.
Another way to optimise infrastructure tactically is to locate data centres in colder climates. This reduces the need for energy-intensive and water-intensive cooling. In colder climates there are also opportunities for heat recovery and reuse (such as using data centre heat to supply nearby district heating networks) and often better access to renewable energy.
However, current infrastructure, even optimised and with recovery where possible, still pumps large amounts of residual heat into the atmosphere. For a better understanding of AI’s environmental impact, we need to look both deeper (into the physics of computation) and wider (into how AI reshapes human productivity and energy use across society). Then we need to consider the human systems of which AI is already an integral part.
Looking deeper: the physics of green AI
Most current approaches to green AI are based on an assumption that computation inevitably consumes energy, so the goal is to minimise how much is used. But at a fundamental level this assumption simply isn’t true.
Almost all modern software is designed to rely on clearing memory, overwriting data, and discarding temporary results to stay simple and fast. As a result, current microprocessors constantly erase information. This produces heat, so most real-world computation today continually wastes energy.
This isn’t a design flaw - it’s a historical choice, rooted in decades of engineering for speed and simplicity rather than efficiency and sustainability. Twentieth-century computing design philosophy assumed that information loss is cheap - an assumption that no longer holds in an energy-constrained, climate-aware world. Now, as AI workloads scale, that hidden thermodynamic cost is coming ever more into focus.
Emerging work in reversible computing challenges this assumption. The idea is simple but startling: if computations are designed so that every step can be reversed, no information is destroyed - and it is the erasure of information that carries an unavoidable thermodynamic cost. In principle, the energy used in a reversible computation can be recovered rather than dissipated as heat. Subject to practical limitations, this approach can almost eliminate energy wastage.
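The thermodynamic floor here is Landauer’s principle: erasing one bit of information must dissipate at least kT ln 2 of energy as heat. A quick back-of-envelope calculation (a sketch; the room-temperature value and the CMOS figure are illustrative assumptions) shows both how tiny that floor is and how far above it today’s hardware operates:

```python
import math

# Landauer's principle: erasing one bit dissipates at least k*T*ln(2) joules.
k_B = 1.380649e-23   # Boltzmann constant, J/K (exact under the 2019 SI definition)
T = 300.0            # room temperature in kelvin (an assumption for illustration)

landauer_limit = k_B * T * math.log(2)   # minimum energy per bit erased
print(f"Landauer limit at {T:.0f} K: {landauer_limit:.2e} J per bit")

# Real hardware dissipates vastly more than this per logical operation.
# ~1e-15 J per bit operation is a rough illustrative order of magnitude
# for modern CMOS, not a measured figure.
cmos_energy_per_bit = 1e-15
print(f"Headroom vs today's hardware: ~{cmos_energy_per_bit / landauer_limit:.0e}x")
```

At roughly 3 × 10⁻²¹ joules per bit, the floor sits many orders of magnitude below what conventional chips dissipate - which is why avoiding erasure altogether, rather than shaving percentages, is the interesting move.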
Reversible computing techniques for conventional hardware are beginning to move into practice, and quantum computing is inherently reversible at the level of its core operations. It’s early days, and these approaches won’t replace today’s hardware overnight. However, the pressure to reduce wasted heat from AI and other computing at scale will eventually drive adoption. In future, the energy cost of computation could fall to near zero - not by incremental percentages from optimisation, but because computing itself will stop turning energy into waste heat.
There are other costs associated with computing, of course, such as construction, transport, and maintenance of equipment. However, there’s another side to the energy equation altogether - one that’s often overlooked.
Looking wider: AI, productivity, and the energy cost of human time
In a nutshell, AI is (subject to concerns discussed below) better software than we have had to date. When we talk about its energy use, we usually focus on electricity consumed by data centres. But that’s only part of the picture - quite possibly a small part.
Human labour is highly energy-intensive. People working for hours or days to analyse data, draft reports, check compliance, or coordinate decisions consume energy indirectly through food, transport, buildings, heating, and healthcare. There may also be a cost that is harder to see - the impact of delayed solutions on the people and ecosystems affected by a problem, such as resources consumed by patients waiting for an operation or vehicles stuck in traffic. When AI resolves such problems in seconds, the direct compute energy can look large – but the total system energy may be far lower.
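The point can be made concrete with a toy accounting model. Every number below is an illustrative assumption, not a measurement - what matters is the structure of the comparison: direct compute energy plus the indirect energy of human time.

```python
# Back-of-envelope model of "total system energy" for a task.
# ALL figures below are illustrative assumptions, not measurements.

def total_energy_kwh(direct_compute_kwh: float,
                     human_hours: float,
                     human_overhead_kw: float) -> float:
    """Direct compute energy plus the indirect energy of human time
    (office space, commuting, equipment), modelled as a flat rate."""
    return direct_compute_kwh + human_hours * human_overhead_kw

# Assumed figures for one report-drafting task:
human_overhead_kw = 1.0   # assumed kW of indirect energy per working hour
manual = total_energy_kwh(direct_compute_kwh=0.1,    # laptop use, assumed
                          human_hours=8.0,
                          human_overhead_kw=human_overhead_kw)
assisted = total_energy_kwh(direct_compute_kwh=1.5,  # AI inference, assumed
                            human_hours=1.0,         # review time, assumed
                            human_overhead_kw=human_overhead_kw)

print(f"Manual:      {manual:.1f} kWh")    # 0.1 + 8.0*1.0 = 8.1 kWh
print(f"AI-assisted: {assisted:.1f} kWh")  # 1.5 + 1.0*1.0 = 2.5 kWh
```

Under these assumed figures the AI-assisted path uses less total energy despite far higher direct compute. Flip the assumptions - trivial queries, no human time actually saved - and the comparison reverses, which is exactly the overuse risk.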
This reframes the debate. The real sustainability question isn’t “does AI use energy?”, but:
Does AI reduce the total energy required to achieve meaningful outcomes?
The answer is not at all clear. AI’s speed and ease can encourage overuse, trivialisation, and runaway demand. Efficiency gains can be eaten up by increased consumption (the rebound effect) if systems aren’t designed thoughtfully. But when AI genuinely replaces repetitive cognitive labour, the potential energy savings at a societal level are enormous.
The real step change here is not about replacing humans, but about making better use of the most energy-efficient machine ever created: the human brain.
The human brain runs on about 20 watts - roughly the same as a dim light bulb. Yet with that tiny power budget it sees, learns, imagines, decides, and creates meaning in a way no current computer system can match at comparable energy cost. That incredible efficiency is the result of how the brain is organised, how it evolved, and what it doesn’t waste energy on.
- The brain evolved under extreme energy constraints. For most of human history, calories were scarce. Any organ that wasted energy simply didn’t survive. So the brain evolved to do just enough computation to stay alive and adapt, not to calculate everything precisely or exhaustively. It prioritises what matters, ignores what doesn’t, and fills in gaps rather than analysing every possibility. In sustainability terms, the brain is a frugal, demand-led system, not a brute-force one.
- It computes only when it needs to. Unlike computers, the brain is not clocked at a fixed speed. Neurons fire only when something meaningful happens. If there’s no new information, large parts of the brain stay quiet. This is called event-driven processing, and it saves huge amounts of energy. Modern computers, by contrast, burn power continuously just to stay ready.
- Memory and processing happen in the same place. In today’s computers, data is constantly shuttled back and forth between memory and processors – an energy-hungry process. In the brain, memory and processing are inseparable. Each neuron both stores information and processes it. That local, distributed design dramatically reduces energy spent on moving data around. It’s the difference between carrying water across a city and using it where it falls.
- The brain accepts imperfection. The brain doesn’t aim for perfect accuracy. It aims for good-enough decisions made quickly. It works with probabilities, heuristics, and approximations. It tolerates noise. It revises its beliefs over time instead of recalculating everything from scratch. From an energy perspective, this is incredibly efficient. Precision is expensive. Wisdom often isn’t.
- It learns by changing itself, not by re-running everything. When the brain learns, it subtly adjusts connections between neurons. It doesn’t need to replay vast datasets or retrain itself from the ground up. That means learning is incremental and adaptive, not energy-intensive and repetitive – unlike many current AI systems, which require enormous retraining cycles.
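The event-driven principle above can be sketched in a few lines. A clocked system pays an energy cost on every tick whether or not anything happened; an event-driven one pays only when an input arrives. The costs are arbitrary units chosen purely for illustration:

```python
# Compare a clocked loop with an event-driven loop on the same input
# stream. Energy costs are arbitrary units, purely for illustration.

inputs = [None] * 95 + ["signal"] * 5   # 5 meaningful events in 100 ticks

COST_PER_CHECK = 1.0     # clocked system: pays this on every tick
COST_PER_EVENT = 5.0     # both systems: pays this to process an event

# Clocked: wakes up on every tick, processes an event when one is present.
clocked = sum(COST_PER_CHECK + (COST_PER_EVENT if x else 0.0) for x in inputs)

# Event-driven: sleeps until an event arrives, then processes it.
event_driven = sum(COST_PER_EVENT for x in inputs if x)

print(f"Clocked:      {clocked:.0f} units")       # 100*1 + 5*5 = 125
print(f"Event-driven: {event_driven:.0f} units")  # 5*5 = 25
```

The gap widens as meaningful events become rarer - which is most of the time, for most sensory input, and is why neuromorphic hardware pursues the same trick.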
The lesson from the human brain isn’t that we should try to copy it neuron by neuron. This approach was a key driver of today’s energy-inefficient AI systems, which implemented neural nets in ways that ignore the brain’s energy constraints. The true lesson is more subtle. Energy-efficient intelligence doesn’t try to do everything. Rather, it focuses, filters, and adapts.
If AI is to support a sustainable future, it needs to work with human intelligence, not compete with it. The real gains come from using machines to remove friction – searching, checking, integrating – so humans can apply judgement, creativity, and care. In climate terms, the opportunity is to amplify the most efficient intelligence we already have, instead of sidelining it. AI’s sustainability value lies in making better use of human judgement, creativity, and decision-making – not in trying (probably vainly) to automate everything humans can do.
Why trust is a sustainability issue
Taking this holistic stance helps us arrive at a critical realisation: untrustworthy AI is energy-inefficient AI.
When AI systems produce outputs that can’t be verified, explained, or audited, organisations compensate by adding layers of human checking, duplication, and risk management. That extra effort consumes time, energy, and resources – wiping out many of the efficiency gains AI promised in the first place. For AI to deliver net sustainability benefits, it must be fully trustworthy:
- Grounded in reliable, well-understood data
- Generated through transparent, explainable processes
- Delivered with clear provenance so results can be easily checked
Emerging AI platforms such as Dedoctive AgenticFlow are designed around this principle: inputs drawn only from properly validated materials, reasoning that demonstrably follows best practice, and outputs that can be verified and audited. This isn’t just about ethics or compliance; it’s about avoiding waste – cognitive, organisational, and environmental. Trustworthy AI reduces the need for rework, dispute, and correction.
In sustainability terms, reducing server power draw is important - but working towards better AI, and using it better, may well be orders of magnitude more important.
Conclusion: A holistic picture of sustainable AI
For an environmentally concerned audience, the takeaway is this: energy-efficient AI isn’t a single solution, but a layered transition.
- Smarter infrastructure and edge computing are a short-term tactical manoeuvre to reduce waste today
- The emerging practice of reversible computing will in due course resolve deeper thermodynamic waste issues
- The remaining energy costs are best understood at a system level - especially when adoption of trustworthy AI enables human-centred workflows that dispense with the hidden inefficiencies arising from today’s unreliable AI responses
Seen this way, AI’s environmental impact isn’t primarily a technical infrastructure challenge. It’s a design, governance, and societal opportunity - or if you prefer, responsibility. If we get this right, AI could help us do more with less at a civilisational scale. If we get it wrong, it risks accelerating consumption without delivering real value.
The choice isn’t about whether to use AI – it’s about how thoughtfully we integrate it into human systems that already carry enormous energy and environmental weight.
What AI means for climate action
- Efficiency isn’t just about cleaner power. Decarbonising electricity matters, but the biggest long-term gains come from reducing waste – in data movement, idle infrastructure, and even the physics of computation itself.
- AI can reduce total energy use – if it replaces real work. When AI meaningfully cuts human time spent on repetitive cognitive tasks, it can lower overall energy consumption across transport, buildings, and organisational overhead. Used carelessly, it does the opposite.
- Trustworthy AI is climate-positive AI. Systems that produce verifiable, auditable outputs reduce the hidden energy costs of rework, checking, and risk management. Transparency is not just ethical – it’s efficient.
- The next gains come from systems thinking. Edge computing, reversible computing, and human-centred AI design all point in the same direction: doing more with less by redesigning how intelligence is produced and used.
- Climate impact depends on choices, not just technology. AI will amplify whatever incentives we build around it. Aligning it with climate goals requires governance, trust, and restraint as much as innovation.