
More than 200 laws have been developed to regulate AI in more than 100 countries. Many of them focus on issues such as privacy, bias, disinformation, security and cybersecurity rather than the environmental consequences of AI.
AI is an energy-intensive and thirsty industry, responsible for substantial greenhouse gas emissions, pollution and loss of nature. These impacts arise partly from the manufacture and use of the energy-, carbon- and water-intensive computer chips, called graphics processing units (GPUs), used to train AI models, as well as from growing volumes of e-waste.
My research into the regulatory responses to AI in the EU and the UK highlights how laws often ignore the environmental implications of this big tech. The lack of stringent obligation in AI law and policy is concerning.
There are environmental consequences at every stage of the AI lifecycle, from the manufacture of AI hardware and the training of AI models, through deployment and use, right up to the disposal of AI hardware.
The manufacture of components relies on the extraction of rare earth elements. This can contaminate soil and water, pollute the air and lead to loss of nature and forest habitats. Training AI models is incredibly energy- and water-intensive. A team of researchers estimated in 2025 that training GPT-3 – a large language model released by OpenAI in 2020 – consumed around 700,000 litres of freshwater for electricity generation and for cooling data centres.
Even though AI models are becoming more energy efficient, as models become larger and AI proliferates, overall energy consumption and associated emissions are rising. And the energy consumed in the use of AI, including to generate text or images, vastly outweighs that used during training.
However, it’s difficult to accurately measure the environmental effects of AI, partly due to the lack of transparency of technology companies.
When the EU’s AI Act came into force on August 1 2024, it was the “world’s first comprehensive law” on AI. The AI Act acknowledges some of AI’s environmental consequences. It also requires that “AI systems are developed and used in a sustainable and environmentally friendly manner”.
It outlines that AI providers must disclose information on “known or estimated energy consumption data of the model”. While promising, this information only needs to be provided when requested by the AI Office, which has been established within the European Commission.

Further measures include preparing codes of conduct to assess and minimise “the impact of AI systems on environmental sustainability”. But these codes are voluntary. Overall, the AI Act is intentionally anthropocentric. It states: “AI should be a human-centric technology. It should serve as a tool for people, with the ultimate aim of increasing human wellbeing.”
The UK has no AI-specific legislation. AI is currently only regulated by existing laws. The UK government’s 2023 white paper on AI regulation, which proposes a regulatory framework for AI, doesn’t prioritise sustainability at all. Although the white paper acknowledges that AI can contribute to technologies to respond to climate change, it does not specifically address any environmental risks:
The proposed regulatory framework does not seek to address all of the wider societal and global challenges that may relate to the development or use of AI. This includes issues relating to … sustainability. These are important issues to consider … but they are outside of the scope of our proposals for a new overarching framework for AI regulation.
A transparent future?
More transparency starts with AI developers having to disclose how much energy and water is consumed, how much carbon is emitted, which rare earth elements are extracted and how much plastic is used during the AI production process.
This data would provide a baseline from which appropriate targets and limits could be set for energy efficiency, carbon emissions and water use to improve the sustainability of AI.
Several proposals have been made for how reduced carbon emissions and water consumption could practically be achieved, such as training AI models on less carbon-intensive energy grids or in less water-intensive data centres.
Warnings about environmental effects could tell consumers how much carbon dioxide is emitted or water consumed for each query. In addition, an AI labelling system could mirror the EU’s existing energy efficiency labelling schemes, which clearly indicate the energy efficiency of appliances, ranking them from most energy-efficient (dark green) to least energy-efficient (red).
Proposals include an AI “energy star” rating system and a social and environmental certification system. This would help consumers to make informed choices about which AI systems to use or whether AI should be used at all. Tax incentives and funding incentives could also encourage tech firms to make more sustainable choices.
By integrating sustainability into AI laws through these types of measures, the planet can be better safeguarded alongside AI’s rapid expansion.
Louise Du Toit does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


