
Generative AI could ‘supercharge’ climate disinformation, report warns

The Climate Action Against Disinformation coalition (CAAD) has warned that AI could turbocharge climate disinformation. (Tom Goode/PA)

Artificial intelligence (AI) has the potential to “supercharge” climate disinformation, a report has warned.

The Climate Action Against Disinformation coalition (CAAD), which includes groups like Friends of the Earth, InfluenceMap and the Centre for Countering Digital Hate, published a report on Thursday that maps the risk AI poses in the climate crisis.

The research found that generative AI could escalate disinformation online, including climate-related deepfakes, during an election year when countries like the UK and the US head to the polls with environmental policies at the centre of some campaigns.

The report said unregulated AI “will allow climate deniers to more easily, cheaply and rapidly develop persuasive false content and spread it across social media, targeted advertising and search engines”.

The researchers cited the 2024 World Economic Forum in Davos, which identified AI-generated misinformation and disinformation as the world’s greatest threat, followed by climate change.

They also highlighted recent disinformation campaigns, such as those falsely blaming wind power for whale deaths in New Jersey and power outages in Texas.

“AI will only continue this trend as more tailored content is produced and AI algorithms amplify it,” the report said.

CAAD also found a lack of regulation in the US and Europe, where policymakers rely on voluntary and opaque pledges from AI companies to pause development or provide safety measures.

Researchers also highlighted the enormous amounts of water and energy that AI systems consume, as well as the speed at which that consumption is growing.

They cited the International Energy Agency’s estimates that energy use from data centres that power AI will double in the next two years, consuming as much energy as Japan.

Oliver Hayes, head of policy and campaigns at Global Action Plan, said: “The climate emergency cannot be confronted while online public and political discourse is polluted by fear, hate, confusion and conspiracy.

“AI is supercharging these problems, making misinformation cheaper and easier to produce and share than ever before.

“In a year when two billion people are heading to the polls, this represents an existential threat to climate action.”

Charlie Cray, senior strategist at Greenpeace USA, said: “The skyrocketing use of electricity and water, combined with its ability to rapidly spread disinformation, makes AI one of the greatest emerging climate threat multipliers.

“Governments and companies must stop pretending that increasing equipment efficiencies and directing AI tools towards weather disaster responses are enough to mitigate AI’s contribution to the climate emergency.”

CAAD said it is calling on politicians to incorporate climate concerns into proposed AI legislation.

Recommendations in the report included requiring AI companies to publicly report their energy usage and emissions, to publicly demonstrate that their products are safe for users and the environment, and introducing rules on investigating and mitigating the climate impacts of AI, backed by strong penalties for non-compliance.