ESG Reporting, AI, and the Risk of Trusting Automation Too Much
As companies rely more heavily on AI for sustainability reporting, experts warn that unchecked automation can introduce errors, undermine credibility, and increase disclosure risk.
Sustainability Reports, ESG, and the Art of Not Trusting Anyone Completely
As 2026 begins, many companies are entering the most intensive phase of their annual reporting cycle — both for financial results and for integrated ESG disclosures, often referred to as sustainability reports. These processes are closely connected: while financial reporting focuses on numbers, corporate HSE and ESG functions are primarily responsible for the non-financial disclosures that explain how those numbers are achieved.
This article focuses on sustainability reporting and combines two perspectives: hands-on ESG reporting and assurance experience, and practical work with data collection and multi-stage verification processes, including AI-assisted workflows.
Sustainability reports are built on a wide range of non-financial data, defined through materiality assessments. They typically include information on target achievement, action plans, health and safety performance, environmental impacts, workforce metrics, and governance practices. Depending on company size and scope, such reports can easily exceed 200 pages.
Why ESG Reporting Is an Ideal Use Case for AI
There is little flexibility in simplifying ESG disclosures. Even the core set of requirements under the Global Reporting Initiative (GRI) framework is extensive — roughly 90 pages for the basic standards alone. When universal, topic-specific, and sector standards are combined, the total volume of requirements reaches several hundred pages.
No sustainability report of this scale is prepared by a single author. Each applicable standard is usually assigned to a subject-matter expert (SME) who collects data, analyses results, and prepares narrative disclosures within their area of responsibility.
This is where generative AI fits naturally into the process. AI can process large volumes of textual and numerical data, structure disclosures, summarize results, and support drafting activities with significant gains in speed and consistency. With appropriate prompts, curated knowledge bases, and customized models, AI can effectively support ESG reporting workflows.
The Risk of Unquestioned Trust
However, the very efficiency of AI introduces a critical risk: uncritical reliance on automated output. ESG reports combine regulatory references, technical claims, and public commitments. Errors in such documents — whether numerical, factual, or contextual — can undermine stakeholder trust and damage credibility.
A widely discussed case in October 2025 illustrated this risk clearly. A global consulting firm presented a client report containing fabricated sources, fictional legal cases, and invented quotations. The client was not informed that AI had been used in preparing the document, and the issue became apparent only after the errors were identified. While the direct financial consequences were limited, the reputational impact was not.
Still, humanity moves forward.
The roots of generative AI reach back to the 1960s, but only in recent years has its use become mainstream. It is now widely applied both for everyday tasks and in corporate environments.
According to the Wharton Human–AI Research report, “Gen AI is becoming deeply integrated into modern work: 82% of enterprise leaders now use Gen AI weekly, and 89% believe Gen AI augments work.” This trend does not bypass ESG. According to data published by Wavestone on October 28, 2025, “59% of organizations are already leveraging AI for ESG measurement and communication.”
The use of AI in ESG reporting is growing particularly fast. PwC reported in its Global Sustainability Reporting Survey 2025 that “the use of AI for sustainability reporting almost tripled to 28%, from 11% last year.” The most common AI use cases include drafting and summarizing disclosures, identifying risks and opportunities, and collecting, integrating, and validating data from multiple systems.
At the same time, according to InfiFina.com, “only 27% of organizations review all AI-generated content before using it.” In other words, 73% of organizations publish at least some AI-generated content without a full review.
This is where the statistics start to sound slightly dramatic.
Stakeholders tend to trust the information published by companies. If both internal specialists and external consultants stop thoroughly verifying public ESG disclosures, stakeholders may be misled — even without malicious intent.
How ESG Assurance Has Changed
Companies often engage consultants to provide assurance of ESG reports. In 2018, assurance work focused mainly on numerical data — calculations, tables, and statistics. By 2024, the scope had expanded to include narrative content, sometimes down to individual sentences, making the process significantly more labor-intensive.
In practice, SMEs were often required to explain highly specialized topics, such as biodiversity protection, simply to enable assurance, while responsibility for any errors still remained with them. This situation is common when consulting firms assign junior staff or primarily financial auditors to complex ESG reports. In such cases, a well-prepared in-house AI can generate technical substantiation and references, resolving a large share of routine auditor inquiries where deep domain expertise is limited.
From this perspective, it is encouraging that modern AI systems can now assess logic, meaning, and plausibility in specialized texts and flag questionable statements — even when the original draft was AI-generated. Properly trained AI systems can also be reused beyond a single report, supporting analyses, presentations, working documents, and long-term ESG planning.
A Practical Verification Approach
That said, AI-generated results also require verification — ideally in several stages. In our view, the strongest approach is a combined one: AI and humans working together, in multiple review cycles.
A Simple Review Algorithm for ESG Reports (AI + Humans)
1. AI checks itself.
2. Data is verified against source documents.
3. References and claims are manually verified.
4. A full human read-through is performed.
Let’s go through each step in order.
1. AI checks itself. The process starts with self-checking: the AI is asked to review its own output, sometimes more than once. This works best when the text is checked in smaller sections, as AI may otherwise unexpectedly modify the original.
AI can identify missing fields, incorrect numbering, broken references to tables and figures, logical contradictions, mismatches between tables and narrative text, and obvious errors in units or totals. The result is usually a list of issues and clarification questions.
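Because self-review is more reliable on smaller sections, a useful first step is to split the draft mechanically before sending each piece for review. The sketch below is a minimal illustration: it assumes a markdown-style draft with `##` headings, and the actual call to the review model (here only a comment) would depend on whatever in-house system a company uses.

```python
import re

def split_into_sections(report_text: str) -> list[tuple[str, str]]:
    """Split a markdown-style draft into (heading, body) pairs so each
    section can be reviewed separately, reducing the chance that the
    model silently rewrites unrelated parts of the text."""
    parts = re.split(r"^(## .+)$", report_text, flags=re.MULTILINE)
    sections = []
    # parts[0] is any preamble before the first heading; skipped here
    for i in range(1, len(parts), 2):
        heading = parts[i].lstrip("# ").strip()
        body = parts[i + 1].strip()
        sections.append((heading, body))
    return sections

draft = """## Emissions
Scope 1 emissions fell 12% year over year.

## Safety
LTIFR was 0.8, see Table 3.
"""

for heading, body in split_into_sections(draft):
    # A hypothetical review_section(heading, body) call would go here,
    # sending one section at a time to the in-house model.
    print(heading, len(body))
```

The point of the split is purely defensive: a model asked to review a 200-page document in one pass is far more likely to paraphrase or drop content than one asked to review a single section.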
2. Data is verified against source documents. During report preparation, companies typically compile a comprehensive audit trail containing invoices, energy monitoring logs, HR and HSE data systems, and results of internal assessments. Each data source has a designated owner responsible for its accuracy and substantiation. AI can compare figures in the ESG report with the source documents and highlight discrepancies or missing references. Humans then confirm accuracy and make corrections where needed.
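Part of this comparison can itself be automated before humans step in. The sketch below assumes reported and source figures have already been extracted into simple metric-to-value mappings (the metric names and tolerance are illustrative); it flags mismatches and missing source records for a human to resolve.

```python
def find_discrepancies(reported: dict[str, float],
                       source: dict[str, float],
                       tolerance: float = 0.005) -> list[str]:
    """Compare figures quoted in the report with values from the
    audit-trail systems and return human-readable findings.
    `tolerance` allows for rounding in published numbers."""
    findings = []
    for metric, value in reported.items():
        if metric not in source:
            findings.append(f"{metric}: no source record found")
            continue
        expected = source[metric]
        if expected == 0:
            if value != 0:
                findings.append(f"{metric}: reported {value}, source shows 0")
        elif abs(value - expected) / abs(expected) > tolerance:
            findings.append(
                f"{metric}: reported {value}, source shows {expected}")
    return findings

# Illustrative data only: metric names and values are invented.
reported = {"scope1_tco2e": 1520.0, "ltifr": 0.8, "water_m3": 40000.0}
source = {"scope1_tco2e": 1518.4, "ltifr": 0.91}

for finding in find_discrepancies(reported, source):
    print(finding)
```

Note that the tolerance matters: published figures are usually rounded, so an exact-match check would drown reviewers in false positives, while too loose a tolerance would hide real errors.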
3. References and claims are manually verified. AI is known to invent scientific articles, authors, titles, and quotes. Even when links are provided, the real source may differ in title, author, year, or content. Humans must check that links work, that sources exist, and that cited statements are accurate and current. Legal and regulatory references also require manual confirmation.
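The manual work can be organized with a small amount of tooling: extracting every link from the draft into a checklist guarantees that none is silently skipped. The sketch below does only that — it pre-screens URL syntax and leaves existence and content checks explicitly to a human (the draft text is an invented example).

```python
from urllib.parse import urlparse
import re

URL_RE = re.compile(r"https?://\S+")

def build_reference_checklist(text: str) -> list[dict]:
    """Extract every URL from a draft and pre-screen it for a human
    reviewer: malformed links are flagged by `well_formed`, while even
    well-formed ones still require manual confirmation that the source
    exists and actually says what the report claims."""
    checklist = []
    for raw in URL_RE.findall(text):
        url = raw.rstrip(".,;)")  # strip trailing punctuation
        parsed = urlparse(url)
        checklist.append({
            "url": url,
            "well_formed": bool(parsed.scheme and parsed.netloc),
            "status": "verify manually",
        })
    return checklist

draft = ("Methodology follows the GHG Protocol, "
         "https://ghgprotocol.org/corporate-standard. "
         "Disclosures align with https://www.globalreporting.org/standards/.")

for item in build_reference_checklist(draft):
    print(item)
```

Deliberately, nothing here marks a reference as “verified”: the script produces a to-do list, and only a human closes each item after confirming the source.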
Additional attention should be paid to strong statements and generalizations such as “fully compliant”, “significantly reduced”, or “best in class”. AI logic may fail here, and such claims require careful professional judgement.
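Flagging such statements for review is easy to automate, even if judging them is not. The sketch below scans a draft for a handful of absolutist phrases; the phrase list is illustrative and would need to be extended and tuned for a real reporting team.

```python
import re

# Phrases that tend to overstate; this list is illustrative, not exhaustive.
STRONG_CLAIMS = [
    r"fully compliant",
    r"significantly (?:reduced|improved)",
    r"best in class",
    r"zero (?:harm|incidents)",
]
PATTERN = re.compile("|".join(STRONG_CLAIMS), re.IGNORECASE)

def flag_strong_claims(text: str) -> list[tuple[int, str]]:
    """Return (line_number, matched phrase) pairs for statements that
    need supporting evidence and professional judgement before they
    appear in a public disclosure."""
    flags = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for match in PATTERN.finditer(line):
            flags.append((lineno, match.group(0)))
    return flags

draft = ("Our operations are fully compliant with local regulations.\n"
         "Emissions fell by 4% against the 2024 baseline.\n"
         "The new fleet is best in class for fuel efficiency.")

for lineno, phrase in flag_strong_claims(draft):
    print(f"line {lineno}: '{phrase}' -- needs supporting evidence")
```

The tool only raises the flag; deciding whether “fully compliant” is defensible remains a matter of professional judgement, exactly as the step above describes.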
4. A full human read-through is performed. The report should read logically as a whole. No new contradictions should appear after edits. Executive summaries should align with tables, charts, and figures.
It is also important to inform stakeholders about the use of AI in the preparation of ESG disclosures. This is a matter of transparency and openness, not an admission of lower quality — especially when responsible verification processes are in place. Human-prepared reports may also contain errors.
Responsibility Remains Human
Beyond data accuracy, ESG reports may include issues related to images, maps, charts, and photographs. These materials must be checked for copyright compliance, usage rights, and contextual accuracy. Photos of non-public individuals may require consent. Images of nature and wildlife should correspond to real habitats and geographic regions. AI often makes mistakes in species recognition and geographic context. Architectural images also require checks for attribution and usage restrictions.
One real example illustrates this clearly. In a sustainability report of a company operating in the Arctic, one of the authors once noticed images of Antarctic animals. Cute ones, but from the opposite end of the planet. Not all stakeholders will notice such a mistake, but those who do may lose trust in the company’s public statements and in its ability to identify and manage risks, regardless of whether the report was prepared by humans or with the help of AI.
AI processes text and numbers faster than humans. But it still makes mistakes. It will likely make fewer mistakes over time, but no one can say how soon or under what conditions. We know that people make mistakes. Now we also know that AI does too.
Without solid subject-matter expertise and careful human review, ESG reports cannot be prepared or validated reliably — whether written by humans or assisted by AI. Multiple layers of review exist for one reason: to reduce the risk of inaccurate and misleading disclosures.
AI will continue to improve, but responsibility for its use remains human. ESG reports are produced for real stakeholders who expect accountability in public statements, not just in operations.
A slightly ironic closing note. While researching material for this article, one of the authors learned that an AI system once hired a human to solve a CAPTCHA for it.
Apparently, even AI knows when it needs human help.
About the Authors
Olga Bodiagina is an expert in Environmental, Health and Safety (EHS) management systems implementation and support with more than 18 years of experience. She is a highly proficient EHS trainer, including in behavior-based safety programs such as CARE and SMAT. For the past 10 years, she has worked for international branches of US companies: first KBR, then Otis, then KBR again. She can be reached at [email protected].
Irina Fitzgerald is an environmental and sustainability professional with 10+ years of experience in full-cycle extractive and processing companies within the oil & gas and metals & mining sectors. She has represented business interests at UN Climate and Biodiversity conferences (COP16, COP27, COP28). Irina specializes in implementing international standards (IFC, ISO, IRMA, GRI, GHG Protocol), ESG reporting, and stakeholder engagement. Her expertise also includes developing biodiversity conservation and climate resilience strategies, specifically for Arctic regions. Over the past decade, she has worked for global leaders such as TotalEnergies, NOVATEK, and Nornickel, managing collaborations with the UN Global Compact, Protected Area administrations, and scientific institutions. She can be reached at [email protected].

