Article
Strategy

A risk-based framework for AI use by sustainability teams: govern AI by use

Corporate sustainability connects directly to compliance, disclosure, and claims. Artificial intelligence (AI) is an enabling tool in these areas, but without effective risk management and governance, AI can create business and balance-sheet risks. Sustainability teams need a practical way to capture AI’s benefits without compromising disclosure integrity or exposing the business to new liabilities.

The stakes have changed

Sustainability is now covered by a growing body of mandatory regulations, including California’s Climate Corporate Data Accountability Act (Senate Bill 253) and Climate-Related Financial Risk Act (Senate Bill 261), the EU’s Corporate Sustainability Reporting Directive (CSRD), and packaging Extended Producer Responsibility (EPR) laws across several U.S. states. These regulations mandate sustainability data that used to be voluntary: disclosure of corporate greenhouse gas (GHG) emissions and other sustainability metrics is now required, often subject to limited-assurance audits.

At the same time, AI adoption is accelerating. AI can speed up calculations, data ingestion, narrative drafting, and horizon scanning. Used well, these tools can cut project time from weeks to days. Used poorly, the same tools can generate fabricated emission factors, phantom citations, and unsubstantiated claims that lead to non-compliance, failed assurance, or greenwashing exposure.

As sustainability shifts from a voluntary commitment to a legal obligation and AI use accelerates in parallel, generic enterprise AI policies are no longer sufficient. Sustainability functions need governance tailored to the specific risks of regulated, assured, and publicly disclosed information.

Four AI risks in corporate sustainability

While AI use in corporate sustainability work carries a range of potential risks, four stand out because they connect directly to legal exposure.

Data quality. Models can produce confident, plausible numbers that do not exist in authoritative databases. Issues range from fabricated emission factors to incorrect activity-data estimates. A GHG emissions inventory built on such numbers cannot be assured.

Research accuracy. AI can invent, misstate, misquote, or misinterpret statutes and regulations. The result is a missed requirement or a misreport with direct non-compliance exposure.

Greenwashing exposure. AI-drafted marketing copy and disclosure narratives can overstate progress, conflate ambition with achievement, and obscure baselines—creating both reputational damage and legal exposure under FTC Green Guides, EU directives on environmental claims, and similar regimes.

Audit trail gaps. Assured disclosure requires traceability and documentation of quantification methods, data sources, and assumptions, with expectations still expanding. For AI-assisted work, this could include new information such as the model used, the sources it drew on, and the human review performed.

AI can produce reliable, useful output. But, because the same workflow can also produce any of the risks above, a fit-for-purpose risk management and governance approach is essential.

AI governance for sustainability

AI risk management and governance in sustainability should integrate into existing business processes and be tailored to specific AI uses. Key elements include inventorying the uses of AI and rating their risk, developing controls proportional to that risk, and establishing roles for oversight, review, and approval.

Risk Rating. Sustainability managers should inventory each AI use case in their function and assign it a risk level (e.g., high, medium, low) based on its impact on disclosure, assurance, and public claims. A high-risk use, such as scope 1, 2, and 3 calculations for SB 253 reporting, would require audit-grade controls, while a lower-risk use, such as horizon scanning, internal training, and document search run on sanctioned tools, would have lower-level controls.

Controls. Managers should match controls to each risk level. Common controls include approved tools (enterprise vs. public models), primary-source grounding, data-confidentiality requirements, mandatory human review, and standardized documentation of inputs and outputs.

Roles. Define accountability across the team: who oversees adherence to the policy, who performs review and internal audit, and who holds final sign-off authority for assured disclosures. Pair role definition with training so that everyone using AI in the function understands the controls that apply to their work.

Resources to help guide governance development and execution include the National Institute of Standards and Technology AI Risk Management Framework (NIST AI RMF 1.0), ISO/IEC 42001:2023, OECD AI Principles, the EU AI Act, emerging state laws, and disclosure regimes.

Because AI itself carries sustainability impacts, notably energy and water use in training and inference, sustainability teams should also help shape a responsible-AI-use policy that addresses those footprints alongside the integrity controls above. 

The bottom line

AI is already being used by sustainability teams, and that use will continue to grow. Acting early to put governance in place turns AI from a risk multiplier into a productivity multiplier, preserving the integrity that mandatory disclosure and assurance now demand.

Sustainability leaders who rate risk levels by use, match controls to that risk, follow the necessary audit trail, and establish roles for oversight, review, and approval will reduce their risk exposure and help set the standard for the field.

Pure Strategies has built and implemented sustainability programs since 1998. To scope an AI governance sprint, contact us at purestrategies.com.