AI and transparency: A new age of corporate responsibility

  • Insight
  • 8 minute read
  • May 29, 2025

By Nadja Picard and Jennifer Kosar

Welcome to an age in which artificial intelligence, economic forces and sustainability drivers converge to reshape how businesses operate and build trust, and in which AI, including GenAI, is emerging as a catalyst for innovation and business performance transformation. Imagine AI-driven systems that not only boost your operations but also provide real-time business insights that help you make smarter, more responsible decisions. This is not just a vision; it is the new reality.

We see investors paying close attention to companies' AI strategies, and to the associated costs, anticipated benefits and governance mechanisms these entail. PwC's Global Investor Survey 2024 reveals that 73% of respondents think the companies they invest in or cover should do more to deploy AI solutions at scale. They expect this to result in significant economic gains in productivity, revenue growth and profitability.

In this evolving landscape, where expectations are as high as the stakes, companies must navigate the dual reality of AI. The power of AI to enhance business performance is undeniable, but it comes with risks to be managed. Herein lies the critical role of transparency. As corporate leaders embed AI into their operations, they must simultaneously lead with accountability and articulate their strategies with clarity.

This article explores how companies can approach transparency over their use of AI to tell their own story.

The power of transparency

Despite high expectations, a confidence gap among CEOs is evident. While 49% of CEOs anticipate profitability gains from generative AI, only about one-third express full confidence in its deployment. At the same time, when making strategic decisions, 76% of CEOs cite ‘making decision criteria transparent’ as the best practice—underscoring the importance of proactively communicating an organisation’s AI strategy. 

Transparency over AI use must reflect how leadership makes decisions, especially as AI reshapes, rather than replaces, human judgment. If leaders are not already in the habit of articulating how decisions are made, adopting AI could deepen opacity, making it harder for stakeholders to assess accountability or understand unexpected outcomes.

PwC’s Global Investor Survey 2024 shows that investors are closely watching how AI investments deliver tangible value, such as productivity, profitability and cost savings, while remaining alert to risks, including workforce impacts, regulatory compliance and environmental effects. Moreover, investors are increasingly scrutinising the quality and transparency of a company’s governance practices, viewing them as critical indicators of long-term sustainable value.

“Beyond financial performance, investors place high value on detailed disclosures related to corporate governance—including oversight, risk management, controls, and ethics.”

PwC’s Global Investor Survey 2024

In today’s corporate landscape, transparency isn’t just a buzzword; it is a mandate. Investors are not satisfied with vague references; they demand clear, comprehensive disclosures.

This demand for transparency over business use of AI is coming not just from investors but also from policymakers. Regulators worldwide are stepping in. The EU AI Act, for instance, requires businesses to adopt a risk-based approach, categorising AI systems into distinct risk levels. In the US, states such as Colorado and California are leading the way with laws requiring internal governance and disclosures around consequential AI uses. In Asia, South Korea recently passed a wide-ranging national AI framework, while other countries such as China and Taiwan are developing targeted rules for generative and algorithmic technologies.

Companies have a clear imperative to get ahead and tell their own story. Annual reports and standalone governance disclosures are avenues for sharing this information, and sustainability reports in particular offer a forward-looking, governance-focused platform.

When companies consider AI initiatives alongside their sustainability goals, they can demonstrate how they are driving business model adjustments for long-term value creation. The strategic integration of AI innovation and sustainability considerations can position them as leaders in their sector, setting the pace for market and industry reconfiguration.

Sustainability reporting frameworks such as the European Sustainability Reporting Standards and the IFRS Sustainability Disclosure Standards offer a structured avenue for disclosing AI strategies, governance, risks and opportunities. For example, the effects of AI on workforce dynamics, environmental impacts or broader communities can be reported on using the sustainability materiality lens. This will enable companies to demonstrate responsible oversight—reinforcing credibility and stakeholder trust.

For businesses developing or deploying AI at scale, pre-emptively integrating AI disclosures into sustainability reporting is not just a prudent move; it places them ahead of the curve. It displays leadership in Responsible AI deployment, aligning with evolving expectations and underscoring a commitment to responsible innovation.

“Embedding transparency into AI governance is not merely a compliance issue—it is a leadership discipline.”

We’re already seeing companies recognise this. PwC’s early analysis of 250 CSRD reports shows that reporters are voluntarily disclosing impacts, risks and opportunities (IROs) from AI use, cybersecurity and data governance in their sustainability reports through entity-specific content.

A notable example of this shift comes from SAP, which identified Responsible AI as the most financially material topic in its 2024 SAP Integrated Report—reflecting its strategic role across products and services, and the associated risks around misinformation, human rights and regulatory exposure. The company then discloses dedicated governance structures, oversight mechanisms and AI-specific risk processes—demonstrating how Responsible AI can be addressed within a sustainability reporting framework.

Additionally, over 60% of investors say that clear and consistent disclosures aligned with reporting standards enhance their confidence in a company’s sustainability performance. This reinforces the role of sustainability reporting as a trusted channel for communicating how AI is governed and integrated across the business.

AI as a performance enabler 

According to PwC's 2025 AI Business Predictions, artificial intelligence is expected to expedite the energy transition and generate substantial cost savings, thereby enabling organisations to become leaders in sustainable innovation. 

While the opportunities AI presents could fill an entire article, let’s explore a few notable examples: 

Organisations are increasingly turning to AI-driven solutions to optimise energy use. AI is being used to forecast demand and build resilience against disruptions such as grid outages and supply dependencies. By analysing complex consumption patterns in real time, AI systems can also dynamically adjust operations, minimising energy waste. This dual benefit not only slashes operational costs but also contributes to a reduced carbon footprint, a crucial factor in today’s low-carbon economy.

GenAI can create efficiencies by freeing up employees to focus on high-value, strategic work. This shift drives productivity across the board and builds capacity for broader transformation. As AI becomes embedded in operations, targeted upskilling strengthens workforce adaptability. This will enhance organisational resilience and position firms to navigate both technological and environmental challenges.

Enhanced data analytics powered by AI enables companies to monitor and optimise their supply chains like never before. Technologies such as real-time tracking, predictive analytics and digital twins (real-time interactive virtual replicas of the supply chain) support better decision-making and can help reduce waste, improve coordination and enable responsible sourcing. This not only strengthens supply chain resilience but also bolsters a company’s sustainability credentials, which can be a competitive differentiator.

For further potential AI use cases, see Generative AI unleashed for sustainability.

AI as a challenger 

AI holds tremendous potential, but it also introduces new layers of complexity and risk to manage, including design vulnerabilities, data security issues, and potential societal and environmental impacts.

As with any transformative technology, organisations need to define their risk appetite and proactively plan for mitigation in pursuit of innovation, efficiency or scale. Effective governance demands a structured understanding of AI-related risks, clear accountability across functions and alignment with strategic priorities, including sustainability objectives. 

Below, we explore examples of risks and how companies can assess these within their sustainability materiality assessments—considering impacts on both people and the environment, as well as risks or opportunities that could affect enterprise value. 

The inherent complexity of modern AI systems creates significant gaps in understanding how these technologies work, often referred to as the ‘black box’ problem. This lack of visibility becomes particularly concerning when companies rely on third-party foundation models, inheriting design choices and training data (which may carry embedded stereotypes or biased assumptions) without the ability to fully interrogate or understand them. As a result, organisations may unknowingly expose themselves to risks such as biased decision-making, discrimination, misinformation or unintended social harm. This can affect both stakeholder trust and regulatory exposure.

Legal ambiguity also arises when AI systems are developed and adapted by multiple parties, creating unclear accountability across the value chain. This heightens the risk of non-compliance, liability, or reputational harm, particularly in sensitive areas such as hiring, lending or public services.

Additionally, processing large volumes of sensitive data increases exposure to cybersecurity threats. Breaches or adversarial attacks—such as data poisoning—can compromise the integrity of AI outputs. This not only raises compliance risks under regulations like the EU AI Act or General Data Protection Regulation (GDPR) but also risks misinforming high-stakes decisions and eroding stakeholder trust.

The infrastructure required to develop, train and host AI models brings operational risks—such as energy availability, regional climate vulnerabilities and supply chain disruption—alongside environmental impacts. 

High-performance computing demand drives energy and water consumption. One estimate suggests that in 2027, AI-related infrastructure could consume six times the annual water use of Denmark. With half the global population living in water-scarce areas, there is a growing expectation that organisations will begin to include these resource impacts in their sustainability materiality assessments, particularly where data centres and supply chains intersect with vulnerable ecosystems.

AI’s hardware production also drives demand for critical minerals, which can contribute to unsustainable mining practices and resource depletion. At the same time, frequent infrastructure upgrades generate large volumes of e-waste, increasing the risk of environmental contamination and disruption to natural habitats. 

The deployment of AI across products, services and internal operations introduces a range of business risks. Poor oversight, untested models or misaligned use cases can lead to flawed decision-making, compliance breaches or reputational harm.

Insufficient workforce reskilling initiatives can amplify risk exposure—from ineffective use to misalignment between AI capabilities and organisational needs. This may lead to job displacement, skill mismatches and growing pressure on vulnerable worker segments. 

By adopting a comprehensive framework for Responsible AI, businesses can effectively manage the dual potential of AI’s opportunities and risks within their corporate strategies. Companies that proactively navigate these challenges will stand out as pioneers, confidently harnessing AI's transformative power while upholding integrity and trust across their operations. 

Business actions: Transparency in AI for sustainable growth

It’s clear AI presents companies with a transformative opportunity. Communicating how the associated dependencies/impacts, risks and opportunities are being managed enables companies to deepen stakeholder trust.

Consider these no-regret steps now to build stakeholder trust, as you pursue your AI adoption journey:

1. Leverage your work categorising AI inventories for regulatory purposes

Use the insights from your AI risk categorisation work to enhance your sustainability reporting materiality assessments. Take this as a starting point to capture a detailed view of dependencies/impacts, risks and opportunities to be managed and embedded in core strategy, systems and processes.  

2. Strengthen Responsible AI practices aligned with strategic goals

With a clearer understanding of your AI risks, strengthen your approach to Responsible AI. Develop and implement a robust framework for Responsible AI that encompasses oversight, transparent decision-making, proactive risk management, and targeted upskilling for workforce readiness. Embed these practices within your broader corporate strategy to mitigate risks, respond to emerging regulatory trends and drive value creation, thereby enhancing stakeholder trust. Explore further in AI rewrites the playbook: Is your business strategy keeping pace?

3. Enhance your corporate reporting to tell your own story 

Tie factual data to engaging narratives. Utilise both quantitative metrics (such as energy savings and emissions reduction) and qualitative insights. By considering disclosures across your corporate reporting, be it your annual report, sustainability report or integrated report, you can paint a more rounded picture of your organisation’s performance. This will reinforce stakeholder trust and leadership in your sector.


As AI continues to reshape how businesses operate, transparency becomes essential, not just to meet external expectations but to guide internal decision-making. Sustainability reporting provides a credible, structured avenue to disclose both how AI is governed and how it is integrated into the business to support long-term value creation.

The authors wish to thank Superna Khosla, Monika Jonce, Caitlin McDonald, Ilana Golbin Blumenfeld, Monika Januszkiewicz, Brigham McNaughton, Kazi Islam and Joe Atkinson for thoughtful input and guidance throughout.

About the authors

Nadja Picard

Global Reporting Leader, PwC Germany

Jennifer Kosar

AI Assurance Leader, PwC United States
