Survey
Global Investor Survey 2024
Discover key insights from PwC’s Global Investor Survey 2024. Explore trends, challenges and opportunities shaping investments worldwide.

Welcome to the age where artificial intelligence (AI), economic and sustainability drivers converge to reshape how businesses operate and build trust, and where AI, including GenAI, is emerging as a catalyst for innovation and business performance transformation. Imagine AI-driven systems that not only boost your operations but also provide real-time business insights that help you make smarter, more responsible decisions. This is not just a vision—it is a new reality.
We see investors paying close attention to companies’ AI strategies—and to the associated costs, the anticipated benefits and the governance mechanisms these entail. PwC’s Global Investor Survey 2024 reveals that 73% of respondents think the companies they invest in or cover should step up their efforts to deploy AI solutions at scale. They expect this will result in significant economic gains in productivity, revenue growth and profitability.
In this evolving landscape, where expectations are as high as the stakes, companies must navigate the dual reality of AI. The power of AI to enhance business performance is undeniable, but it comes with risks to be managed. Herein lies the critical role of transparency. As corporate leaders embed AI into their operations, they must simultaneously lead with accountability and articulate their strategies with clarity.
This article explores how companies can approach transparency over their use of AI to tell their own story.
Despite high expectations, a confidence gap among CEOs is evident. While 49% of CEOs anticipate profitability gains from generative AI, only about one-third express full confidence in its deployment. At the same time, when making strategic decisions, 76% of CEOs cite ‘making decision criteria transparent’ as the best practice—underscoring the importance of proactively communicating an organisation’s AI strategy.
Transparency over AI use must reflect how leadership makes decisions—especially as AI reshapes, rather than replaces, human judgment. If leaders are not already in the habit of articulating how decisions are made, adopting AI could lead to a lack of transparency, making it harder for stakeholders to assess accountability or to understand unexpected outcomes.
PwC’s Global Investor Survey 2024 shows that investors are closely watching how AI investments deliver tangible value—like productivity, profitability and cost savings—while remaining alert to risks, including workforce impacts, regulatory compliance and environmental effects. Moreover, investors are increasingly scrutinising the quality and transparency of a company’s governance practices, viewing them as critical indicators of long-term sustainable value.
“Beyond financial performance, investors place high value on detailed disclosures related to corporate governance—including oversight, risk management, controls, and ethics.”
In today’s corporate landscape, transparency isn’t just a buzzword—it is a mandate. Investors are not satisfied with vague references; they demand clear, comprehensive disclosures.
This demand for transparency over business use of AI is coming not just from investors but also from policymakers. Regulators worldwide are stepping in. The EU AI Act, for instance, requires businesses to adopt a risk-based approach, categorising AI systems into distinct risk levels. In the US, states like Colorado and California are leading the way with laws requiring internal governance and disclosures around consequential AI uses. In Asia, South Korea recently passed a wide-ranging national AI framework, while other countries such as China and Taiwan are developing targeted rules for generative and algorithmic technologies.
Companies have a clear imperative to get ahead and tell their own story. Annual reports and standalone governance disclosures are avenues for disclosing this information, but sustainability reports offer a forward-looking, governance-focused platform.
When companies consider AI initiatives alongside their sustainability goals, they can demonstrate how they are driving business model adjustments for long-term value creation. The strategic integration of AI innovation and sustainability considerations can position them as leaders in their sector, setting the pace for the market and industry reconfiguration.
Sustainability reporting frameworks such as the European Sustainability Reporting Standards and the IFRS Sustainability Disclosure Standards offer a structured avenue for disclosing AI strategies, governance, risks and opportunities. For example, the effects of AI on workforce dynamics, environmental impacts or broader communities can be reported on using the sustainability materiality lens. This will enable companies to demonstrate responsible oversight—reinforcing credibility and stakeholder trust.
For businesses developing or deploying AI at scale, pre-emptively integrating AI disclosures into sustainability reporting is not just a prudent move; it places them ahead of the curve. This displays leadership in Responsible AI deployment, aligning with evolving expectations and underscoring a commitment to responsible innovation.
“Embedding transparency into AI governance is not merely a compliance issue—it is a leadership discipline.”
We’re already seeing companies recognise this. PwC’s early analysis of 250 CSRD reports shows that reporters are already voluntarily disclosing impacts, risks and opportunities (IROs) from AI use, cybersecurity, and data governance in their sustainability reports through entity-specific content.
A notable example of this shift comes from SAP, which identified Responsible AI as the most financially material topic in its 2024 SAP Integrated Report—reflecting its strategic role across products and services, and the associated risks around misinformation, human rights and regulatory exposure. The company then discloses dedicated governance structures, oversight mechanisms and AI-specific risk processes—demonstrating how Responsible AI can be addressed within a sustainability reporting framework.
Additionally, over 60% of investors say that clear and consistent disclosures aligned with reporting standards enhance their confidence in a company’s sustainability performance. This reinforces the role of sustainability reporting as a trusted channel for communicating how AI is governed and integrated across the business.
According to PwC's 2025 AI Business Predictions, artificial intelligence is expected to expedite the energy transition and generate substantial cost savings, thereby enabling organisations to become leaders in sustainable innovation.
While the opportunities AI presents could fill an entire article, let’s explore a few notable examples:
For further potential AI use cases, see Generative AI unleashed for sustainability.
AI holds tremendous potential, but it also introduces new layers of complexity and risk to manage, such as design vulnerabilities, data security issues, and potential societal and environmental impacts.
As with any transformative technology, organisations need to define their risk appetite and proactively plan for mitigation in pursuit of innovation, efficiency or scale. Effective governance demands a structured understanding of AI-related risks, clear accountability across functions and alignment with strategic priorities, including sustainability objectives.
Below, we explore examples of risks and how companies can assess these within their sustainability materiality assessments—considering impacts on both people and the environment, as well as risks or opportunities that could affect enterprise value.
By adopting a comprehensive framework for Responsible AI, businesses can effectively manage the dual potential of AI’s opportunities and risks within their corporate strategies. Companies that proactively navigate these challenges will stand out as pioneers, confidently harnessing AI's transformative power while upholding integrity and trust across their operations.
It’s clear AI presents companies with a transformative opportunity. Communicating how the associated dependencies/impacts, risks and opportunities are being managed enables companies to deepen stakeholder trust.
Consider these no-regret steps now to build stakeholder trust as you pursue your AI adoption journey:
1. Leverage your work categorising AI inventories for regulatory purposes
Use the insights from your AI risk categorisation work to enhance your sustainability reporting materiality assessments. Take this as a starting point to capture a detailed view of the dependencies/impacts, risks and opportunities to be managed and embedded in core strategy, systems and processes (see the illustrative sketch after step 3 below).
2. Strengthen Responsible AI practices aligned with strategic goals
With a clearer understanding of your AI risks, strengthen your approach to Responsible AI. Develop and implement a robust framework for Responsible AI that encompasses oversight, transparent decision-making, proactive risk management, and targeted upskilling for workforce readiness. Embed these practices within your broader corporate strategy to mitigate risks, respond to emerging regulatory trends and drive value creation, thereby enhancing stakeholder trust. Explore further in AI rewrites the playbook: Is your business strategy keeping pace?
3. Enhance your corporate reporting to tell your own story
Tie factual data to engaging narratives. Utilise both quantitative metrics (such as energy savings and emissions reduction) and qualitative insights. By considering disclosures in your corporate reporting—be it your annual report, sustainability report or integrated report—you can paint a more rounded picture of your organisation’s performance. This will reinforce stakeholder trust and leadership in your sector.
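To make step 1 more concrete, the sketch below shows one hypothetical way a structured AI inventory, already categorised by regulatory risk tier (for example, along the lines of the EU AI Act), could feed a sustainability materiality assessment. It is a minimal illustration in Python; the system names, risk tiers and topic mappings are assumptions for illustration, not a prescribed schema.

# Illustrative sketch only: reuse an AI inventory categorised by regulatory
# risk tier as an input to a sustainability materiality assessment.
# All system names, tiers and topic mappings below are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AISystem:
    name: str
    risk_tier: str  # e.g. "unacceptable", "high", "limited", "minimal"
    sustainability_topics: list = field(default_factory=list)  # related impacts, risks, opportunities

inventory = [
    AISystem("recruitment screening model", "high", ["workforce", "human rights"]),
    AISystem("customer service chatbot", "limited", ["data privacy"]),
    AISystem("data-centre cooling optimiser", "minimal", ["energy use", "emissions"]),
]

# Flag systems in higher regulatory risk tiers as candidates for the
# materiality assessment, together with the topics they touch.
for system in inventory:
    if system.risk_tier in {"unacceptable", "high"}:
        print(f"Assess for materiality: {system.name} -> {', '.join(system.sustainability_topics)}")

In practice, the categorisation would come from your own AI governance tooling and the topics from your chosen reporting framework; the point is simply that one structured view of the AI estate can inform both regulatory compliance and sustainability reporting.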
As AI continues to reshape how businesses operate, transparency becomes essential—not just to meet external expectations but to guide internal decision-making. Sustainability reporting provides a credible, structured avenue to disclose both how AI use in your business is governed and how it’s integrated into the business to support long-term value creation.
The authors wish to thank Superna Khosla, Monika Jonce, Caitlin McDonald, Ilana Golbin Blumenfeld, Monika Januszkiewicz, Brigham McNaughton, Kazi Islam and Joe Atkinson for thoughtful input and guidance throughout.