Executives need to give higher priority to the fast-evolving risks of generative AI. They can start with a few key trust-building actions.
When PwC’s 2023 Trust Survey asked 500 executives how they prioritised major risks that could erode trust in their company, the threats associated with AI ranked well below other cyber threats such as data breaches and ransomware attacks. The findings suggest that many business leaders have yet to grasp the urgency of the challenges that generative AI poses. To name just a few: offensive or misleading content; deepfakes intended to spread misinformation or trick stakeholders into sharing sensitive information; authoritatively presented information that is wholly inaccurate; the re-identification of anonymised stakeholders; content reproduced illegally from copyrighted material; inadvertent sharing of intellectual property—the list is formidable and growing.
How can companies harness the revolutionary power of generative AI—which, among other uses, can help automate customer service and high-volume tasks, provide useful summaries of proprietary or public data and research, and even write software code—without imperilling the trust of stakeholders? They can start by making the following moves:
These actions are the foundation of responsible AI, and they should become a fundamental part of your company’s AI playbook.