PwC's Responsible AI

AI you can trust

AI is bringing limitless potential to push us forward as a society, but with great potential come great risks.

Responsible AI (RAI) is the only way to mitigate AI risks. Now is the time to evaluate your existing practices, or create new ones, to build technology and use data responsibly and ethically, and to prepare for future regulation. The payoff for early adopters is an edge that competitors may never overcome.

When you use AI to support business-critical decisions based on sensitive data, you need to be sure that you understand what AI is doing, and why. Is it making accurate, bias-aware decisions? Is it violating anyone’s privacy? Can you govern and monitor this powerful technology? Globally, organisations recognise the need for Responsible AI but are at different stages of the journey.

Potential AI Risks

AI risks vary with time, stakeholders, sectors, use cases, and technology. Below are the six major risk categories for the application of AI technology.

Performance

AI algorithms that ingest real-world data and preferences as inputs risk learning and reproducing the biases and prejudices embedded in that data.

Performance risks include:

  • Risk of errors
  • Risk of bias and discrimination
  • Risk of opaqueness and lack of interpretability
  • Risk of performance instability
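To make the bias risk above concrete, the sketch below computes a disparate impact ratio: the ratio of favourable-decision rates between two groups. This is illustrative only; the group data and the 0.8 "four-fifths" threshold are common conventions in fairness testing, not part of PwC's toolkit.

```python
# Illustrative sketch: a minimal disparate-impact check on a binary
# classifier's decisions. Values and threshold are assumptions.

def selection_rate(outcomes):
    """Fraction of positive (favourable) decisions in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group's selection rate to the higher one's.
    Ratios below ~0.8 are a common red flag warranting bias review."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    low, high = min(rate_a, rate_b), max(rate_a, rate_b)
    return low / high if high > 0 else 1.0

# Hypothetical model decisions (1 = approved, 0 = rejected)
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # selection rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 0, 1]  # selection rate 0.375

ratio = disparate_impact_ratio(group_a, group_b)
print(round(ratio, 2))  # 0.5 -> below 0.8, flag for human review
```

A check like this is only a starting point: it detects one kind of disparity and says nothing about its cause, which is why monitoring and human review belong in the governance steps described later.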

Security

For as long as automated systems have existed, humans have tried to circumvent them. This is no different with AI.

Security risks include:

  • Adversarial attacks 
  • Cyber intrusion and privacy risks
  • Open source software risks

Control

Like any other technology, AI should have organisation-wide oversight with clearly identified risks and controls.

Control risks include:

  • Lack of human agency
  • Detecting rogue AI and unintended consequences
  • Lack of clear accountability

Economic

The widespread adoption of automation across all areas of the economy may impact jobs and shift demand to different skills.

Economic risks include:

  • Risk of job displacement
  • Risk of exacerbating inequality
  • Risk of power concentration within one or a few companies

Societal

The widespread adoption of complex and autonomous AI systems could result in “echo chambers” developing between machines, and could have broader impacts on human-to-human interaction.

Societal risks include:

  • Risk of misinformation and manipulation
  • Risk of an intelligence divide
  • Risk of surveillance and warfare

Enterprise

AI solutions are designed with specific objectives in mind, which may compete with the overarching organisational and societal values within which they operate. Societies have long informally agreed on a core set of shared values. There is a movement to codify such values, and the ethics derived from them, to guide AI systems, but disagreement remains about what those ethics mean in practice and how they should be governed. The risk categories above are therefore also inherently ethical risks.

Enterprise risks include:

  • Risk to reputation
  • Risk to financial performance
  • Legal and compliance risks
  • Risk of discrimination
  • Risk of values misalignment

Our recommendations: Overcoming risks with Responsible AI

As organisations start to adopt AI, they need to be aware of certain barriers that may complicate implementation. Keeping abreast of emerging regulation governing the use of AI is only one part of the equation in mitigating risks. Organisations will also need to look inwards, challenge any silos in their approach to AI and data governance, and assess whether their workforce has the skills critical to AI adoption.

Here are three steps organisations can take to build greater trust in AI.

Take a multi-disciplinary approach to governance

To govern the use of AI, ensure that all stakeholders are involved. This means the team tasked with overseeing governance should comprise representatives from various areas of the business, including leadership, procurement, compliance, human resources, technology and data experts, and process owners from different functions.

If there is an existing governance structure in place, you may extend it by adopting a three lines of defence risk management model.

Build up your AI risk confidence

Ensure that you have the right AI policies, standards, controls, tests and monitoring for all risk aspects of AI. 

A common AI playbook can serve as a ‘how-to’ guide for approaching new AI initiatives and building trust in the technology. It can shape how you collaborate and discuss risks based on your goals, while identifying the level of rigour required to address each risk based on its severity.

Act to maintain performance

Keep the momentum going as you familiarise yourself with AI and learn how to manage the risks. Observing good governance and risk management need not slow you down in this regard. The right level of explainability, for example, will depend on each AI model’s risk level and accuracy requirements, allowing quicker progress in some areas than others.

PwC’s Responsible AI Toolkit

Your stakeholders, including board members, customers, and regulators, will have many questions about your organisation's use of AI and data, from how it’s developed to how it’s governed. You not only need to be ready to provide the answers, you must also demonstrate ongoing governance and regulatory compliance.

Our Responsible AI Toolkit is a suite of customisable frameworks, tools and processes designed to help you harness the power of AI in an ethical and responsible manner, from strategy through execution. With the Responsible AI Toolkit, we’ll tailor our solutions to address your organisation’s unique business requirements and AI maturity.

An overview of PwC’s Responsible AI Framework


Contact us

Elaine Ng

Partner, Financial Services and Risk Services Leader, PwC Malaysia

Tel: +60 (12) 334 6243

Marina Che Mokhtar

Deals Partner, Economics and Policy, PwC Malaysia

Tel: +60 (3) 2173 1699

Khai Chiat Ong

Partner, Risk Services, PwC Malaysia

Tel: +60 (3) 2173 0358

Clarence Chan

Partner, Digital Trust and Cybersecurity Leader, PwC Malaysia

Tel: +60 (3) 2173 0344

Nataraj Veeramani

Director, Assurance, PwC Malaysia

Tel: +60 (3) 2173 0897
