PwC's Responsible AI

Artificial intelligence you can trust

AI is here to stay—bringing limitless potential to push us forward as a society. Used wisely, it can create huge benefits for businesses, governments and individuals worldwide.

How big is the opportunity? Our research estimates that AI could contribute $15.7 trillion to the global economy by 2030, as a result of productivity gains and increased consumer demand driven by AI-enhanced products and services. AI solutions are diffusing across industries and impacting everything from customer service and sales to back office automation. AI’s transformative potential continues to be top of mind for business leaders: our CEO survey finds that 85% of CEOs believe that AI will significantly change the way they do business in the next five years.

With great potential comes great risk. Are your algorithms making decisions that align with your values? Do customers trust you with their data? How is your brand affected if you can’t explain how AI systems work? It’s critical to anticipate problems and future-proof your systems so that you can fully realize AI’s potential. It’s a responsibility that falls to all of us—board members, CEOs, business unit heads and AI specialists alike.

Organizations globally are recognizing the need for Responsible AI

  • 64%: Boost AI security with validation, monitoring and verification
  • 61%: Create transparent, explainable and provable AI models
  • 55%: Create systems that are ethical, understandable and legal
  • 52%: Improve governance with AI operating models and processes
  • 47%: Test for bias in data, models and human use of algorithms
  • 3%: Have no plans to address these AI issues

Source: PwC US - 2019 AI Predictions
Base: 1,001
Q: What steps will your organization take in 2019 to develop AI systems that are responsible (that is, trustworthy, fair and stable)?

AI risks

Performance

AI algorithms that ingest real-world data and preferences as inputs risk learning and imitating human biases and prejudices.

Performance risks include:

  • Risk of errors
  • Risk of bias
  • Risk of opaqueness
  • Risk of unstable performance
  • Lack of a feedback process


Security

For as long as automated systems have existed, humans have tried to circumvent them. This is no different with AI.

Security risks include:

  • Cyber intrusion risks
  • Privacy risks
  • Open source software risks
  • Adversarial attacks


Control

Like any other technology, AI should have organization-wide oversight with clearly identified risks and controls.

Control risks include:

  • Risk of AI going “rogue”
  • Inability to control malevolent AI


Economic

The widespread adoption of automation across all areas of the economy may impact jobs and shift demand to different skills.

Economic risks include:

  • Risk of job displacement
  • Risk of concentration of power within one company or within a few companies
  • Liability risk


Societal

The widespread adoption of complex and autonomous AI systems could create "echo chambers" between machines and have broader impacts on human-to-human interaction.

Societal risks include:

  • Risk of autonomous weapons proliferation
  • Risk of an intelligence divide


Ethical

AI solutions are designed with specific objectives in mind, which may compete with the overarching organizational and societal values within which they operate.

Ethical risks include:

  • Values misalignment risk


PwC’s Responsible AI Toolkit

Your stakeholders, including board members, customers and regulators, will have many questions about your organization's use of AI and data, from how it's developed to how it's governed. Not only do you need to be ready to provide the answers, but you must also demonstrate ongoing governance and regulatory compliance.

Our Responsible AI Toolkit is a suite of customizable frameworks, tools and processes designed to help you harness the power of AI in an ethical and responsible manner—from strategy through execution. With the Responsible AI Toolkit, we'll tailor our solutions to address your organization's unique business requirements and AI maturity.

Our Responsible AI Toolkit addresses the five dimensions of responsible AI

Governance

Who is accountable for your AI system?

The foundation for responsible AI is an end-to-end enterprise governance framework. This focuses on the risks and controls along your organization’s AI journey, from top to bottom.

Interpretability and explainability

How was that decision made?

An AI system that human users are unable to understand can lead to a "black box" effect, in which organizations are limited in their ability to explain and defend business-critical decisions. Our Responsible AI approach can help. We provide services to help you explain both overall decision-making and individual choices and predictions, tailored to the perspectives of different stakeholders.
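To make the idea of explaining individual predictions concrete, here is a minimal, illustrative sketch (not PwC's method): a hypothetical toy scoring model, plus a mean-ablation importance estimate that asks how much each input moves the model's outputs when it is replaced by its average value. All names and numbers are invented for illustration.

```python
# Illustrative only: a hypothetical linear credit-scoring model.
def model(income, debt):
    return 0.8 * income - 0.5 * debt

def ablation_importance(model, rows):
    """Replace each feature with its column mean and measure how far
    the model's outputs shift; larger shifts suggest more influential
    features. A simple stand-in for richer explainability techniques."""
    n_features = len(rows[0])
    baseline = [model(*row) for row in rows]
    importances = []
    for i in range(n_features):
        mean_i = sum(row[i] for row in rows) / len(rows)
        ablated = [tuple(mean_i if k == i else row[k] for k in range(n_features))
                   for row in rows]
        shifts = [abs(b - model(*row)) for b, row in zip(baseline, ablated)]
        importances.append(sum(shifts) / len(shifts))
    return importances

rows = [(30, 5), (60, 20), (90, 10), (45, 40)]  # (income, debt) examples
importances = ablation_importance(model, rows)
# Income carries the larger weight here, so its importance comes out higher.
```

In practice, explanations like this would be tailored per stakeholder: a regulator may need the global picture, while a customer needs the drivers of one specific decision.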

Bias and fairness

Is your AI unbiased? Is it fair?

An AI system that is exposed to inherent biases of a particular data source is at risk of making decisions that could lead to unfair outcomes for a particular individual or group. Fairness is a social construct with many different and—at times—conflicting definitions. Responsible AI helps your organization to become more aware of bias, and take corrective action to help systems improve in their decision-making.
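One common (and contested) way to quantify unfair outcomes is the demographic parity gap: the difference in favorable-outcome rates between groups. The sketch below is a minimal illustration with invented data; real fairness assessments weigh several competing metrics and the context in which the system operates.

```python
# Hypothetical loan decisions: (group, approved) pairs. Data is invented.
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rate(decisions, group):
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(decisions, group_a, group_b):
    """Absolute difference in approval rates between two groups.
    One fairness metric among many; a gap near zero is not by itself
    proof of fairness."""
    return abs(approval_rate(decisions, group_a)
               - approval_rate(decisions, group_b))

gap = demographic_parity_gap(decisions, "A", "B")
# Group A is approved 75% of the time, group B only 25%: a 0.5 gap.
```

A metric like this is a starting point for the corrective action described above, not a verdict: which definition of fairness applies depends on the use case and the stakeholders affected.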

Robustness and security

Will your AI behave as intended?

An AI system that does not demonstrate stability, and that cannot consistently meet performance requirements, is at increased risk of producing errors and making the wrong decisions. To help make your systems more robust, Responsible AI includes services to help you identify weaknesses in models, assess system safety and monitor long-term performance.
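Long-term performance monitoring can be as simple as tracking rolling accuracy against a baseline and flagging degradation. The sketch below is a minimal, hypothetical monitor (class and parameter names are invented, not a PwC tool) showing the pattern.

```python
from collections import deque

class PerformanceMonitor:
    """Track the rolling accuracy of a deployed model over a sliding
    window and flag drops below a tolerated band around a baseline."""

    def __init__(self, baseline_accuracy, tolerance=0.10, window=100):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.results = deque(maxlen=window)  # True/False per prediction

    def record(self, prediction, actual):
        self.results.append(prediction == actual)

    def rolling_accuracy(self):
        if not self.results:
            return None
        return sum(self.results) / len(self.results)

    def degraded(self):
        """True when rolling accuracy falls below baseline - tolerance."""
        acc = self.rolling_accuracy()
        return acc is not None and acc < self.baseline - self.tolerance

monitor = PerformanceMonitor(baseline_accuracy=0.90, tolerance=0.10, window=5)
for pred, actual in [(1, 1), (0, 1), (1, 0), (0, 0), (1, 0)]:
    monitor.record(pred, actual)
# 2 of 5 recent predictions correct: rolling accuracy 0.4, well below
# the 0.80 alert threshold, so the monitor flags degradation.
```

A production setup would add alerting, drift detection on the input data itself, and a retraining or rollback path when degradation is confirmed.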

Ethics and regulation

Is your AI legal and ethical?

Our Ethical AI Framework provides guidance and a practical approach to help your organization with the development and governance of AI solutions that are ethical.

As part of this dimension, our framework includes a unique approach to contextualizing ethical considerations for each bespoke AI solution, identifying and addressing ethical risks, and applying ethical principles.

Are you ready for AI?

Find out by taking our free Responsible AI Diagnostic—which asks key questions including:

  • How concerned are you about the possible ethical implications of the use of AI in your organization?
  • What measures do you have in place to make sure that any risks associated with AI are evaluated fully?
  • How confident are you in your organization’s ability to deploy secure and reliable AI at scale?

Innovate responsibly

Whether you're just getting started or are getting ready to scale, Responsible AI can help. Drawing on our proven capability in AI innovation and deep global business expertise, we'll assess your end-to-end needs and design a solution to help you address your unique risks and challenges.

Contact us

Contact us today to learn more about how to become an industry leader in the responsible use of AI.

Anand Rao

Global Leader of Artificial Intelligence, PwC US
