Pressure for responsible AI won’t be on tech companies alone

Invasion of privacy, algorithmic bias, environmental damage, threats to brands and the bottom line: the fears around AI are numerous. How can organizations innovate responsibly? That question is at the heart of one of PwC's eight predictions for AI in 2018.

New technologies often bring new fears, justified or not, and not just among conspiracy theorists. Seventy-seven percent of CEOs in a 2017 PwC survey said AI and automation will bring greater vulnerability and disruption to the way they do business. Odds are good that if we asked government officials, the response would be similar.

Leaders will soon have to answer tough questions about AI. It may be community groups and voters worried about bias. It may be clients fearful about reliability. Or it may be boards of directors concerned about risk management, ROI, and the brand.

In all cases, stakeholders will want to know that organizations are using AI responsibly, so that it strengthens the business and society as a whole.

The result, we believe, will be pressure to adopt principles for responsible AI.

A global movement begins

We’re not alone in this belief. The World Economic Forum’s Center for the Fourth Industrial Revolution, the IEEE, AI Now, the Partnership on AI, the Future of Life Institute, AI for Good, and DeepMind, among other groups, have all released sets of principles that look at the big picture: how to maximize AI’s benefits for humanity and limit its risks.

Some areas of relative consensus (which we fully support) among these institutions include:

  • designing AI with an eye to societal impact
  • testing AI extensively before release
  • using AI transparently
  • monitoring AI rigorously after release
  • fostering workforce training and retraining
  • protecting data privacy
  • defining standards for the provenance, use, and security of data sets
  • establishing tools and standards for auditing algorithms

With any new technology (and many old ones too), the golden rule we follow is to do more than compliance requirements demand. Regulators and laws often lag innovation. Organizations that don’t wait for policymakers to issue orders, but instead voluntarily use new technology responsibly, will reduce risks, improve ROI, and strengthen their brands.

Implications

New structures for responsible AI

As organizations face pressure to design, build, and deploy AI systems that deserve trust and inspire it, many will establish teams and processes to look for bias in data and models and closely monitor ways malicious actors could “trick” algorithms. Governance boards for AI may also be appropriate for many enterprises.
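
What such a review process might check is easiest to see in miniature. Below is a minimal, illustrative sketch, not PwC's methodology: it assumes a binary classifier whose predictions, true labels, and a hypothetical protected attribute (`group`) have already been collected, and it computes two commonly tracked fairness gaps, the demographic parity difference and the equal-opportunity (true-positive-rate) difference, that a bias-review team might monitor before and after release.

```python
# Illustrative only: two simple fairness gaps a bias-review team might monitor.
# Assumes binary labels/predictions and one protected attribute per record.
from collections import defaultdict

def fairness_gaps(records):
    """records: iterable of dicts with 'group', 'label' (0/1), and 'pred' (0/1).
    Returns (demographic_parity_gap, equal_opportunity_gap) across groups."""
    pos = defaultdict(lambda: [0, 0])  # group -> [predicted positives, total examples]
    tpr = defaultdict(lambda: [0, 0])  # group -> [true positives, actual positives]
    for r in records:
        g = r["group"]
        pos[g][0] += r["pred"]
        pos[g][1] += 1
        if r["label"] == 1:
            tpr[g][0] += r["pred"]
            tpr[g][1] += 1
    rates = [p / n for p, n in pos.values() if n]
    tprs = [tp / n for tp, n in tpr.values() if n]
    return max(rates) - min(rates), max(tprs) - min(tprs)

# Hypothetical usage: flag the model for human review if either gap is large.
sample = [
    {"group": "A", "label": 1, "pred": 1},
    {"group": "A", "label": 0, "pred": 0},
    {"group": "B", "label": 1, "pred": 0},
    {"group": "B", "label": 0, "pred": 1},
]
dp_gap, eo_gap = fairness_gaps(sample)
print(f"demographic parity gap: {dp_gap:.2f}, equal opportunity gap: {eo_gap:.2f}")
```

In practice, a governance board would pair metrics like these with thresholds, documentation, and escalation paths rather than relying on any single number.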


Public-private partnerships and public-citizen partnerships

One of the best ways to use AI responsibly is for public- and private-sector institutions to collaborate, especially when it comes to AI's societal impact. Likewise, as more governments explore the use of AI to deliver services efficiently, they're engaging citizens in the process. In the UK, for example, the RSA (Royal Society for the Encouragement of Arts, Manufactures and Commerce) is conducting a series of citizens' juries on the ethical use of AI in criminal justice and democratic debate.


Self-regulatory organizations to facilitate responsible innovation

Since regulators may scramble to keep up, and self-regulation by individual companies has its limits, self-regulatory organizations (SROs) may take the lead on responsible AI. An SRO would bring users of AI together around agreed principles, then oversee and enforce compliance, levy fines as needed, and refer violations to regulators. The model has worked in other industries, and it may well do the same for AI and other technologies.

