Opening AI’s black box will become a priority

A risk from AI: we won’t understand why it does what it does. The lack of trust will hurt adoption—but solutions are coming. It’s one of PwC’s eight predictions for AI in 2018.

Might AI-powered autonomous weapons become serial killers? Could an AI system told to reduce air pollution decide that the most logical way to do so is to eliminate the human race? Such fears may make for good thrillers, but the danger is manageable.

Here’s the secret about AI that many of its proponents don’t like to mention: It’s not that smart—at least not yet. AI is getting better at pattern and image recognition, automating complex tasks, and helping humans make decisions. All that offers opportunities for enterprises that could be worth trillions of dollars.

In the past, for example, to teach an AI program chess or another game, scientists had to feed it data from as many past games as they could find. Now they simply provide the AI with the game’s rules. In a few hours it figures out on its own how to beat the world’s greatest grandmasters.

That’s extraordinary progress, with immense potential to support human decision making. Instead of playing chess, an AI program with the right rules can “play” at corporate strategy, consumer retention, or designing a new product.

But it’s still just following rules that humans have devised. With appropriate attention paid to responsible AI, we can safely harness its power.

A real risk

While AI should always be controllable, it isn’t always understandable. Many AI algorithms are beyond human comprehension, and some AI vendors will not reveal how their programs work in order to protect intellectual property. In either case, when AI produces a decision, its end users don’t know how it arrived at that decision. Its functioning is a “black box”: we can’t see inside it.

That’s not always a problem. If an ecommerce website uses mysterious algorithms to suggest a new shirt to a consumer, the risks involved are low.

But what happens when AI-powered software turns down a mortgage application for reasons that the bank can’t explain? What if AI flags a certain category of individual at airport security with no apparent justification? How about when an AI trader, for mysterious reasons, makes a leveraged bet on the stock market?

Users may not trust AI if they can’t understand how it works. Leaders may not invest in AI if they can’t see evidence of how it makes its decisions. So black-box AI may meet a wave of distrust that limits its use.

Implications

Many black boxes will open

We expect organizations to face growing pressure from end users and regulators to deploy AI that is explainable, transparent, and provable. That may require vendors to share some secrets. It may also require users of deep learning and other advanced AI to deploy new techniques that can explain previously incomprehensible AI.
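One family of such techniques probes a black-box model from the outside rather than opening it up. The sketch below illustrates permutation importance, a widely used model-agnostic method: shuffle one input feature at a time and measure how much the model’s performance degrades. The article doesn’t name a specific technique, so this is an illustrative example, not PwC’s method; all function and variable names are hypothetical.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Score each feature by how much shuffling it degrades the model.

    A large score means the model leans heavily on that feature;
    a near-zero score means the feature barely matters.
    """
    rng = np.random.default_rng(seed)
    baseline = metric(y, model(X))
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break this feature's link to the target
            drops.append(baseline - metric(y, model(Xp)))
        importances.append(np.mean(drops))
    return np.array(importances)

# Toy "black box": the target depends on feature 0 only.
X = np.random.default_rng(1).normal(size=(200, 3))
y = 2.0 * X[:, 0]
model = lambda X: 2.0 * X[:, 0]                       # stand-in for an opaque model
metric = lambda y_true, y_pred: -np.mean((y_true - y_pred) ** 2)  # negative MSE

scores = permutation_importance(model, X, y, metric)
# Feature 0 should dominate; features 1 and 2 should score near zero.
```

Even when the model itself stays opaque, output like this gives end users and regulators evidence of which inputs drove a decision.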



Organizations face tradeoffs

Most AI can be made explainable—but at a cost. As with any other process, if every step must be documented and explained, the process becomes slower and may be more expensive. But opening black boxes will reduce certain risks and help establish stakeholder trust.


Enterprises need a framework for AI explainability decisions

Explainability, transparency, and provability aren’t absolutes; they exist on a scale. A framework to assess business, performance, regulatory, and reputational concerns can enable optimal decisions about where each AI use case should fall on that scale. A healthcare firm using AI to help make life-or-death decisions has different needs than a private equity fund using AI to identify potential targets for further research.
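Such a framework can be as simple as scoring each use case on the four dimensions the text names and combining the scores into a single position on the explainability scale. The sketch below is a hypothetical illustration of that idea; the dimensions come from the article, but the weights and the toy scores are invented.

```python
def explainability_need(scores, weights=None):
    """Combine per-dimension risk scores (0 = low, 1 = high) into a
    single 0-1 need-for-explainability score. Weights are illustrative."""
    weights = weights or {"business": 0.20, "performance": 0.20,
                          "regulatory": 0.35, "reputational": 0.25}
    return sum(weights[d] * scores[d] for d in weights)

# Toy comparison mirroring the article's example: a healthcare model
# making life-or-death calls vs. a PE fund screening deal targets.
healthcare = {"business": 0.9, "performance": 0.8,
              "regulatory": 1.0, "reputational": 1.0}
screening  = {"business": 0.4, "performance": 0.3,
              "regulatory": 0.2, "reputational": 0.2}

# The healthcare use case lands much higher on the explainability scale.
assert explainability_need(healthcare) > explainability_need(screening)
```

The point of the exercise is not the exact numbers but the discipline: making the tradeoff explicit before deciding how much explainability each AI use case must carry.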


Contact us

Anand Rao

Global & US Artificial Intelligence and US Data & Analytics Leader, PwC US

Chris Curran

Chief Technologist, New Ventures, PwC US

Michael Baccala

US Assurance Innovation Leader, PwC US

Michael Shehab

US Tax Technology Process Leader, PwC US
