Many black boxes will open
We expect organizations to face growing pressure from end users and regulators to deploy AI that is explainable, transparent, and provable. That may require vendors to share some secrets. It may also require users of deep learning and other advanced AI to adopt new techniques that can explain previously incomprehensible AI.
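One widely used model-agnostic technique of this kind is permutation feature importance: shuffle one input feature at a time and measure how much the model's error grows, revealing which inputs actually drive an opaque model's predictions. The sketch below is illustrative; the "black box" function is a hypothetical stand-in for any trained model.

```python
import random

def black_box_model(x):
    # Hypothetical opaque model: depends strongly on x[0],
    # weakly on x[1], and not at all on x[2].
    return 3.0 * x[0] + 0.5 * x[1]

def mean_squared_error(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Importance of feature j = average error increase when column j is shuffled."""
    rng = random.Random(seed)
    baseline = mean_squared_error(y, [model(x) for x in X])
    importances = []
    for j in range(len(X[0])):
        increases = []
        for _ in range(n_repeats):
            col = [x[j] for x in X]
            rng.shuffle(col)
            X_perm = [x[:j] + [col[i]] + x[j + 1:] for i, x in enumerate(X)]
            increases.append(mean_squared_error(y, [model(x) for x in X_perm]) - baseline)
        importances.append(sum(increases) / n_repeats)
    return importances

rng = random.Random(42)
X = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
y = [black_box_model(x) for x in X]
scores = permutation_importance(black_box_model, X, y)
```

Run against this toy model, the shuffled-error scores rank the heavily weighted first feature above the second, and the unused third feature scores zero, turning an opaque prediction into a ranked account of what it relies on.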
Organizations face tradeoffs
Most AI can be made explainable, but at a cost. As with any process, documenting and explaining every step makes it slower and potentially more expensive. But opening black boxes will reduce certain risks and help establish stakeholder trust.
Enterprises need a framework for AI explainability decisions
Explainability, transparency, and provability aren’t absolutes; they exist on a scale. A framework to assess business, performance, regulatory, and reputational concerns can enable optimal decisions about where each AI use case should fall on that scale. A healthcare firm using AI to help make life-or-death decisions has different needs than a private equity fund using AI to identify potential targets for further research.
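One way such a framework could work is to score each use case on the four concerns and let the highest-risk concern set the required level of explainability. The sketch below is a hypothetical illustration; the tier names, 1-to-5 scores, and thresholds are assumptions, not a published standard.

```python
# Illustrative tiers, from least to most demanding.
EXPLAINABILITY_TIERS = [
    "post-hoc summaries acceptable",
    "feature-level explanations required",
    "fully interpretable model required",
]

def required_explainability(scores):
    """Map 1-5 risk scores on each concern to a required explainability tier.

    The single highest concern drives the requirement, since one severe
    risk (e.g., regulatory) is enough to demand stronger explainability.
    """
    worst = max(scores.values())
    if worst >= 4:
        return EXPLAINABILITY_TIERS[2]
    if worst >= 3:
        return EXPLAINABILITY_TIERS[1]
    return EXPLAINABILITY_TIERS[0]

# Hypothetical scores echoing the examples above: a healthcare decision-support
# model rates high on regulatory and reputational risk...
healthcare = {"business": 3, "performance": 4, "regulatory": 5, "reputational": 5}
# ...while a private equity deal-screening model rates lower across the board.
deal_screening = {"business": 3, "performance": 2, "regulatory": 1, "reputational": 2}

healthcare_tier = required_explainability(healthcare)
deal_tier = required_explainability(deal_screening)
```

Under these assumed scores, the healthcare use case lands on the most demanding tier while the deal-screening use case needs only feature-level explanations, making the cost of explainability proportional to the risk each use case carries.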