
Artificial intelligence: Does your organization know what the risks are?


Summary

  • Artificial intelligence is becoming woven into the fabric of business.
  • But leaders may not fully understand the risks of AI that’s designed poorly or doesn’t work as intended.
  • Implementing proper controls and standardizing processes are a good start.
  • Asking the right questions in seven key areas can help you more confidently manage AI risk.

Many large organizations are already incorporating artificial intelligence (AI) into their business operations and processes. AI efforts can take many forms, from models that use basic data sets to generate output to machine-learning algorithms that provide recommendations and drive decisions. AI can complete tasks that call for logic and judgment, such as evaluating customer behavior patterns to detect fraud or to set customer credit limits more intelligently, and those capabilities can evolve over time.

Indeed, in the coming years AI will become commonplace in many, if not most, areas of business. To perform well, reduce risk and drive expected outcomes, AI programs need governance, discipline and ongoing care and maintenance. The stakes are high: when organizations don’t properly account for the significant risks that AI can present, they can wind up on the front page of the news.

For example, a single error made by an employee processing data can usually be caught and fixed quickly. But if a model incorrectly incorporates new data and folds it into its automated decision-making, the resulting operating process can have significant and wide-reaching impacts, and in some cases those impacts could go unnoticed until it’s too late.

Given the potential severity of the risks, companies must consider how to establish and monitor AI controls, both for systems built within their own organizations and for AI models obtained from third-party vendors, which may themselves be faulty. Leaders can start by getting a better handle on how these risks affect their business and reporting needs now, and then incorporating these risks into the scope of their System and Organization Controls (SOC) report.

Managing AI risk with confidence

Organizations have only just begun to think about the challenges AI can present to their business. As a result, their AI programs are likely not built with the proper controls to mitigate potential risks and address internal and external stakeholders’ security concerns. Standard IT processes are a good first step, but they’re not enough. And since AI is always on, oversight and compliance need to be too.

Organizations should consider seven key areas and ask themselves a set of questions as a starting point in thinking through AI-related risks.


Data

AI is nothing without good data. Do you have the right governance mechanisms over inputs and controls around data set selection?
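
To make this concrete, here is one way a control around data set selection might look in code. It’s a minimal sketch in Python, not a prescribed method, and every name in it (APPROVED_SCHEMA, validate_training_set, the sample fields) is hypothetical: a gate that rejects a candidate training batch containing unapproved fields, missing fields or mistyped values before the data ever reaches a model.

```python
# A minimal sketch of a data-set selection control, standard library only.
# APPROVED_SCHEMA and validate_training_set are hypothetical illustrations,
# not part of any specific AI platform.

APPROVED_SCHEMA = {
    "customer_id": str,
    "transaction_amount": float,
    "transaction_count": int,
}

def validate_training_set(rows: list[dict]) -> list[str]:
    """Return a list of control findings; an empty list means the
    batch passes this gate and may be released for training."""
    findings = []
    if not rows:
        findings.append("data set is empty")
        return findings
    for i, row in enumerate(rows):
        # Control 1: only approved fields may enter the model.
        unexpected = set(row) - set(APPROVED_SCHEMA)
        if unexpected:
            findings.append(f"row {i}: unapproved fields {sorted(unexpected)}")
        # Control 2: every approved field must be present and correctly typed.
        for field, expected_type in APPROVED_SCHEMA.items():
            if field not in row:
                findings.append(f"row {i}: missing field '{field}'")
            elif not isinstance(row[field], expected_type):
                findings.append(f"row {i}: '{field}' has wrong type")
    return findings

# Example: this batch would be rejected and logged for review.
batch = [{"customer_id": "C001", "transaction_amount": "12.50"}]
for finding in validate_training_set(batch):
    print(finding)
```

In a mature program, findings from a gate like this would feed the same ticketing and review workflow as any other control exception.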


Models and algorithms

How easily could you explain your AI effort to stakeholders? Is there well-defined oversight of how AI models are developed? Is the process transparent? Has bias been accounted for?


Outputs and decisions

Are AI model outputs continually reviewed to ensure accuracy and alignment with the model’s initial business purpose?
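
One way to put continual output review into practice is to score each batch of model decisions against labeled outcomes and raise an alert when performance drops below a floor agreed with the business owner. The sketch below illustrates the idea in Python; the threshold, the function name review_outputs and the sample data are all assumptions for illustration.

```python
# A minimal sketch of an output-review control: compare a batch of the
# model's decisions against labeled outcomes and flag degradation.
# ACCURACY_FLOOR and review_outputs are illustrative, not a vendor API.

ACCURACY_FLOOR = 0.95  # agreed with the business owner at model approval

def review_outputs(predictions: list[bool], outcomes: list[bool]) -> dict:
    """Score a batch of decisions and report whether the model still
    meets its approved performance floor."""
    assert len(predictions) == len(outcomes), "mismatched review batch"
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    accuracy = correct / len(predictions)
    return {"accuracy": accuracy, "within_tolerance": accuracy >= ACCURACY_FLOOR}

report = review_outputs(
    predictions=[True, True, False, True],
    outcomes=[True, False, False, True],
)
if not report["within_tolerance"]:
    # In practice this would open a ticket with the model owner,
    # not just print to the console.
    print(f"ALERT: accuracy {report['accuracy']:.0%} below floor")
```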


Governance, oversight and monitoring

What oversight bodies are involved in managing your AI program? What monitoring efforts are in place? Have you considered continuous compliance?


Machine learning

As models learn and improve, does management have assurance on machine-learning processes, including the data utilized?
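
Assurance over machine-learning processes often starts with lineage: being able to say, for any version of a model, exactly which data it learned from. Below is a minimal sketch of that idea using only the Python standard library; the record layout and names such as record_training_run are hypothetical illustrations.

```python
# A minimal sketch of ML assurance over retraining inputs: fingerprint
# each data set before it is used, so every model version can be traced
# back to exactly the data it learned from.

import hashlib
import json
from datetime import datetime, timezone

def fingerprint_dataset(rows: list[dict]) -> str:
    """Deterministic SHA-256 digest of a training data set."""
    canonical = json.dumps(rows, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def record_training_run(model_version: str, rows: list[dict]) -> dict:
    """Append-only lineage record that can be reviewed during audits."""
    return {
        "model_version": model_version,
        "dataset_sha256": fingerprint_dataset(rows),
        "row_count": len(rows),
        "trained_at": datetime.now(timezone.utc).isoformat(),
    }

print(record_training_run("fraud-model-2.3", [{"customer_id": "C001"}]))
```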


Business impacts and reporting

Data models affect financial reporting, operational decisions and customer touchpoints. Are you ready for how AI will change current internal processes and systems?


Information technology general controls

Do the technology assets supporting your AI programs have the proper controls around logical security, program change, computer operations and program development?
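
As a small illustration of one such control, the sketch below checks program-change integrity: it verifies that the model artifact running in production is byte-for-byte the build that went through change approval. It’s a simplified, hypothetical example; a real implementation would tie the digest comparison to the deployment pipeline and an approval record.

```python
# A minimal sketch of a program-change control over a deployed model
# artifact, standard library only. The file name and the "approval"
# step are hypothetical stand-ins for a real change process.

import hashlib
import tempfile
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Digest of an artifact exactly as it sits on disk."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Stand-in "model artifact" written to a temporary directory.
with tempfile.TemporaryDirectory() as tmp:
    artifact = Path(tmp) / "model.bin"
    artifact.write_bytes(b"model weights v2.3")

    # Captured once, at change approval, and stored with the approval record.
    approved_digest = file_sha256(artifact)

    # Re-run on a schedule in production: any unapproved change to the
    # artifact (a silent swap, a hotfix outside change control) fails here.
    if file_sha256(artifact) == approved_digest:
        print("deployed artifact matches the approved build")
    else:
        print("ALERT: deployed artifact differs from the approved build")
```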

Answering these questions is the first step to understanding the control environment around AI. From there, organizational leaders can take the next step toward providing transparency and building trust with stakeholders: engaging with those responsible for AI, understanding the controls in place, and evaluating the maturity of those controls.


Todd Bialick

Digital Assurance and Transparency Leader, PwC US


