This blog is part of our ongoing SOC Insight series. Each piece focuses on a different area of SOC reporting and aims to answer the questions that are important to your business. Read more to learn why SOC reporting is about much more than checking a compliance box.
Many large organizations are already incorporating artificial intelligence (AI) into their business operations and processes. AI efforts take many forms, from models that generate output from basic data sets to machine-learning algorithms that provide recommendations, drive decisions and evolve over time. AI can, for instance, complete tasks that require logic and decision-making skills, such as evaluating customer behavior patterns to detect fraud and make better decisions about customer credit limits.
Indeed, in the coming years AI will become commonplace in many—if not most—areas of business, and AI programs need governance, discipline and ongoing care and maintenance to perform well, reduce risk and drive expected outcomes. And the stakes are high. When organizations don’t properly account for the significant risks that AI can present, they can wind up on the front page of the news.
For example, a single error made by an employee processing data can be caught and fixed relatively quickly. But if a model incorrectly incorporates new data into its decision-making, the resulting errors can have significant and wide-reaching impacts, and in some cases those impacts could go unnoticed until it's too late.
Given the potential severity of these risks, companies must consider how to establish and monitor AI controls both within their own organizations and over AI models obtained from third-party vendors. Leaders can start by getting a better handle on how these risks affect their business and reporting needs now, and then incorporating those risks into the scope of their SOC report.
Organizations have only just begun to think about the challenges AI can present to their business. As a result, their AI programs are likely not built with the proper controls to mitigate potential risks and address internal and external stakeholder security concerns. Standard IT processes are a good first step, but they're not enough. And since AI is always on, oversight and compliance need to be, too.
Organizations should consider seven key areas and ask themselves a set of questions as a starting point for thinking through AI-related risks.
Answering these questions is the first step to understanding the control environment around AI. Once those questions are answered, organizational leaders can take the next step toward providing transparency and building trust with stakeholders by engaging in conversations with those responsible for AI, understanding the controls, and evaluating the maturity of those controls.