Responsible AI: Now that AI is part of your business, what does it mean for audits?


Summary

  • As AI adoption accelerates across business functions, many companies are still navigating how to align usage with evolving audit expectations.
  • Auditors may assess how AI is governed, documented and controlled — especially when it’s used in SOX-relevant processes or financial reporting.
  • Building clear visibility into AI use, supported by strong governance and risk classification, helps your organization manage complexity and stay audit-ready.
  • Demonstrating the reliability of AI outputs through validation, monitoring and documentation is essential for engaging confidently with auditors and stakeholders.

This is the ninth in a series of articles focused on how Responsible AI is enhancing risk functions to deliver value and AI innovation.

AI is reshaping how business gets done. Including yours.

From finance and internal audit to IT and R&D, you’re already using AI to do more, with new levels of speed and insight. And you’re not alone. Your customers, your service providers, the teams who evaluate your controls and compliance — they’re all evolving, too.

Which raises the question: What does the growing use of AI mean for assurance?

CFOs may be asking what changes it brings to internal control over financial reporting (ICFR). Audit committees may be wondering whether their oversight is keeping pace. Internal audit teams may want to understand what new data, documentation or evidence auditors could require when assessing AI-enabled processes.

It’s a lot to get a handle on, and the absence of clear standards makes it even harder. Yet one thing’s certain: If AI is part of your process, it becomes part of the audit.

What auditors may ask for and why it matters

Preparing for audit scrutiny doesn’t have to mean slowing down your AI efforts. But it does require a sustained focus on Responsible AI practices. Whether AI is supporting decision-making, customer interactions or internal operations, your organization should be prepared to explain how it’s being used — and show how it’s governed, documented and controlled. These are critical elements in building trust with auditors and stakeholders.

Auditors may take a top-down approach, reviewing policies, procedures and frameworks that articulate how your organization manages AI risk and oversees new systems and capabilities. This can include how emerging AI risks are addressed, how risk appetite is defined and how exceptions are escalated. Roles and responsibilities tied to AI accountability may also come under review. These areas are typically evaluated and documented through a risk and control framework or equivalent methodology. You may also be asked to provide documentation as evidence that key controls have operated effectively.

If your framework requires a model risk assessment before deployment, for instance, auditors may ask not only whether the requirement exists but whether it was followed. That could include reviewer comments on model documentation, evidence of issue escalation or artifacts from explainability and bias reviews. AI governance often introduces review activities that sit outside traditional control frameworks, but if those activities are part of your program, auditors may look for evidence demonstrating how the framework is being applied in practice.

How to get audit-ready

Regardless of where your organization is in its AI implementation, here’s how you can begin building audit-ready AI practices.

Establish strong governance. Governance should cover how decisions are made about which models or tools to use, how those tools are monitored over time and how explainability and data lineage are addressed — particularly when outputs feed into financial reporting or external disclosures. Internal audit can play a key role here in evaluating emerging risks, validating control design and assessing whether governance practices are operating as intended.

Integrate AI risk into your existing risk management capabilities, including enterprise risk taxonomies and risk and control self-assessments (RCSAs). These elements help demonstrate that your organization understands the unique risks AI presents and has the processes, tools and accountability in place to assess, monitor and mitigate those risks effectively.

Upskill and align your teams. Audit readiness depends on your team’s ability to explain how AI is used and governed and why outcomes are reliable and accurate. Process owners and control operators should have the training and context to confidently engage in audit conversations so they can clearly explain what AI is doing, how it’s being monitored and why it can be trusted.

Build an inventory of AI use cases. Your organization should be ready to clearly identify where AI is being used across the business. An effective way to do this is by maintaining a complete and accurate inventory, one that connects use cases to core business processes and flags those relevant to Sarbanes-Oxley (SOX) or other regulatory domains. The key is that your organization can demonstrate a clear understanding of AI usage across processes along with the assessed risk level for each.

Building this inventory shouldn’t rely solely on self-reporting. You should also establish a process to identify potential shadow AI or overlooked areas such as embedded functionality in third-party tools, unsanctioned deployments or AI-enabled features introduced through system updates. This helps confirm that relevant use cases, including those outside formal programs, are accounted for.

While a centralized inventory is considered a leading practice, well-documented processes that clearly reflect where and how AI is being used may, in some cases, provide sufficient visibility — particularly as part of management’s responsibilities under SOX. Regardless of the approach, your organization should be able to demonstrate clear visibility into AI use, including where it’s embedded or supported by third-party systems. This visibility is essential to assessing risk and preparing for audit.

Apply a risk-based approach. Not all AI is equal. Your approach to governance — including policies, procedures and required controls — should reflect the differences in how AI is used across your organization. Consider adopting a classification approach to help standardize how AI use cases are assessed. A structured classification and metadata strategy can make ongoing governance more manageable and help assurance activities keep pace with innovation.

A document summarization use case, for example, likely presents lower risk than more complex applications, such as AI agents autonomously drafting financial reports or regulatory filings. A clear classification approach can help you more quickly identify where more robust governance, testing or validation may be required.

To support this approach, you should apply the right metadata to each use case, such as business function, regulatory context, SOX relevance and data sensitivity. Not only is this important for audit readiness, it also becomes critical as you scale AI across the organization — with hundreds or even thousands of use cases.
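To make this concrete, here’s a minimal sketch of what a structured inventory record with classification metadata might look like, assuming a simple Python representation. The field names and risk tiers below are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List


class RiskTier(Enum):
    """Hypothetical risk classification tiers for AI use cases."""
    LOW = "low"        # e.g., internal document summarization
    MEDIUM = "medium"  # e.g., forecasting inputs with human review
    HIGH = "high"      # e.g., outputs feeding financial reporting


@dataclass
class AIUseCaseRecord:
    """One entry in an AI use-case inventory (illustrative fields only)."""
    use_case_id: str
    description: str
    business_function: str          # e.g., "Finance close", "Procurement"
    business_processes: List[str]   # core processes the use case supports
    sox_relevant: bool              # flag for SOX-relevant processes
    regulatory_context: List[str]   # other regulatory domains, if any
    data_sensitivity: str           # e.g., "public", "confidential", "restricted"
    third_party_embedded: bool      # AI functionality embedded in a vendor tool
    risk_tier: RiskTier
    owner: str                      # accountable process or control owner


# Example record for a lower-risk use case
summarizer = AIUseCaseRecord(
    use_case_id="UC-001",
    description="Summarize vendor contracts for procurement review",
    business_function="Procurement",
    business_processes=["Contract review"],
    sox_relevant=False,
    regulatory_context=[],
    data_sensitivity="confidential",
    third_party_embedded=True,
    risk_tier=RiskTier.LOW,
    owner="Procurement operations lead",
)
```

A record like this also makes it straightforward to filter the inventory for SOX-relevant or high-risk use cases when planning governance testing.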

Validate AI outputs. For each use case, define how your organization confirms the reliability of AI-generated outputs. This includes identifying any additional reviews required to address risks uncovered in model design. Output validation is especially critical when AI is used in financial reporting. When your organization can demonstrate confidence in AI outcomes, discussions with external stakeholders, including auditors, tend to go more smoothly.

Validating AI outputs for audit purposes means being able to produce evidence that shows the results are reliable, reviewed and appropriately governed. Here are a few examples of exhibits that may help demonstrate audit readiness.

  • Model risk assessments or validation sign-offs
  • Reviewer annotations or approval workflows
  • Exception logs and issue escalation documentation
  • Output samples with evidence of human review
  • Monitoring reports (e.g., drift detection, alert thresholds)
  • Documentation of fallback controls or override decisions
  • Updated SOX documentation reflecting AI-enabled processes

Clearly define and document review steps and results, fallback mechanisms and expected performance thresholds. Because many AI systems produce probabilistic — rather than deterministic — outcomes (meaning they generate results based on likelihoods, not fixed rules), auditors will likely want to understand how your organization addresses that risk. That includes how requirements for testing and review are defined and applied when AI augments or replaces process or control steps.
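As a hypothetical illustration of how a documented performance threshold and fallback mechanism might operate in practice, the sketch below routes low-confidence AI outputs to human review and retains a record of each check. The threshold value and field names are assumptions, not prescribed requirements.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative threshold; in practice, thresholds are defined per use case
# and documented in the control description.
CONFIDENCE_THRESHOLD = 0.90


@dataclass
class ValidationResult:
    """Evidence record for a single AI output check (illustrative)."""
    output_id: str
    confidence: float
    passed_threshold: bool
    routed_to_human_review: bool
    checked_at: str


def validate_output(output_id: str, confidence: float) -> ValidationResult:
    """Apply the documented threshold; route low-confidence results to review."""
    passed = confidence >= CONFIDENCE_THRESHOLD
    return ValidationResult(
        output_id=output_id,
        confidence=confidence,
        passed_threshold=passed,
        routed_to_human_review=not passed,  # fallback control: human review
        checked_at=datetime.now(timezone.utc).isoformat(),
    )


# Retaining results like this can serve as evidence that the control operated.
result = validate_output("OUT-2024-0117", confidence=0.82)
print(result)
```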

When generative AI is used to create narrative disclosures or summarize financial performance, defining a fixed threshold for accuracy may not be feasible. Instead, auditors may look for evidence of controls over how prompts are constructed, how outputs are reviewed and approved, and how hallucination risks are identified and monitored.
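One way that evidence might be captured — offered here only as a sketch with illustrative file and field names — is to record each generated output alongside its prompt template, reviewer, approval decision and any hallucination issues flagged.

```python
import json
from datetime import datetime, timezone

AUDIT_LOG = "genai_review_log.jsonl"  # illustrative location for review evidence


def log_generation_review(prompt_template_id: str, output_text: str,
                          reviewer: str, approved: bool,
                          hallucination_issues: list[str]) -> None:
    """Append one review record to an audit trail (illustrative sketch)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt_template_id": prompt_template_id,  # control over prompt construction
        "output_text": output_text,
        "reviewer": reviewer,                      # evidence of human review
        "approved": approved,
        "hallucination_issues": hallucination_issues,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


# Example: a reviewer approves a draft disclosure summary with no issues found.
log_generation_review(
    prompt_template_id="PT-DISCLOSURE-SUMMARY-v3",
    output_text="Revenue increased 4% year over year, driven by ...",
    reviewer="j.doe@example.com",
    approved=True,
    hallucination_issues=[],
)
```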

Preparing for SOX compliance

Being ready for audit scrutiny is especially crucial if yours is a public company subject to Sarbanes-Oxley requirements. How confident are you in your ability to do the following?

  • Clearly articulate where and how AI is used in financial reporting. Auditors may ask this as part of planning and throughout the audit. AI used in journal entry automation, estimates, close processes or other SOX-relevant areas should be clearly flagged in your inventory and considered in your own SOX program.
  • Demonstrate how AI outcomes are validated. Show how management gains confidence in AI-generated results that feed into financial statements and regulatory outputs, including data integrity checks, review steps and exception handling.
  • Test the design and effectiveness of controls involving AI. When AI tools are part of a control, document how that control is designed, how it consistently operates and how it’s tested by management. You should also be prepared to show how the AI system or tool was developed, deployed and tested. You may also need to update risk-control matrices (RCMs) to reflect AI involvement in controls, including whether outputs are generated, reviewed or supported by AI.
  • Prepare SOX documentation. Confirm RCMs, narratives and flowcharts reflect AI usage where applicable. Overlooking this step can lead to misalignment between your team and your auditors during the audit process.
  • Equip process owners. Those responsible for key financial reporting processes should be able to explain how AI is used, what could go wrong and how the risk is mitigated through updated control practices.

From readiness to confidence

No matter how AI is deployed — automation, forecasting or embedded in third-party tools — you should be prepared to explain usage clearly and confidently during audits. Readiness means being able to show governance, documentation and controls that inspire trust from auditors and stakeholders alike.

Audit readiness isn’t a one-time exercise but an ongoing commitment. As AI capabilities evolve, so will expectations around governance and assurance. Building strong governance, inventories and validation practices today means that when auditors ask the hard questions, your organization can answer with confidence — and with a clear story about how AI is used responsibly and effectively across the enterprise.


Jennifer Kosar

AI Assurance Leader, PwC US


Rohan Sen

Principal, Data Risk and Responsible AI, PwC US


Katee Puterbaugh

Director, Cyber, Risk and Regulatory, PwC US


Jeff Sorensen

Partner, Cyber, Risk and Regulatory, PwC US


