Responsible AI and internal audit: what you need to know

24/07/25

Summary

  • Internal audit is becoming a key player in AI strategy, as independent assurance is essential to managing risk and building trust.
  • Embedding audit into the AI lifecycle enables responsible use and helps avoid governance gaps and reputational harm.
  • Responsible AI oversight enhances stakeholder confidence and strengthens the organization's governance reputation.

This is the sixth in a series of articles focused on how Responsible AI is enhancing risk functions to deliver value and AI innovation.

AI isn’t coming — it’s likely already here and it’s embedded across your enterprise. From automated underwriting and generative customer support to AI-driven financial forecasts, business leaders are racing ahead. But without the right controls, this acceleration may lead to unmitigated risk, underscoring the need for intentional and adaptive control frameworks.

Internal audit has a critical choice: Lead the charge on AI governance or scramble to catch up in the aftermath of a model failure, compliance breach or public misstep.

With a mandate for independent assurance and enterprise-wide visibility, your internal audit team is uniquely positioned to assess governance structures, advise on responsible AI practices and deployment, and evaluate control design and effectiveness.

By taking a leadership role in evaluating AI governance, internal audit not only helps the organization reduce risk and build trust — it also elevates its role as a strategic advisor in shaping responsible innovation.

How AI is rapidly changing the status quo in internal audit

Just as cybersecurity emerged as a top audit concern a decade ago, AI risk is now top of mind in audit committees and leadership discussions.

And no wonder. AI is reshaping core business processes, often faster than governance models can keep up. Organizations without clear strategies or inventories of AI use cases are especially vulnerable. Left unchecked, AI systems can:

  • Make decisions that violate privacy, fairness or anti-bias expectations
  • Evade explainability, complicating audit and compliance
  • Introduce security and IP risks through third-party models or data leakage

Boards, regulators and customers want answers, answers that internal audit can provide — if equipped with the right mandate, capabilities and frameworks.

Opportunities for internal audit to create value with Responsible AI

Beyond risk mitigation, internal audit has a unique opportunity to create value and enable transformation through responsible use of AI by shaping governance frameworks that support innovation.

AI governance acts like the brakes on a race car: Just as you can drive faster knowing you’ve got the ability to stop, organizations can innovate more rapidly knowing that guardrails are in place and that AI use cases will receive appropriate governance.

By getting involved in governance early, internal audit teams can evaluate initial frameworks, assess emerging risks and build trust in how AI is developed and used. They can offer guidance on key issues like transparency, privacy, output quality and accountability. Their perspective can help confirm that AI systems are built with strong safeguards from the start, and that responsible use stays a priority as adoption grows.

Key actions to prioritize

To see where internal audit can make a significant impact on Responsible AI, focus on the following areas.

  • Establish line of sight across the AI landscape. Teams should advocate for — and assess the completeness of — a living inventory of AI systems, models and embedded tools, whether internal or third-party. This inventory isn’t just for documentation; it forms the foundation for assessing vendor concentration risk and regulatory compliance, identifying high-impact use cases and planning future audits. Teams can collaborate with IT, data governance and procurement to confirm that AI assets are appropriately tagged and tracked across systems, applications and vendor platforms.
  • Audit AI governance structures. AI governance is often fragmented across functions. Internal audit can bring clarity by evaluating whether governance roles, escalation paths and decision rights are clearly defined and functioning as intended. Review organizational governance models to confirm oversight responsibilities for AI are documented, resourced and operational.
  • Assess the adequacy of AI risk and control frameworks. AI introduces new types of risk — from data drift and bias to explainability and misuse — that may not be addressed by standard control matrices. Audit teams should evaluate whether the organization has adapted recognized frameworks (such as NIST AI RMF, ISO 42001, or COSO) to address these risks effectively. For example, the NIST AI Risk Management Framework (AI RMF) is a flexible, voluntary guide, while ISO 42001 is a certifiable standard with formal requirements. Reviews should focus on the design and operational effectiveness of controls for high-impact models, and whether key elements — like training data, validation methods and decision logic — are appropriately governed and documented.
  • Embed internal audit into Responsible AI design. Waiting until post-deployment to assess AI is no longer viable. Internal audit should be part of the design and deployment life cycle — especially for high-risk or customer-facing models. Establish a pre-launch review protocol for high-risk models focused on transparency, intended use and fallback controls.
  • Equip your teams with AI. Explore how AI can enhance internal audit’s own effectiveness. Pilot AI tools, such as those for control testing, document summarization or anomaly detection, to streamline audit execution, expand coverage and unlock predictive insights.

Getting started

As AI becomes more deeply embedded in business operations, internal audit functions should evolve to provide timely, tech-enabled assurance.

PwC helps organizations modernize their internal audit capabilities to address AI-related risks, strengthen governance and deliver greater strategic value. Get started today and empower your teams to lead confidently in a rapidly changing digital environment.

Trust to the power of Responsible AI

Embrace AI-driven transformation while managing the risk, from strategy through execution.

Jennifer Kosar

AI Assurance Leader, PwC United States


Rohan Sen

Principal, Data Risk and Responsible AI, PwC United States


Katee Puterbaugh

Director, Cyber, Risk and Regulatory, PwC United States


Jeff Sorensen

Partner, Cyber, Risk and Regulatory, PwC United States


