
This is the second in a series of articles focused on how Responsible AI is enhancing risk functions to deliver value and AI innovation.
As AI rapidly becomes integral to their operations, many organizations remain focused on their own efforts — possibly overlooking how extensively their vendors, subcontractors and service providers are using the technology. Third parties are using AI across cloud platforms, SaaS tools and outsourced services to improve performance, automate decisions and provide added value.
Traditional tools for managing vendors weren’t built to address the challenges that AI can raise, such as questions about model training, bias mitigation or data lineage controls. Without updated controls and AI-specific visibility into these vendors, enterprises risk falling out of step with emerging regulations and stakeholder expectations.
Third-party risk management (TPRM) functions now face a dual challenge: keeping up with the pace of AI growth across the vendor landscape while managing the integrity, security and compliance of these third-party relationships.
The stakes are high. AI used by third parties can involve sensitive data, automate decisions that have wide-ranging impacts and introduce dependencies that may be difficult to audit or govern. But the possible benefits are also great, as third parties use AI to help bring new capabilities and efficiencies to the organization.
To manage this new class of risk, organizations should go beyond checkbox diligence. This means rethinking their oversight strategies, updating vendor contracts and integrating AI-oriented controls into their risk frameworks. Done well, this shift lets TPRM functions play a critical role in enabling Responsible AI adoption among vendors, helping the business adopt new technologies and foster innovation while mitigating risk exposure.
The integration of AI into third-party service delivery is prompting organizations to fundamentally rethink their TPRM practices.
Many third-party vendors, such as providers of off-the-shelf software, have begun to embed AI into their products, often without their customers’ full visibility or understanding. Service providers may also leverage AI to enhance their service delivery, again without clients’ explicit awareness. Gaining visibility into when and how these third parties are using AI is a growing challenge for enterprises.
To address this, some organizations use tools that analyze DNS traffic and web data to flag potential generative AI (GenAI) use, for example by identifying vendors linked to “.ai” domains or known AI providers. However, many enterprises still rely on manual outreach, which can create friction and delays in the onboarding process for new vendors.
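To illustrate the idea, here is a minimal sketch of domain-based flagging, assuming DNS query logs are already available as a plain list of queried domains. The watchlist, log format and function name are hypothetical, not any specific tool’s interface:

```python
# A minimal sketch, assuming DNS query logs arrive as a plain list of
# queried domain names. The watchlist below is a hypothetical example,
# not an authoritative registry of AI providers.

KNOWN_AI_PROVIDER_DOMAINS = {
    "openai.com",      # hypothetical watchlist entries
    "anthropic.com",
    "cohere.com",
}

def flag_potential_genai_use(queried_domains: list[str]) -> set[str]:
    """Return domains that hint at GenAI use: ".ai" TLDs or matches
    against a watchlist of known AI providers."""
    flagged = set()
    for raw in queried_domains:
        domain = raw.strip().lower().rstrip(".")
        on_watchlist = any(
            domain == known or domain.endswith("." + known)
            for known in KNOWN_AI_PROVIDER_DOMAINS
        )
        if domain.endswith(".ai") or on_watchlist:
            flagged.add(domain)
    return flagged

# Example: two of the three queries below would be flagged for review.
print(flag_potential_genai_use(
    ["cdn.vendor.com", "api.openai.com", "assistant.vendor.ai"]
))
```

In practice a flag like this is only a starting signal; real tooling would weigh query volume and correlate hits against the vendor inventory before triggering outreach.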
These vendors’ use of AI models may impact core service delivery, decision-making and even data governance. Yet existing oversight mechanisms, such as SOC 2 (System and Organization Controls) reports or generalized risk questionnaires, often lack the specificity needed for the organization to assess how the vendor is using AI, what data it relies on and whether adequate controls exist.
To keep pace, businesses should rethink how they identify, evaluate and monitor third-party AI use. This includes incorporating AI-specific controls into risk models and enhancing due diligence processes. It can also require revisiting contractual obligations to confirm that vendors disclose AI deployments, that they are providing adequate governance and that their AI use is aligned with the enterprise’s risk profile.
By proactively managing AI-related risks across their third-party landscape, enterprises can streamline risk assessment, reduce time-to-contract and onboard vendors faster without compromising governance.
Another way to help unlock value is to standardize on preferred, pre-vetted providers whose AI practices align with the organization’s Responsible AI standards. While total standardization may not be realistic, identifying and pre-vetting the small group of vendors that likely represents the majority of third-party AI usage can be a good investment. It can allow companies to reduce the burden of assessing multiple AI solutions and focus on integrating with strategic vendors. It can also support economies of scale in training and tooling, for example by aligning TPRM platforms with source-to-pay (S2P), contract life cycle management or vendor intelligence systems.
With strong AI governance in place, TPRM teams can move faster and with more confidence.
When TPRM shifts from being a reactive gatekeeper to a proactive enabler, organizations are likely to be better able to adopt AI-driven solutions responsibly and at scale.
Here’s how to get started in effectively managing the evolving risks and opportunities of third-party AI use:
Revisit vendor contracts to encourage Responsible AI use. Update agreements to require disclosure when vendors use AI in service delivery. Include provisions for notification and risk transparency. When it’s appropriate, create incentives for vendors to innovate responsibly with AI.
Scrutinize data usage policies. Confirm whether third parties are using your organization’s data to train AI models. Require clear documentation of data-handling practices, consent mechanisms and any limitations placed on data reuse.
Perform AI-specific due diligence and ongoing monitoring. Push vendors to provide greater transparency and evidence of holistic controls on model development, data privacy, bias mitigation and auditability. To support these efforts, consider adding AI-focused addenda to SOC 2 reports, independent attestations or other governance tools.
Enhance third-party risk-tiering frameworks. Modify risk scoring to account for AI use cases. Prioritize due diligence based on the type of AI deployed, the sensitivity of the data being used and the potential business impact of AI failures, outages or misuse; a simple tiering sketch follows this list.
Increase AI-focused inquiries during assessments. Ask targeted questions about AI model design, data sources used in training, risk controls, explainability and monitoring processes.
Track and respond to evolving regulations. Stay ahead of emerging AI governance mandates, such as the EU AI Act, and confirm that your third parties’ practices align with the relevant regional and sector-specific requirements.
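To illustrate the risk-tiering idea referenced above, here is a minimal sketch of an AI-aware scoring rubric. The factor names, point scales and tier cutoffs are hypothetical assumptions, not a standard methodology; each organization would calibrate them to its own risk framework:

```python
# A minimal sketch of AI-aware risk tiering. Factor scales, weights
# and tier cutoffs are hypothetical and would be set by each
# organization's own risk framework.

from dataclasses import dataclass

# Each factor is scored 1 (low) to 3 (high); values are illustrative.
AI_USE_TYPE = {"none": 1, "embedded_feature": 2, "automated_decisions": 3}
DATA_SENSITIVITY = {"public": 1, "internal": 2, "regulated_or_pii": 3}
BUSINESS_IMPACT = {"peripheral": 1, "supporting": 2, "core_service": 3}

@dataclass
class VendorAIProfile:
    name: str
    ai_use_type: str        # key into AI_USE_TYPE
    data_sensitivity: str   # key into DATA_SENSITIVITY
    business_impact: str    # key into BUSINESS_IMPACT

def risk_tier(vendor: VendorAIProfile) -> str:
    """Map a vendor's AI profile to a due-diligence tier."""
    score = (AI_USE_TYPE[vendor.ai_use_type]
             + DATA_SENSITIVITY[vendor.data_sensitivity]
             + BUSINESS_IMPACT[vendor.business_impact])
    if score >= 8:
        return "Tier 1: enhanced AI due diligence and continuous monitoring"
    if score >= 5:
        return "Tier 2: AI questionnaire at onboarding and renewal"
    return "Tier 3: standard review"

# Example: a vendor automating decisions on regulated data lands in Tier 1.
vendor = VendorAIProfile("ExampleCo", "automated_decisions",
                         "regulated_or_pii", "core_service")
print(vendor.name, "->", risk_tier(vendor))
```

Keeping the factors separate and additive makes the rubric auditable: reviewers can see exactly which dimension pushed a vendor into a higher tier.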
As the adoption of AI accelerates, organizations face mounting pressure to make sure that their third-party ecosystems are not only secure but also aligned with evolving standards. PwC helps organizations turn TPRM functions into proactive engines of trust and innovation by using advanced risk modeling, AI-specific controls and strategic vendor oversight. Get started today to responsibly and transparently leverage AI at scale.