Responsible AI and cybersecurity: what you need to know

Summary

  • AI adoption is accelerating, with AI agents becoming core to business operations — but they bring new cyber risks.
  • Responsible AI practices can empower cyber teams to scale security and support business agility.
  • Cyber teams are key to Responsible AI success, embedding risk control into innovation lifecycles.

5 minute read

June 12, 2025

This is the first in a series of articles focused on how Responsible AI is enhancing risk functions to deliver value and AI innovation.

AI was just a buzzword yesterday, but it has become the norm today. We've moved past simply talking about AI — the conversation has shifted to AI agents as the next frontier. Innovation in this space is accelerating, and enterprises are quickly identifying practical ways to integrate AI agents into their operations.

At the same time, there is growing concern about the new cyber threats and risks that AI agents can introduce. Standards bodies and regulators are actively working to define guidance and develop regulations to address these emerging challenges.

This lies at the heart of Responsible AI: how can organizations embrace the rapid adoption of AI and AI agents, without letting that innovation come at the cost of security and trust?

How the status quo is changing in cybersecurity

Organizations today operate in parallel worlds. Legacy systems continue to support critical operations, even as the modern environments alongside them are rapidly growing to encompass both in-house and third-party AI tools and agents.

Cybersecurity and risk teams must keep pace with fast-moving, AI-powered activity on three fronts: within the organization, from third parties and from threat actors.

Product teams are used to living on the frontier of AI capabilities, but now the entire organization is effectively developing software. AI is fueling grassroots innovation, as employees experiment with building their own solutions, sometimes giving rise to a new kind of shadow IT. Business operators are using AI to simplify activities, stay relevant and respond to demands for more impact.

Third-party tools purchased years ago can gain AI capabilities overnight, creating unexpected risks. These changes introduce new challenges for cybersecurity teams, who must now manage both sanctioned and unsanctioned AI usage across the software supply chain.
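
As a minimal illustration of what reconciling sanctioned and observed AI usage can look like, the sketch below compares two hypothetical inventories. The tool names are invented; in practice, the sanctioned list would come from a procurement or governance register and the observed list from network, endpoint or SaaS-discovery telemetry.

```python
# Hypothetical inventories; real ones would come from governance
# registers and discovery telemetry, not hardcoded sets.
sanctioned = {"copilot-enterprise", "internal-rag-assistant"}
observed = {"copilot-enterprise", "public-chatbot", "code-gen-plugin"}

shadow_ai = observed - sanctioned   # unsanctioned usage to investigate
unused = sanctioned - observed      # approved tools with no adoption yet

print(sorted(shadow_ai))  # ['code-gen-plugin', 'public-chatbot']
print(sorted(unused))     # ['internal-rag-assistant']
```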

Meanwhile, AI has democratized the threat landscape. Threat actors are leveraging AI tools to scale and automate their attacks, offering powerful capabilities on dark markets in exchange for fees (cybercrime as a service).

All of these factors require cybersecurity teams to find ways to manage risk more quickly.

The opportunities for responsible AI in cybersecurity

Cybersecurity teams have a broad mandate in the age of AI: use automation to enhance risk controls, streamline complex security tasks and introduce new capabilities that enable the organization to move faster. This echoes the paradigm shifts that accompanied earlier technology waves (e.g., shift left and cloud first).

To keep pace, cyber teams will need to integrate more closely with product teams, embedding themselves within the AI development lifecycle — as both collaborator and innovator.

Upskilling, automation and changing the risk model are also crucial opportunities:

  1. Cyber teams should stop treating highly skilled talent as the solution to every problem and instead use that talent to build AI agents that take on those activities (see the sketch after this list). AI can help by evaluating incidents, automatically classifying vulnerabilities, triaging threat intelligence notifications and qualifying vendor bulletins against potential exposures. AI-powered defenders can adapt faster than they could with traditional approaches, helping them stay ahead of evolving threats.
  2. With each automation win, the cyber team can pivot to the next set of complex manual tasks and streamline those as well. In turn, it builds leverage points from which to further help the business evaluate risk and apply changes.
  3. Instituting change management and evaluation processes with AI agents allows risk to be assessed automatically, letting the organization shift from operational checkpoints that depend on manual, legacy reviews to more automated, risk-based pathways enhanced with AI.
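
To make the first item concrete, here is a minimal, hypothetical sketch of one triage activity it mentions: qualifying a vendor bulletin against an asset inventory. The data model, field names and scoring thresholds are invented for illustration; a real triage agent would draw on live inventory and threat intelligence, and would typically layer an AI model over simple rules like these.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Asset:
    name: str
    software: str        # product the asset runs
    internet_facing: bool

@dataclass
class Bulletin:
    bulletin_id: str
    product: str
    severity: float      # vendor-reported score, 0.0-10.0

def triage(bulletin: Bulletin, inventory: List[Asset]) -> Tuple[str, str, List[str]]:
    """Map a vendor bulletin to deployed assets and qualify the exposure.
    Internet-facing assets raise the priority of an otherwise-equal finding."""
    exposed = [a for a in inventory if a.software == bulletin.product]
    if not exposed:
        return (bulletin.bulletin_id, "no exposure", [])
    # Illustrative scoring only: severity amplified by exposure surface.
    score = bulletin.severity + (2.0 if any(a.internet_facing for a in exposed) else 0.0)
    priority = "urgent" if score >= 9.0 else "routine"
    return (bulletin.bulletin_id, priority, [a.name for a in exposed])

inventory = [
    Asset("web-frontend", "nginx", internet_facing=True),
    Asset("hr-db", "postgres", internet_facing=False),
]
print(triage(Bulletin("VB-2025-001", "nginx", 8.1), inventory))
# ('VB-2025-001', 'urgent', ['web-frontend'])
```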

Key actions to prioritize

Here are some actions to start building trust, enabling experimentation and strengthening governance in the age of AI.

  • Create AI “playgrounds” for controlled experimentation: Provide guarded, walled-off alternatives to popular chatbots and other applications, a sandbox where innovation can happen without risk. Employees are going to push the boundary, so give them a place to do so responsibly.
  • Grassroots use-case discovery: As employees embrace AI-powered innovation, sometimes outside of traditional controls, risks rise as well. Employees need to understand the threat landscape, and learning what tools they wish they had creates opportunities to offer secure alternatives. Use surveys, focus groups and meeting outreach, and apply a use-case identification framework to collect suggestions continuously.
  • Examine data protection and security in light of agentic systems: Understand how AI agents may use data in novel ways, potentially exposing it to new security risks. Consider updating data access policies and technologies to monitor and control use by AI agents (a minimal sketch follows this list).
  • Deploy AI-enhanced change management: Qualify risk through AI agents that streamline the acceptance process based on control expectations and the goals of the change.
  • Review data security in AI-enhanced systems: Understand that solutions housing the organization’s data are either AI-enabled now or will be in the future. Consider the risks associated with those enhancements.
  • Update (or establish) AI governance models: Make sure your governance model reflects the most relevant cyber risks today — including risks that you might be required to mitigate due to regulations.
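
As one illustration of the data-protection action above, the following sketch shows a deny-by-default access check that scopes which data classifications an AI agent may read and logs every decision for monitoring. The roles, classifications and policy table are hypothetical, not a prescribed model.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

# Hypothetical policy: which data classifications each agent role may read.
AGENT_READ_POLICY = {
    "support-agent": {"public", "internal"},
    "finance-agent": {"public", "internal", "confidential"},
}

def agent_can_read(agent_role: str, classification: str) -> bool:
    """Deny by default and log every decision, so that unsanctioned
    agent access patterns surface in routine monitoring."""
    allowed = classification in AGENT_READ_POLICY.get(agent_role, set())
    logging.info("agent=%s classification=%s allowed=%s",
                 agent_role, classification, allowed)
    return allowed

# Usage: gate every data fetch an agent performs through the check.
print(agent_can_read("support-agent", "internal"))      # True
print(agent_can_read("support-agent", "confidential"))  # False
```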

How we can help

As AI adoption accelerates, organizations are rethinking how cyber can enable business agility through the application of Responsible AI. PwC brings a cross-functional lens to Responsible AI, helping clients align innovation with compliance and cyber resilience. Connecting technology to people and process through Responsible AI adoption can drive meaningful change today.

Rohan Sen

Principal, Data Risk and Responsible AI, PwC US

Chris Duffy

Principal, Cyber, Risk and Regulatory, PwC US

Norbert Vas

Director, Cyber, Risk and Regulatory, PwC US

Karthik Ramakrishnan

Director, Cyber, Risk and Regulatory, PwC US
