The rise — and risks — of agentic AI

  • Publication
  • 4 minute read
  • July 17, 2025

Agentic AI — AI designed to take autonomous action and pursue goals on behalf of users — is advancing quickly from concept to capability. These agents serve as digital teammates that can be instructed to reason across tasks, adapt over time and use external tools or APIs to complete objectives. Enterprises are already deploying them in targeted use cases such as customer support and compliance monitoring. In areas like software development and drug discovery, specialized AI agents are already helping teams work faster and get to market up to 50% quicker –– or even more.

Despite their immense potential, early deployments of agentic AI have surfaced concerns. From misinformation — like generative systems falsely linking a professor to a bribery scandal — to biased outcomes in recruiting or content moderation, agentic AI has demonstrated how easily outcomes can go off track. These events make clear that AI agents are not plug-and-play solutions — they need human-led collaboration and oversight.

Even as adoption increases, business leaders are deciding which tasks they trust AI agents to perform. In PwC’s AI Agent Survey, respondents showed greater confidence in delegating tasks like data analysis (38%), performance improvement (35%) and day-to-day collaboration with human colleagues (31%). But trust dropped sharply for higher-stakes use cases such as financial transactions (20%) or autonomous employee interactions (22%). This divergence underscores a growing need for role-specific governance and transparency to guide when — and how — AI agents are introduced into sensitive workflows across an enterprise. For example, to support secure, accountable operations, agentic AI should be assigned only the minimum privileges needed to perform its tasks — aligned to existing identity and access protocols. It will be imperative to monitor activity continuously and regularly review access to identify emerging risks or gaps.

AI agents are gaining ground, but so are threats

Advancements in agentic AI are accelerating, especially in multimodal models that handle text, image and audio inputs together. AI agents are evolving to perform multistep workflows and interact autonomously with external tools and data. This shift is expanding their utility, but it’s also expanding potential risks.

  • Attackers are crafting inputs to hijack AI behavior, overriding instructions or extracting sensitive data.
  • Bad actors are using agents to engage in phishing, malware development and fraud. 

Adversarial testing and red-teaming can help companies address these growing risks by simulating attacks that can uncover vulnerabilities. This is part of a proactive, Responsible AI stance that helps build resilience into AI systems from the start –– and builds trust and drives value. 

Without Responsible AI, companies may face real consequences: reputational damage when generative tools surface harmful outputs, operational breakdowns when flawed models disrupt business continuity, systemic bias if training data skews hiring decisions and — in rare, tragic cases — safety incidents that put lives at risk. The right governance approach helps navigate these risks — so you can act with confidence and lead with accountability.

Implementing AI agents demands proper testing and tuning to the role it is meant to fulfill. AI built to act as a customer service representative requires different safety layers than one acting as a financial advisor. Define your AI's role clearly and tailor safeguards to fit the use case.

What steps can help put the right safeguards in place?

Each company’s approach to agentic AI requires a tailored approach to produce responsible, safe and aligned outcomes. A generic AI model simply won't reflect your operational, cultural and regulatory context. This is where the need for bespoke testing and tuning becomes critical. To manage emerging oversight and risk challenges, organizations should adopt Responsible AI practices and evolve governance frameworks through a centralized, transparent approach that maintains consistency, compliance and alignment with broader digital strategies. 

  1. Tailor with representative data: AI should be trained or fine-tuned on data reflective of real use cases. A chatbot for a healthcare organization should understand medical language, while a customer service representative for a telecommunication organization assistant needs to understand network product offerings.
  2. Run pilots: Piloting with small-scale deployments can help uncover context-specific behaviors, strengths and blind spots. For instance, testing an AI assistant with a select group of users may highlight recurring misinterpretations before full rollout.
  3. Establish monitoring from day one: Track user interactions, flag anomalies and gather feedback continuously through metrics like user satisfaction, escalation rates or failure types to help improve AI iteratively.
  4. Define escalation and governance protocols: Build a clear path for intervention. If an AI agent can't answer a query or if it detects sensitive issues, it should escalate to human oversight. Assign responsibility and build an oversight framework that includes regular audits and model updates.

Trust and Safety Outlook 2025

Follow us

Required fields are marked with an asterisk(*)

Your personal information will be handled in accordance with our Privacy Statement. You can update your communication preferences at any time by clicking the unsubscribe link in a PwC email or by submitting a request as outlined in our Privacy Statement.

Hide