How do you win the game of AI telephone?

Trust: The key to unlocking Agentic AI’s potential

  • Blog
  • 6 minute read
  • November 12, 2025
Bivek Sharma

Chief AI Officer, PwC United Kingdom

Key takeaways

  • Agentic AI is going to be essential for businesses, but letting agents autonomously talk to and work with each other without guardrails introduces significant risk
  • The solution is handling oversight at the most fundamental technological level—that of code
  • This can actually be an accelerator of growth, rather than an administrative or regulatory burden

Remember, when you were little, lining up with your friends and whispering a message along to each other, one by one, to see if what came out at the end was the same as what the first person said?

Children play telephone in pretty much every country in the world (although, fittingly, it’s known by a wide variety of other names depending on who you ask). It’s also a helpful metaphor to reach for when discussing agentic AI—or, to be specific, some of the risks of agentic AI.

That’s because agents often operate like a game of telephone—they get given a piece of information, they process it, and then they pass it on to either another agent or a human, until eventually the chain ends. 

Unsurprisingly, this makes agents particularly tantalising for many leaders looking to integrate AI into their businesses. They offer a degree of flexibility and customisation that makes them fit almost any business model, in that you can dynamically swap them in and out of different areas without having to overhaul everything at the same time. What’s not to love?

On the downside, AI agents, just like children, can be unpredictable. Some of them might not just make an innocent mistake whispering into their friend’s ear—they might make up something completely different, just for the fun of the game.

AI agents will also self-learn based on the data they process, so how do you make sure you can course correct if they start veering off in directions that we, as owners and operators, don’t want them to go?

“We were in a different world a year ago—use cases were simple. Now it’s multiple agents and complex, outcome-driven transformations, and we have to anticipate the unintended consequences of stitching so many processes together.”

Bivek Sharma, Chief AI Officer, PwC United Kingdom

Video: Keeping pace with AI (6:29)

Going rogue

We recently saw an airline chatbot incorrectly promise a customer that they could retroactively apply for a bereavement fare, and an independent court ultimately ruled in the customer’s favour. The AI in that scenario didn’t really do anything “wrong,” in the sense that the agent did exactly what it was asked to do: answer customer queries. The fault lies with the designers for failing to put in guardrails that would kick in if certain promises or claims were made during a conversation.

This example features an individual agent acting unexpectedly, but when agents are chained together, potentially performing tasks based on incorrect instructions from the previous agent, then the potential for similar failures is endless: a financial product is pitched because it maximises commission, rather than a customer’s best interest; a content moderation bot crosses the line into political censorship, drawing the wrath of a media regulator; a patient’s treatment is delayed because they were incorrectly triaged when first presenting with symptoms. 

Without strong governance in place, AI agents can end up chasing the wrong goals or taking shortcuts that cause problems down the line. And what if you’re operating multi-nationally? Rules on privacy, bias, consumer protection and more change across borders, but an agent may not realise that passing an instruction on to another agent in the EU has very different consequences than in a MENA nation. The game of telephone becomes increasingly unpredictable, even potentially damaging, reputationally and commercially.

When talking to clients about developing these complex agent workflows, there are three questions I always ask: how are you testing for quality? What about latency? And what is the cost? These should be top of their agenda, for good reason. They don’t want a situation where every transaction or workflow run incurs a penalty, because even if that’s as low as, say, US$20 per incident, it quickly adds up.
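
To make those three questions concrete, here is a minimal Python sketch of what instrumenting a single workflow run might look like. The run_workflow callable and its passed_quality_checks and incidents fields are hypothetical stand-ins for illustration, not any particular product’s API.

```python
import time

def run_workflow_with_metrics(run_workflow, request, cost_per_incident_usd=20.0):
    """Run an agent workflow and record quality, latency and cost for review.

    `run_workflow` is a hypothetical callable returning a result with
    `passed_quality_checks` (bool) and `incidents` (int) attributes.
    """
    start = time.perf_counter()
    result = run_workflow(request)               # the agent chain under test
    latency_s = time.perf_counter() - start

    metrics = {
        "quality_ok": result.passed_quality_checks,             # e.g. output checked against a rubric
        "latency_s": round(latency_s, 3),                       # how long the chain took end to end
        "cost_usd": result.incidents * cost_per_incident_usd,   # small per-incident penalties add up
    }
    return result, metrics
```

Logging those three numbers for every run is what lets you see a US$20 penalty becoming a six-figure problem before it does.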

Making AI a success isn’t as simple as taking an eight-stage process, slotting in eight new agents and leaving everything else the same. Yes, you could target a simple headcount reduction in gross terms, but that does nothing to address how agentic AI needs to be handled to maximise impact and minimise risk.

Laying down the law

The solution is “governance-as-code”—quite literally, integrating governance into agentic AI at its most fundamental level: its code. Think of it like the laws of nature. There are lots of ways to make an object fly—balloons, wings, throwing really, really hard—but none of them can ignore the pull of gravity.

Checks and balances need to happen every time two agents talk to each other, within every workflow, so that bad homework is corrected before it can become the next agent’s orders. Agents should also be checking their work against compliance requirements, regulations, ethical standards, geographical context and any other essential safeguards as they go.
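
As an illustration only, a “check at every handoff” can be as simple as a wrapper that refuses to pass a message to the next agent until it clears a set of policy rules. The rule set, the handoff function and the next_agent.handle interface below are assumed names for the sketch, not a reference implementation.

```python
class PolicyViolation(Exception):
    """Raised when a message between agents breaks a governance rule."""

# Hypothetical rules: each returns an error string if the message breaks policy, else None.
POLICY_RULES = [
    lambda msg: "unauthorised financial promise" if "guaranteed refund" in msg.lower() else None,
    lambda msg: "possible personal data leak" if "passport number" in msg.lower() else None,
]

def handoff(message: str, next_agent) -> str:
    """Check a message against every policy rule before the next agent sees it."""
    violations = [err for rule in POLICY_RULES if (err := rule(message))]
    if violations:
        # Bad homework stops here instead of becoming the next agent's orders.
        raise PolicyViolation("; ".join(violations))
    return next_agent.handle(message)
```

In a multi-jurisdiction setup the rule set would differ by geography, so the EU-to-MENA handoff described above runs against the rules of the receiving jurisdiction, not the sending one.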

Models, and the datasets they learn from, need to be regularly audited, and there should be alert systems that trigger whenever an agent starts acting erratically or inappropriately, so that somebody can step in and correct the behaviour. It’s going to take humanity’s knowledge of what’s right to act as a guide, skills such as empathy, compassion and consideration, to keep AI on the right path.
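
In the same spirit, the alerting idea might look like a rolling monitor that escalates to a human once an agent’s recent failure rate crosses a threshold. The window size, threshold and notify_human hook here are illustrative assumptions, not prescribed values.

```python
from collections import deque

class DriftMonitor:
    """Flag an agent for human review when too many recent outputs fail checks."""

    def __init__(self, window: int = 50, max_failures: int = 5, notify_human=print):
        self.recent = deque(maxlen=window)   # rolling record of pass/fail results
        self.max_failures = max_failures
        self.notify_human = notify_human     # e.g. a pager or ticketing hook in practice

    def record(self, agent_name: str, passed: bool) -> None:
        """Record one check result and escalate if the agent seems to be drifting."""
        self.recent.append(passed)
        failures = self.recent.count(False)
        if failures >= self.max_failures:
            # Escalate so someone can step in and correct the behaviour.
            self.notify_human(
                f"ALERT: {agent_name} failed {failures} of the last {len(self.recent)} checks"
            )
```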

“Governance and transformation need to go hand in hand. It's not just about the upfront, it’s continually having governance built in as code on a daily basis in real time, making sure that models can deal with issues they've never faced before.”

Bivek Sharma, Chief AI Officer, PwC United Kingdom

Make governance your accelerator

It’s easy to make the mistake of not stepping back, not taking a blank piece of paper, and asking what your business could look like if you were starting fresh today with AI.

Governance is a major reason to take that step back—it’s a much easier engineering task to embed it from the start than to retrofit it later. Having positive self-learning feedback loops from the get-go also makes scaling a lot less painful, because your technology stack will grow within those parameters and won’t have to keep being hammered back into shape as your company scales.

The reality of a technology moving so quickly is that you need to make governance a constant, real-time process – so it becomes an enabler of AI transformation, not a blocker. That’s why I tell clients I see governance-as-code as an accelerator—it allows you to deploy at scale with reassurance, rather than risk.

Explore our services

Scale AI for your business

Next Tech Agenda

Contact us

Joe Atkinson

Global Chief AI Officer for the PwC Network of Firms, PwC United States

Tel: +1 215-704-0372

Matt Wood

Global and US Commercial Technology & Innovation Officer (CTIO), PwC United States
