We’re at a pivotal point in the AI lifecycle. Just a year ago, much of the focus was on early exploration. Now we’re production ready, and AI can be deployed across functions- moving deeper into operations and decision-making.
I often find myself having conversations with business leaders seeking guidance on how to adapt to adopt, and I’m about to share something that might surprise a few of you.
Once I’ve laid out the fundamentals—infrastructure, governance, scaling, and so on—I always emphasise how important it is to get things moving now, before the competition does. That’s when I usually hear something like this in response:
“Well, we don’t want to move too fast and cannibalise our own market. We’d prefer to do it in stealth mode, bit by bit, and then when the market changes we’ll hit the big button and activate our new model.”
Now, there are two reasons in particular why I think this is a huge mistake for any organisation to make. The first is that the urgency here is real; this isn’t a situation where you can sit back, wait to see if the first mover fails and only then rush in behind them (presumably in third, or fourth, or even worse place) if they don’t.
The second is that you cannot bring your people, and your organisation as a whole, with you so suddenly and hope to also make the most of the AI transition. People need time to learn to collaborate with new AI tools and technologies—they can’t change how they work overnight.
When explaining this to clients I always advise that “You’ve got to be your own red team.”
Red teams are hired to push, pry, and probe your weaknesses, and then report where you need to beef up your perimeter defences so that, when a real adversary approaches, you’re ready.
“Right now, the adversary isn’t just other companies—it’s obsolescence.”
Bivek Sharma,Chief AI Officer, PwC United KingdomIn practice, this means being brutally critical of everything you hold dear, because that’s what your competition will do—they’re not invested in any of your existing structures or processes, they don’t care about your longstanding company culture, and they have no reverence for your existing model. (This is especially true of startups, but existing competitors are just as capable of stepping back and starting again with a blank sheet of paper). Reinventing their whole experience with AI capabilities- removing all friction points as people interact and buy products in an entirely different way.
The questions you need to ask of your company might have some uncomfortable answers, too, so honesty is essential: How deep and/or broad is your moat? What makes you different from others? Are we going to need to fundamentally reshape job roles? Do we have to be more willing to take risks for the greater good? These are the questions that will give truly useful answers about the investments that need to be made in infrastructure—by which I mean people just as much as compute—as well as other critical issues like how to embed governance and oversight into the foundations of your new AI tools.
AI technologies are evolving rapidly, so you need a culture that can keep up, no matter the cost or the discomfort. Maturing and evolving as an organisation to get to whatever your north star is going to be—although, bad news, that’s also going to be an ever-moving target, and you’ll need to make yourself as agile as possible to keep up.
Leading this kind of transformation means accepting that “leading” will also inevitably change. We’re seeing this happen at companies across multiple sectors—hierarchies are becoming flatter, and “leadership” means more about empowering people to make their own decisions, and their own relationships, with the latest AI technologies. Roles that have existed for decades will shift—for example, a customer service agent might stop taking calls, and instead manage a team of agents.
But the greatest success stories that I’ve seen with clients we’re working with, come in places where c-suites have also accepted that they’re sailing through the same storm as their employees, with their own pre-existing roles and responsibilities up for the same kind of renegotiation. Those are the places which don’t just focus on automating individual tasks, but which foster a culture of human-AI collaboration. People remain people, even if they’re augmented with AI’s help, and they can continue to contribute their hard-won experience and expertise when designing (and even challenging) how new AI models and workflows are implemented.
There’s this kind of “dual track” approach that seems to work well for these organisations: you’ll have your business as usual ticking away, where AI is only incrementally introduced, slowly and steadily in such a way as to keep the disruption to a minimum. But, at the same time, they red team themselves by setting up a kind of startup organisation that operates within the existing one, to eventually compete with and replace the existing model as it starts to lose value.
It’s a brave thing to do, but it allows them to move with real agility—and it gives you a lot of what you need for your future target operating model on a much more accelerated timeline.
“Red teaming has to be a constant process—where weaknesses are repeatedly tested, and critical thinking is shared and encouraged.”
Bivek Sharma,Chief AI Officer, PwC United KingdomThis all has to be a constant process—where weaknesses are repeatedly tested, and critical thinking is something to be shared and encouraged. Otherwise, the pace of change will overtake you.
At the same time, it’s not an easy thing to maintain this kind of iteration indefinitely. And this is a real concern—I’m increasingly speaking with clients who say that their companies have transformation fatigue thanks to the churn of new system implementation after system implementation, over years and years. Figuring out how to combat that fatigue is going to be key for many organisations.
But the need remains. You need to keep stress testing yourself—your customer service, your culture and your technology—like a red team, because those threats will still be there from elsewhere. That includes regulatory bodies, too. The pace of change being driven by AI adoption is bringing with it an equally urgent need to keep up with new governance and oversight. Data security, training data bias, decision transparency, and similar issues can all stem from bad design or poorly considered project objectives.
Governance and transformation have to go hand-in-hand- embedded from day one, operated in real time and tested in real world scenarios. Simulate adversarial scenarios by acting like a red team that intentionally probes AI systems for vulnerabilities—such as data poisoning or cyber attacks. Stress-test agents by exploring edge cases, failure modes, and unexpected inputs to see how they behave under pressure, in challenging conditions- helping to uncover real risks before they become real issues.
It’s about flipping the traditional perspective—about learning to treat failures as a sign that the future is full of possibility for improvement, not empty of success in the present.