TEDAI Vienna 2025

Rolling out agentic frameworks to unlock human potential

  • Video
  • 3 minute read
  • February 18, 2026
What needs to be done to successfully roll out agentic frameworks across organisations, and how can they maximise human potential?
Video 11/12/25

How can you unlock potential in the agentic era?


Transcript

Hello everybody, thank you so much for joining us. We should probably kick this off with a round of introductions.

My name is Jennifer Strong. I'm a journalist based in New York City. I've been writing about AI for nearly 10 years, and about tech for about 20 years now. I've worked for the Wall Street Journal, MIT Technology Review and others, and I now host a show for public media in the US. Des, do you want to go down the line?

Sure. I'm Des Traynor, a co-founder of a company called Intercom, and our main thing at the moment is Fin, an AI customer service agent for internet businesses.

Hi, Bivek Sharma. I'm the chief AI officer for PwC for EMEA, so I look at everything from internal change to what we're doing with our clients as well.

Hi everyone. My name is Mengchen Dong. I'm a behavioural scientist, and I work in general on human-AI interaction and public attitudes towards AI in organisational, societal and cross-cultural contexts.

All right, so it's 2025, and if you're talking about AI, you're probably talking about AI agents. People are very excited. People are also overwhelmed, especially teams that have been running AI integrations for years now, and here we go, a little new thing, right? More work to do. There's also a lot of confusion about what an AI agent even is. So maybe that's where we should start, just so people know where we're coming from. What's an AI agent to you?

What it literally is, is a programme that can act of its own accord to deal with certain roles and responsibilities that occur in the course of a business. Our main definition is that it needs independent ability, agency basically, hence the name. It needs control: the business should be able to say what it does. And it needs reliability: put in the same situation, it should behave in a consistent way. This trade-off of agency, control and reliability is probably at the heart of most of the debates around agents.

Yeah, I agree, it's exactly what Des said. I think where people sometimes get confused is that they conflate chatbots with agents, and that's a clear delineation. Agents, like you said, are taking actions, and there's a level of dynamism around how they take those actions; it's not a deterministic, linear process. They're actually doing something as well. And to your point, where the debate gets really interesting is that what you could do with agents is almost limitless, but the governance and the design you need to put around them to get to the outcome you need, that's really where the topic has now got to: how do you move from proof of concept to production-ready agentic frameworks?

Absolutely. And you?

Yeah, I'm very afraid of giving a definition, because as researchers you go to the literature and check what's published there, right? But for AI agents there are a few keywords for me. They have autonomy, so sometimes they make decisions on their own, without always checking with humans. But there are also pitfalls in terms of who should be accountable for the decisions AI agents make.

But I think

even for people who are charged with building these agents, how we think about them, and whether or not we can have a normal conversation about them, often comes down to lingo, so defining terms as we go would be helpful. What are we actually talking about when we talk about an agentic framework? What are we actually talking about when we start trying to construct these agents?

So, look, I always think of the outcome, the workflow, the process: what are you actually trying to solve? And it's rarely going to be one agent. Take an example: maybe there's a task within procurement, procure-to-pay, and I need to build the entire workflow. There are lots and lots of touch points. So when you're building an agent, let's say it's the mothership, sitting behind it might be 80 individual agents doing certain tasks, and you then have to orchestrate them. You can imagine how many things could go wrong with that if they're poorly designed. So it's outcome-driven: think of the outcome, think of the workflow, the process you're trying to solve, and then engineer and architect the agents sitting behind it. Whether that's one agent that controls them all or clustered agents then becomes a design piece.

As much fun as it might be to think of, you know, somebody with a headpiece and a black suit in the back. Among the trends I feel I see most is that because every group is building these things, it's maybe the first time for their company or their group. We're not quite making it up as we go, but there is a little bit of that, and there's the figuring out of how these things are going to work together and collaborate, with us and with each other, right? Are there particular trends any of you are watching right now around these agentic frameworks and workflows that you find interesting?

When it comes to the adoption

pattern, I find a lot of businesses are still a bit, justifiably, wary or hesitant, and it's probably the right attitude. So what often works is picking something smaller. Like you mentioned, procurement might be an 80-step thing; customer support is maybe a 35-step thing. A lot of people think this is just one task: you ping the server, the server pings back the answer. It's not that. You really decompose it: you decompose customer support into a greeting, conversation, analysis, escalation, whatever, right? So one thing that does work for businesses looking to dip their toe in, if they're AI-curious but not ready to commit, is to pick an area, a small, low-risk environment, to deploy in. If it makes a mistake there, say on some small internal tool, it's a lot less risky, and you'll build confidence. So find a smaller, low-risk area to experiment in, give yourself a bit of a sandbox, before you go for a much bigger rollout of a full agentic framework, which would be like saying let's do all the jobs of customer service. That's obviously the scarier thing to do upfront, of course.
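The decomposition idea above, support as a pipeline of stages rather than one ping-the-server call, can be sketched in a few functions. The stage names (greeting, conversation, analysis, escalation) follow the example given in the discussion; everything else, including the keyword-based routing and the rule that only low-stakes topics are auto-handled (the "low-risk sandbox" idea), is invented for illustration.

```python
# Each stage takes the running context dict and enriches it.

def greeting(ctx):
    ctx["reply"] = "Hi! Let me look into that."
    return ctx

def conversation(ctx):
    # Toy topic classifier: real systems would use an LLM or classifier here.
    ctx["topic"] = "billing" if "charge" in ctx["message"] else "general"
    return ctx

def analysis(ctx):
    # Sandbox rule: only auto-handle low-stakes topics.
    ctx["auto_handle"] = ctx["topic"] == "general"
    return ctx

def escalation(ctx):
    ctx["route"] = "agent_answers" if ctx["auto_handle"] else "human_review"
    return ctx

def handle(message):
    """Run one support request through the staged pipeline."""
    ctx = {"message": message}
    for stage in (greeting, conversation, analysis, escalation):
        ctx = stage(ctx)
    return ctx
```

The point of the shape, not the toy logic, is that each stage is a separately testable unit, so a business can put its sandbox boundary at any stage (for example, auto-handle nothing at first, then widen `analysis` as confidence grows).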

Yeah, for me, thinking about agents, I would also think about the different levels of autonomy they have, right? Does an agent take over a whole domain of your tasks, or does it just help you with some smaller task and get back to you frequently to check whether this is the right approach? So that's how I would think about agents: in terms of different levels of autonomy.

Okay, so with all of that run-up and all of

this said, how do we maximise human potential in the age of agentic AI? Anybody want to take a first stab at that? I was going to look at you, because I thought you...

Yeah, I can, maybe I'll jump in first. I would like to think about different categories of human potential, right? The first category is the mundane, repetitive tasks people are doing. They don't like them, but they have to do them. For that kind of task, releasing human potential is about freeing up people's time so they can focus on the more creative parts, delegating some repetitive tasks to the AI agent. The other part is more about the subjective or creative aspect of humans: how can we use AI agents as thought partners, by consistently communicating our thoughts and asking them to expand the scope of our existing knowledge? So I think those are two aspects, for different kinds of tasks.

So I'd agree with that, and

to add to it: the interesting thing right now, and I think this is where the starting point is when we're starting to build these out, is that I see a lot of people making what I think is the mistake of the moment. They look at their processes, at what they're doing or how they're engaging with customers, and say, "Right, I just need to take that process and add agents into it. I've got these eight steps, so I've got to replicate those eight steps; I've got to get AI to re-engineer those eight steps." There are a couple of things they're missing when they do that. The first is to ask: with AI and this new capability, how would you redesign that from scratch? Is it really the same process? The second is: what's the outcome? Is it pure productivity and time out, or is there some additional value you can now start adding? Your design decisions around how you build that agent then completely change as well. And we keep seeing this again and again, and sometimes I do worry about it a bit: there's such a huge drive around productivity and cost out. I do worry that this all becomes "we've got to take out 30% headcount in these functions". The way you should be looking at it is: what does the future of that function or that front office look like, and how is it going to evolve? You might not need those resources to do the tasks that are quite rote, to your point, right now, but do they then need to evolve, and what's the extra value they then bring into the firm, whether it's a back office role or a front office role? So that design piece, that top-down look, and giving it a bit of time, reflection and consideration, that's super important.

I'm so glad you bring that up, because

from where I sit writing about this tech, there was this moment two or three years ago, leading into the moment we're in now, when we were very excited: okay, if we have this free time, where are the moonshots? What can we chase? What are the things we can now do that we couldn't do before? And instead it's like we're years into cost cutting: look, we saved an extra eight seconds on this process. Woohoo. You know, it's almost like when

electricity was invented, if we'd said, great, we can replace all our candles. Well, yes, but there's a lot more we can do. And in truth, if you look at what happened because of electricity, the reason we have late-night shopping, the reason we have a nine-to-five working week, whether that's good or bad, all of those things are downstream of the invention of electricity. And I just think AI is more like that: there's more upside. Your point was really solid: if we can strip away all of the undifferentiated heavy lifting that humans do by ultimately handing it over to AI, there's a great opportunity in finding out what we can do next with all the excess human capital we have. The least creative, least ambitious thing you could do is just save the money. I think a lot of companies will see the opportunity and say, all right, in our case we've now got support people; what can we have them do? We can have them engage customers: customer success, higher-touch stuff. And then there's also: where can we apply the technology where it was previously impossible? Can you provide functions or services to your customers that were previously not feasible, because they cost too much or whatever? There are a lot more things you can do because of AI. It's not just a case of things you have to stop doing because of AI.

Of course, and we'll dig a lot more into that when we start talking about the work of the future, jobs and so on,

where we're going with some of this. But did anybody else want to chime in?

I do have a question I've been really curious about for a long time; I just haven't had the right audience to ask. I think there is some discrepancy, or controversy, in the productivity effect of agentic AI, or generative AI in general. There is research showing that using generative AI can improve productivity, for example by 40% for knowledge workers. But on the other hand, there is MIT research showing that 95% of organisations that try to pilot agentic AI frameworks, or generative AI, in their organisations fail. So how can we reconcile this controversy? What's going wrong?

Absolutely. Well, from where I sit as an editor: if you edit an article or a book, you used to trust certain things, like that the writer knows the names of his children, right? If there's a pet in the book, probably the pet's real, things like that. And now, instead, you actually fact-check every word, because if you don't, you'll wind up like the book about Gary Marcus that says he's a famous AI researcher in Canada and the United States, and that he has a pet chicken named Henrietta; he's a little quirky, so it's possible, but he actually doesn't. And now this book is out in the world, baking into models again and again, forever more. We're going to have Gary Marcus's chicken, I guess. But what does that mean for medical and other things? So: does AI make me more effective or more efficient in some parts of my job, but highly inefficient in other parts of it?

Absolutely.

I do think so. The one thing I would say is that the MIT report, by the way, was interesting, but I wasn't shocked by it. It's what I expected, because there's a maturity curve to the technology, and here's where it is. What's interesting is that the technology itself is super sophisticated. If I look at where the LLMs are right now, you can argue about model evals, and at the periphery some will slightly outperform others in some respects, but they are super sophisticated right now. The MIT report, I think, is a reflection of where we are in the adoption of the technology. The technology has almost overtaken the use cases and the potential, and the rest is now catching up. The infrastructure is coming into place, and when I say infrastructure I'm not just talking about data centres and GPU clusters and all the rest of it: the hyperscalers are now building the infrastructure to allow us to roll out agentic frameworks at scale. You've got to deal with things like identity, MCP services and connections, and all the rest, in a safe environment at scale, and that's been quite hard to do up until now. And then you've got the workforce still trying to work out where to focus: front office, back office, workforce adoption. So we're still fairly early on in the story.

So when the MIT report came out, there had been a huge amount of experimentation. There are some businesses, a bit like what you're doing, Des, where you've got a few platforms that have got really sophisticated, domain-focused, and solve those areas. But on the whole there are a lot of PoC builds and experimentation, so I wouldn't really have expected the return on investment to come in yet. Where we are right now, though, is a tipping point, to some extent: a lot of the discussions we're having are now moving into productionising. And organisations have still got to land, in a lot of cases, what the ROI is around some of these use cases. What is the KPI? Is it necessarily about cost out, or is it top line, or is it more engagement? All of those things have got to be worked out right now. So I would say there's a maturity piece that we have to go through.

Well, throughout history you see the two parts of this: it's waiting for the technology to catch up, and then it's also waiting for people to catch up, right? We needed to invent cars and drive them around a bit before we realised that seat belts are a fantastic idea, you know? So there are various steps we'll take along the road here.

Did you want to say something?

I was just going to say that the MIT report about the failure of all these pilots didn't surprise me, mostly because you have to baseline it against the fact that most things big companies try to do fail anyway. There's a baseline there that is higher than we would want to admit, in the same way that when people say, oh my god, 90% of AI startups failed, well, 90% of all startups fail. So there's that piece. But I do think we're in this Goldilocks moment where we don't know how ambitious to be with AI. Some people are extremely unambitious: they try one tiny thing once and call the job done. Other people go for the whole "we're going to reinvent everything in one step", with one single GPT call or something like that. And I think we're slowly starting to find our taste, our intuitive understanding of what is AI-addressable and what is not yet AI-addressable. There will be a lot more failures along the way until people get a good grip of what works, in the same way there were a lot of things we tried to do online in the dot-com era. We tried to sell pet food online and it didn't work, right? Because it's really heavy and it was really expensive to ship. Today it works, but it just didn't work back then. And I think we're starting to work out what stuff is AI-attackable and what stuff you're better off just leaving. Give it a couple of years; let's see how the easy stuff shakes out first.

Yeah, if I could add on

to that point: I agree with that, and I think the analogy with dot-com is really interesting, because people sometimes conflate two things. You hear a lot of people talking about AI hype using the dot-com comparison, and I think they're fundamentally misunderstanding it. What happened in the dot-com era was the overvaluation of bad business models. If the hypothesis was whether e-commerce was going to rule the world, well, that's proven out, and it's more pervasive than any of us imagined. I'd say AI is similar right now: the technology is moving at speed, it can solve huge amounts of problems, probably more than we've imagined so far, and we've got to allow ourselves to evolve into it, to work out where we're going to land. Will there be a market correction? Have some businesses been overvalued? We don't know. But the good fundamentals, those businesses that have really worked out their domain and really solved the problems, they'll stand. I do think we're a lot less exposed than that time. But it is important, because some of the lazy commentary around this automatically draws parallels to the technology not delivering what it's supposed to do. That's just not the case.

Agreed. There's been a lot

of talk lately about shadow AI, and Des brought this up when we spoke last. Do you just want to explain what that is? Because some of us may not know what shadow AI is, or we might know the thing but not the term.

Yeah, shadow AI: people using AI but not talking about it, or, equally, companies doing the same.

And you've worked a lot on this; I was going to go to you next, but yeah.

Yeah. I think, also, when you were just talking about people

trying to catch up, I want to bring up this point: I think it's not just a question of whether people are willing to learn or not. It's also about whether organisations create a culture that encourages people to adopt these kinds of innovative tools, regardless of whether they make mistakes. With the shadow AI phenomenon, we basically see two aspects. The first is that people still have a negative social perception of others using AI. Think about applying for a job, for example: if HR see that you used AI to craft your CV, they may see you as not making enough effort, or not authentic enough, so they would penalise you for using this kind of productivity tool. And on the other hand, when people know that others would penalise them for using AI, they turn to shadow AI. So these are confrontational dynamics. Okay, I can develop tools to detect others' use of AI, and that's good because it's transparent and ethical; but if people know you would penalise them for using AI, they'll develop anti-detection tools, to use AI in ways that won't be discovered. This can go on forever. So I think the solution is creating a positive organisational culture, so that people are encouraged to

use AI.

So shadow AI is using AI without necessarily owning up to it. And it's not just people who do this; companies do this too, right? Whether or not you admit that you're using AI for your customer service in the back end, right? The transparency...

For sure, definitely. We have a lot of prospects who say "we'd never use AI", and then you look at what actually happens behind the scenes, and it's somebody copying and pasting stuff out of it, and you're just like, okay, cool. But more generally, I think one thing organisations can do is make it clear to people that it is safe to use AI if you work at the company. In Intercom, for example, if you go to a project doc that was generated by ChatGPT, at the top somebody will say: I generated this with ChatGPT; here's the prompt I used; here's the input. Because we're trying to do two things. We're trying to normalise the idea that AI is here, in the same way we had to normalise looking up the answer to your homework on Google, which in 2002 was a shocking thing to do too, right? So we're trying to make it clear that we know where the world's going. But also, getting good at using AI is just a fundamental requirement for every business, and I think it's really useful that people can learn from each other's prompts and see how others actually use AI in effective ways, because there isn't yet great literature out there that explains how to get good at AI. At the same time, all of your companies need to get really good at AI, really quickly.
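The disclosure convention described above, declaring the tool and the prompt at the top of an AI-assisted document, can be reduced to a tiny template. The practice is what's described in the discussion; the function name and the exact header format here are invented for illustration.

```python
def ai_provenance_header(tool: str, prompt: str) -> str:
    """Render a disclosure header to paste at the top of an AI-assisted doc."""
    return (
        f"Generated with {tool}.\n"
        f"Prompt used: {prompt}\n"
    )


header = ai_provenance_header(
    "ChatGPT",
    "Draft a one-page project brief for the launch",
)
```

The value is less in the code than in the norm it encodes: the header makes AI use visible by default, and the recorded prompt lets colleagues learn from each other's prompting, the two goals named above.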

Yeah. And it ties back to the point that none of this is static, right? Just because, when we call customer service and get a phone tree, we're hitting zero for an operator, doesn't mean we won't buy groceries later and go straight to the kiosk where we can self-checkout and maybe not have to interact, right? So, Mengchen, I would love to hear more about your work, and how you think about these interactions and the ways in which people are evolving alongside machines.

Sure. People... what? Oh, how we're evolving.

How is our thinking changing, in terms of the research you're doing about human and machine interactions? What are some of those factors?

Yes. So I think

human-AI interaction can be divided into a few stages, right? At the first stage, when I have this product, I need to decide whether I want to use it. That's the first step. Then, afterwards, I see whether it's good for me or not in terms of performance, and then I decide whether I want to further develop the relationship, or keep using it in my work or life, right? At the first stage, what we found is that just giving people the product already improves their attitude towards the AI, compared with imagining the interaction while having no exposure to the product at all. So first, give people a shot; it really helps them see how beneficial it is. At the second stage, and this is not our research, but others are working on it: AI companionship, for example, is a very hot topic, and there is research showing that people usually approach general-purpose AI with work-related tasks, and when they see these products are really good at helping with everyday issues, they build a relationship with them and then turn to more general purposes.

Right. And if we go back to thinking about maximising human potential, there's this tension too, because what we humans need from the technology may not always align very well with what a company needs from the technology, right? And of course it's companies that are mostly building this technology. How do you think about those alignments of interest? How do we move forward with

that?

I think that's a very complicated question right there. In organisations there are different stakeholders with different interests: someone wants to improve efficiency, other people just want to avoid mistakes, right? So I think it's really important to take each other's perspectives. When you try to deploy AI systems top-down, you need to involve the people who are actually impacted or influenced by those products, in what we call participatory design: when you develop these kinds of frameworks, you involve their voices and their real-life experience in the process, so you build the infrastructure on mutual understanding and consensus.

Yeah, I think about that a lot in terms of where we're at today. How do we build a world that doesn't put the primary focus on, say, AI that prioritises profit or engagement at all costs over the well-being of, say, our teenagers? And I don't really know. I'm sure these are things that you two think about a lot, though.

Do you ever um I I do think it's one of the few few 

24:33 

areas I'd concede that we do want some uh degree of regulation is um specifically 

24:40 

what is AI like as a pedagogical partner or like you know whatever phrase you might use for children and teenagers 

24:46 

when they're kind of at at their most um let's just say vulnerable psychologically or most they're evolving 

24:52 

and like adapting most quickly. And I think that's the one area where like I I 

24:58 

worry for example like you know parents that don't talk to their kids kids are going to talk to AI about it instead right so like you do 

25:04 

you do have this challenge where like you know should there be like you know some sort of uh guardrails around who 

25:11 

speaks to AI about what topics and when and should ultimately we release different models for like different stages of development for if if the AI 

25:18 

knows it's talking to a 5-year-old 10year-old 15y old it should and would as we all would as humans speak 

25:23 

differently, you know, like um I'm actually risking and I presume there are no children here, but like let's say if 

25:29 

if a child asked something about Santa Claus and the child was 5 years old, what should the AI reply? We actually 

25:34 

don't know and I'm not going to tell you the answer. Um but you know, like all of these things are like kind of like, you know, we're like we're speeding into 

25:40 

these like that's obviously the the glib version. There are much darker versions. Uh but like we're speeding into all of this kind of like saying AI is here, AI 

25:47 

is the future, let's go. But I do think there are some deeper questions we have to ask ourselves. And

25:53 

we made this mistake already with algorithms and with the TikTok- and Instagram-ification of society and all

25:58 

that. I just think we need to be careful not to repeat this at a far deeper level. So I think it is

26:05 

definitely one worth watching and yes it will be very profitable for a business to be like every child's best friend or 

26:11 

something like that. And we don't necessarily want that unless it's a good business at its core. Yeah. And I'll add, if I put a

26:17 

corporate kind of take on this. Um it's there's some real challenges by the way. 

26:22 

If I'm being really candid with this group — so Chatham House rules apply with all 500 of you here —

26:30 

but but when you look at you know think of AI as kind of a Pandora's box. It's open now, right? And then you've got the 

26:36 

geopolitical issues right now as well: every territory is trying to outdo the other. Every

26:42 

territory wants more than their fair share of GDP growth on the back of this. They're trying to own the infrastructure. They want to be the 

26:47 

first to grow their kind of application layers. So they want that wealth to reside. So and everyone's going to 

26:52 

regulate this in a kind of a different way now. So you do have this kind of vacuum at the moment where I think um 

26:57 

local governments are concerned about putting any kind of barrier in whatsoever. And partly

27:05 

this is a kind of fear right now that we're going to slow down growth, we're going to slow down our future relevance, and all the rest of it.

27:11

So part of this vacuum, I think — a lot of the conversations I have are with CEOs when we're working on the

27:16 

transformation — really interesting conversations, and they're all really mindful about this. You will also get some who just think this

27:23

is a cost-takeout play — we've got to get margins up, and this is what my institutional shareholders want. On the

27:28 

whole, though, the conversations show people are genuinely concerned about what this means in terms of

27:33 

displacement um what's our role in kind of building up kind of future skills and graduates 

27:40 

and sort of pulling them through — so it's on the agenda and people are working on it. But then the challenge

27:46

comes when you're looking at your models: how do we do this in a respectful way, in an

27:54

inclusive way — which we have to do, and is the right thing to do. Equally, when I then look at — I do a lot of work

28:00 

with the VCs in the west coast they are going through sector by sector and they're looking to see where is the 

28:06 

biggest addressable market where can we put a huge amount of money behind a scale up to disrupt entire sectors and 

28:13 

that's the world we're living in. So there's this tension right now, and we're doing it to

28:18 

ourselves. Um we're working with clients where you're almost having to red team yourself. So as much as you've got your 

28:24 

BAU business, saying, right, how do I take a bit more of an incremental approach and move that with AI in a safe and

28:30 

responsible way. At the same time there's a kind of a dual track to say well what is your future business model 

28:35 

going to look like in two years, or maybe quicker than that? And quickly, the questions I would ask myself are: firstly,

28:43 

with my current competitor base within a sector, if one of my competitors did X, Y, Z in the market tomorrow, what

28:49 

does that do to my business? The second question is more about your defensive moat: if I look at my business right now and I

28:55 

had a blank sheet of paper and someone came in and redesigned it completely as an AI first company. What would that do 

29:00 

to my business? And the third thing we look at is what will clients of your business products or services what will 

29:06 

they be able to do themselves that disrupts it? So there is a real tension there between making sure we do

29:12 

this in a mindful way and a responsible way but also the speed of change that's coming as well. 

29:17 

There's a tension everywhere with this. I mean going back to your thing with the children I still remember very clearly 

29:22 

when I came home from work one day and Alexa had learned to tell the difference between my children and me which was a good thing given the 

29:28 

questions that were being asked which I could go into my phone and see. And also oh wow my Alexa knows that's my you know 

29:34 

elementary-age kids. So what happens to that data? How is it now treated? You know, all the little hamster wheels

29:40 

keep going. I feel like we can go two ways: we can go back and really talk about this future of work, or we could also

29:46 

talk about governance here, because there's a huge role for that as well. And I'm wondering if there are any

29:52 

particular sectors or groups that you're seeing that are you just see as leading on this or kind of paving the way for 

29:58 

the rest of us to follow. Yeah. So look — governance, certainly in my world, talking to the larger corporates, is just becoming one of

30:03

those things that is top of the agenda

30:09 

right now — and at different layers, by the way. There is the: look, we've got to just make sure we just

30:14 

don't get this wrong — and the base layer might be regulatory compliance, but that's not really where they're focused; that's

30:21

relatively set at the moment. It's the: we are about to change our entire model. We want to deploy

30:28 

something that is going to fundamentally change the way we interact with our customers. So core to our business. 

30:33 

Mhm. Um what are all the things that could go wrong with that? And some of that's just design principles about do we end up 

30:39 

building something where every time we run through a transaction or complete a workflow, it's

30:45 

costing us £20, and then we have to multiply that by 200,000 every day, or we bankrupt

30:50

the company. So just good basic design and governance: can we scale in a robust way? But then there's also, when

30:55 

you then start getting into the validation of the models and how are you training it? What are the guardrails you put around them? Question abstraction 

31:02 

around your sort of agents and all the rest of it to say um are we giving the right answer or is is are the agents 

31:08 

diverging from their original intended outcome? So there are so many things to just

31:14 

get right from a governance layer to make sure that you can actually then roll this out. Now the final layer which 

31:19 

is extremely important which you have to do in parallel is to say what are all the unintended consequences to my 

31:24 

employees, and to wider society? I might be solving this and everyone's got tunnel vision

31:31 

about this, but there's a broader thing. So honestly, I'd say governance right now

31:36

in the C-suite, and the conversations we are having — that is top of mind. There's clearly, I think, in

31:43 

parallel to this, a fear of becoming obsolete and irrelevant, and being disrupted. So they're kind

31:50 

of like hand in hand, but doing that in a way that doesn't cause irreparable damage. The two go hand in hand. 
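One concrete shape the guardrail layer described above can take is a gate that routes risky agent actions to human review instead of letting them auto-execute. Below is a minimal sketch; the risk scoring, threshold, and action names are illustrative assumptions of ours, not anything from the panel:

```python
# Sketch of a human-in-the-loop guardrail: agent-proposed actions below a
# risk threshold run automatically; anything above it is queued for review.
from dataclasses import dataclass, field

@dataclass
class ReviewGate:
    risk_threshold: float = 0.5
    review_queue: list = field(default_factory=list)

    def submit(self, action: str, risk: float) -> str:
        """Route an action based on its assessed risk score (0.0-1.0)."""
        if risk >= self.risk_threshold:
            self.review_queue.append((action, risk))
            return "pending_human_review"
        return "auto_approved"

gate = ReviewGate()
low = gate.submit("send order confirmation", risk=0.1)   # auto_approved
high = gate.submit("issue full refund", risk=0.9)        # pending_human_review
```

The same shape extends to the divergence concern raised here: an outcome checker could feed the risk score, so agents drifting from their original objective get pulled back to a person.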

31:56 

I mean, which sounds wonderful, but how? Yeah. Yeah. Yeah. 

32:02 

So, you had thoughts the other day when we were talking, about job displacement and how various roles will merge, or

32:09 

like where are we going and and what kind of advice would you give to others trying to put their toes in? 

32:14 

So I think like because of where we work, we're kind of at the forefront of like of this changing nature of 

32:20 

employment, job displacement, etc., which is never great for like press or whatever. But broad strokes, what's 

32:25 

happening is as people adopt agentic customer service, it's not the case that they go and have this big epic layoff 

32:32 

where they fire a lot of people, but it is the case that they stop hiring in customer service. And customer service 

32:37 

is in loads of cases an entry-level position. So as a result, the team can kind of right size pretty quickly 

32:43 

because people just, you know, people leave that position pretty quickly because they only want to work in it for a year or so. But what we're seeing 

32:48 

happen in the orgs that are like post AI is that there's been this type of like 

32:54 

generalist renaissance where what we have is people who are now really good at AI and really good at applying AI to 

33:00 

problem domains. And we're seeing that like people who have adopted our product will then be able like all right I'm 

33:06 

going to go and try to apply this to marketing, or to customer success, or sales — and they're applying what they've learned in more

33:12 

areas. So I do think what we're seeing is like people realising that like you know AI is the skill you need to learn 

33:19 

and your function might just give you the opportunity to learn it. It kind of reminds me — you might have seen the movie Armageddon. This might sound weird,

33:26 

but give me a second. So, there's a movie called Armageddon where Bruce Willis and Ben Affleck are in it. And 

33:31 

one of the jokes on the DVD director's cut is that Ben Affleck says, "Hey, in this movie, we're training a bunch of 

33:37 

oil drillers to become astronauts." And I'm just wondering, wouldn't it have been easier to train astronauts to become oil drillers? And that

33:44 

kind of invalidates the entire sort of idea of the movie. However, the same thing's happening with AI, right? I 

33:50 

think we need to learn AI. And I think what we'll find is when we learn how to use AI in various different business 

33:56 

contexts that'll be an immensely valuable business skill that you can apply and actually that bit might have 

34:02 

more in it than handling like sales prospecting or lead qualification or or 

34:08 

whatever sub area you might try and actually apply the AI. So I think one of the things that's that we see in these 

34:14 

teams that are left behind is you got like a lot of like I would call like principal level extremely skilled support people because all that's left 

34:21 

is hard work. So that's one way in which it changes. But then everyone is now AI-native and they're getting poached into other parts of the

34:27 

organisation, or other organisations, because everyone's realising we need people who are good at AI. So I think there's an immense

34:33 

opportunity in being one of those people, and frankly that's a better career strategy than trying to

34:39 

resist it. Yeah. Bang, you look like you want to jump in. Yeah, I I feel like also maybe related 

34:45 

to the cases you were talking about. I think like when people approach AI, there are two different mindsets, 

34:50 

right? One is automation, the other is augmentation, right? So when I think about if I use AI, I would eventually be 

34:58 

replaced by it because it trains on my data, right? I would probably have a different way of using it as compared to 

35:05 

I know this tool is helping me to improve my productivity. And

35:10 

also, I think there are some reports saying that, in terms of job replacement, AI is mostly replacing junior positions.

35:17 

So how can you expect these people to take up these tools when they know they will be replaced by using them? I

35:23 

think there are some dilemmas in between, from a psychological perspective, I would say.

35:29 

I mean I I think the the junior position thing that comes up a bunch but I'll just sort of say the spectrum of like AI 

35:35 

is like augment your existing people doing their current job which is like what a like co-pilot type technology 

35:41 

would do, right? It's like it's the same person but they're just hopefully faster because there's AI beside them. Then 

35:46 

there's like replace some of the work entirely and then there's ultimately replace all of the work entirely. And that's so you kind of have like are you 

35:52 

speeding them up? Are you reducing the number of them? Or are you replacing the organisation entirely? When it comes to that, I worry less about

36:00 

a very common question we get, which is: will anyone ever hire for an entry-level position ever again, right? If

36:07 

it can all be um like automated through AI. I I I think it's possible the answer 

36:12 

to that question is no, but I don't know if that's necessarily the right question. I mean, ultimately, if we

36:17 

take say engineers right AI is doing all of the easy low-level engineering and it's not doing any of the hard stuff now 

36:24 

so we should be like, good, we'll never hire an engineer again. But we still need more senior engineers — so where do they come from? And that is either a challenge

36:31 

that we have to solve and in which case we're just going to go and have to hire junior ones and train them or it's a 

36:37 

challenge that like educational institutions will have to solve which is now that we have AI is it possible in a 

36:43 

four-year degree to produce a senior engineer uh because you know in theory you know a lot of the boundaries around 

36:49 

tuition or one-to-one coaching have been removed because of AI. But I think, ultimately, the third-

36:55

level institutions — college, education — are one side of a two-sided market, and the employers are

37:00 

the other. If the employers don't want the graduates anymore, something has to give: either the employers go and hire and train,

37:07 

or these institutions work out how to produce people who are more akin to what the industries need. But I think that shake-

37:13

out has to happen. I don't think the conclusion will be universities spitting out people that no

37:18 

one wants to hire and at the same time companies having to take their existing ageing employee base and retire or 

37:24 

whatever — something will have to break, right? Yeah. Look, and we're already seeing this right now. We're already doing a huge amount of work mapping out

37:30 

— this is not about whether we take people on or not; that's not the question. We will absolutely be recruiting. We

37:37 

have to, otherwise the entire machine falls apart. But then, what we're already

37:43 

working on is the what are the skills and what are we expecting a first year associate to do. So as we're working 

37:48 

through that what we're already seeing is that um um and I I've seen this through my career what I had to do as an 

37:55 

associate, there was almost a rite of passage where you did a lot of photocopying and invoicing and all the rest as you moved up. Now that's just

38:00 

disappeared with where technology has now got to. I think the expectation, when you look across industries, is that the

38:07 

associate will probably be fast forwarded a couple of years through that process. So they'll be doing higher value work. So the kind of the tools the 

38:14 

knowledge um and the kind of platforms they have sitting behind them will enable them to produce more. But then 

38:20 

this has this kind of effect across the board. Now at the same time you kind of look at you know entire business models 

38:26 

changing as well. So that'll also start dictating the shape of our entire workforce anyway. This

38:32

idea that we'll carry on with this kind of lovely pyramid — that's not what will exist; it will fundamentally

38:37

change. I mean, we look at it — you know, PwC is a giant partnership, and

38:43

we've had this kind of pyramid shape. We look at that and we can see that's got to change,

38:49 

and what is going to be expected at those grades and do we even need all those grades and you know are some of 

38:55 

them going to be consolidated? All those things have got to be worked through. At the moment, everyone's great at pushing the AI

39:00 

responsibilities downwards like as a the manager is like oh you need to use AI cause I expect you to be extremely 

39:06 

productive. Me however I'm actually grand the way I am like it's a very common management reflex behaviour. 

39:12 

Yeah. Well so everybody on this stage and we've only got about 5 minutes before we switch over to your questions. 

39:17 

So I hope everybody's starting to get those questions ready. Everybody on this stage thinks about this stuff

39:23

all day, all week, for years. There are probably a few folks out there who are trying to get started, whether it's

39:30 

frameworks for agentic workflows, whether it's governance and so I think we should use our last few minutes to provide a little bit of advice from the 

39:36 

experience that you all possess here. Why don't we start with because we were just talking about governance. Someone's 

39:42 

trying to dive into this. What are some best practices? What are things you're seeing out there that look pretty good 

39:48 

and would be nice to know if you were just starting? Can you see anybody? 

39:55 

Yeah, I think, if I remember correctly — because AI

40:01

companionship is now becoming a more and more dangerous area, right? — there is some new legislation in

40:07 

California. They try to separate chats for teenagers versus

40:12

adults, and teenagers will receive a reminder

40:17

that you're talking to an agent, not an actual human, to remind

40:23

them that they should go out and build human relationships. So I think this is probably a good move and we will see how

40:30 

it works out. Thinking of governance within organisations as well. But yes, there's a lot of things moving and shaking right 

40:36 

now around many different parts of this. Yeah, I guess the advice is a couple of things. One is the no-

40:43

regrets bit. I have this conversation with CEOs all the time, which is: look, there are lots of conversations going

40:48 

around about which use cases where do I focus where do I start? Should it be front office? Should it be back office? 

40:53 

Should it be margin related? should it be topline new marketers and and that's fine and there's kind of a structure to 

40:59 

how you can have those conversations to see where you double down. Two things I'd say that kind of no regrets piece is the getting the infrastructure in place 

41:06 

and the governance layers in place to allow you to then accelerate. So, and when I sort of say that I see governance 

41:11 

as an accelerator, not as something that slows it down. I'm not talking about risk management — let's recruit lots more risk people to say no. I'm talking

41:18 

about can you build an infrastructure that is in place and a governance that allows you to then deploy at scale and 

41:24 

production. Right? So that's one piece of it. The second bit, though, I would say, is: organisations,

41:29 

don't sit this out, because there are a lot of conversations going on right now which

41:35 

is I'll wait till somebody else has gone through this and got it wrong and I'll just kind of be a fast follower or kind 

41:41 

of coming third or fourth in my market. I think organisations have got to really start building some muscle around this 

41:46 

and evolving their culture and their workforces. So getting out, experimenting, and starting to try to 

41:52 

solve some problems with AI — even if where you're starting is more of a human-augmentation layer first — is super

41:58

important, because you're going to need to mature and evolve as an organisation to get to

42:04 

whatever your north star is going to be, which by the way is going to keep on moving. And let me reference

42:09

a conversation I have — and these conversations are out there, which will shock most people, but you still have conversations with people

42:15 

just saying, I don't want to do that, because if we do this too quickly we're going to cannibalise our market, and

42:21

we don't want to be the ones to do it. We get that somebody else might come in, so let's do all the work in parallel in

42:26

sort of stealth mode, and then when the market changes and someone makes an announcement, we just press a red button and we're already on the new model.

42:33

No — we're all sitting here smiling at it. It's not feasible, and you haven't brought the

42:39 

organisation along with your change. So I think those are the two things I would recommend to any organisation

42:44 

right now. I don't have a lot to add on governance, but I'd love to piggyback that comment and say one

42:50 

of the things we had to do to survive uh over the last few years was kind of rebuild the entire company around AI and 

42:56 

it was brutal. It was painful. It was messy. We spent like $100 million; we put hundreds

43:03 

of millions of revenue at risk, but we did it all kind of knowing that like either we were going to be the company 

43:09 

that killed us, or somebody else was going to kill us. There was no in between. So it

43:14 

was very much a big swing, but it really did require a root-and-branch re-evaluation of every single

43:21 

function like marketing, sales, finance, HR, product. And we'd ask ourselves like 

43:26 

if you were to reimagine your entire business and your entire function today in an AI native world thinking about the 

43:32 

next 10 years, what would it look like? And the distance between where you are now and what you should be, or

43:38 

what it would look like, is literally the project we have to manage ourselves through. And we have maybe a year to do this before all

43:45 

the next wave of competition shows up. And that's what we've been focused on for the last two or three

43:50 

years. And even though things are going quite well right now, I can't even say we're through it yet. It's an incredible time. If you're at

43:56 

risk from AI and you're trying to slow-roll it, you're probably just writing your own very long, slow death

44:02 

warrant. Yeah. Well, and when you brought up the accelerator as governance, I mean, that's I would say as an observer from 

44:08 

the outside. Again, that's largely what I feel like I've seen like the high-risk industries, no surprise, tend to be pretty good at thinking about 

44:14 

some of this. But IBM was another company where when I first started covering governance, I had a aha moment 

44:21 

speaking with the person heading it, thinking, "Oh, right — what she's doing right now, what they're working on right now, ensures that they can confidently know

44:28 

when that hard ethical question hits them what the answer is because they spent the time and effort to make sure that this aligns with who they believe 

44:34 

they are as a company, what they believe in, what their work is. It's like really they've dug into the hard questions and 

44:40 

codified it, which is interesting for this moment where we keep thinking that this is like the moment of short attention spans and 15-second video 

44:46 

clips. But we've also, in my lifetime, not had a more important moment for critical thinking, hard questions,

44:52 

deep thought, like the real work in some ways of life. And it's like a great time to be an academic, right, to really 

44:59 

study these things. Who in the audience has questions for this amazing panel? Oh, there are a lot of hands —

45:06 

the mic runner can start bringing a mic around.

45:11 

All right. 

45:17 

I want to hear more about shadow AI — not just for employees but even for

45:25 

students, for example: what are they doing, and how can the schools

45:30 

solve this. Anyway, thank you. Anybody? 

45:36 

Yeah, I think in the educational system it's a very complicated situation. There are some schools trying to

45:42

ban students from using AI, because they really want the students to learn things by heart. But there are

45:49

other schools trying to develop evaluation standards for when students are

45:56 

using AI, and trying to teach them how to use AI efficiently.

46:03 

Um yeah, I think like my general hunch is that you can't really just tell 

46:09 

students not to use AI because they're not really incentivised for that, right? they are incentivised to finish their 

46:16 

homework as soon as possible so that they can go out with their friends, right? So I think we need to

46:23 

accept the reality and maybe teach students to find a good way to

46:29

strike a balance between learning and finishing the homework. For

46:35 

example, there is also some research showing that if you first have a

46:40 

rough idea about what you want to write for your thesis, and then ask ChatGPT

46:46

to think along with you, the output will be much better — compared to

46:52

going directly to ChatGPT and asking for an answer — in terms of both learning and the

46:57 

diversity of the student output. So I think there could be a lot of work

47:03 

in the future showing how students can really get the best from these

47:11 

kinds of tools without just using them secretly. Yes. Thank you. Next.

47:19 

Hi. Um, I come from a company that's providing multi-agent systems to large scale enterprise businesses. And I'd 

47:26 

love to get the panel's view on how we should be looking at agent evaluation in a scalable way with kind of human 

47:33 

benefit in mind. Sure, I can start. Um, I think the most 

47:40 

important thing to do is to have a really deep evaluation framework for like success and quality of the agent. 

47:46 

Ultimately, all agents that you're going to buy have two success factors: one is how much work does it

47:52 

do, and the second is to what quality does it do that work. And I think

47:57 

um what you'll see is like uh if you give a very simplistic evaluation like 

48:02 

if you ask — say you're trying to buy a customer service chatbot and you've got three

48:08 

candidates and you ask them each how does a user reset their password? Well, guess what? They're all going to be able to answer that question. So there'll be 

48:14 

no meaningful difference between the three. It's kind of like giving me an Einstein mathematics exam that's

48:20 

just based on addition or something like that. Of course, that doesn't mean I'm the same as Einstein. So the hard part is to

48:25 

work out what mastery is in your domain and make sure that is reflected in your evaluation. That's

48:31 

the first thing: make sure that you're asking it hard questions. Make sure that you're tempting it to hallucinate, tempting it to break out of

48:37 

its guardrails. Uh so that you can see if it is actually obeying or if it's just being overly aggressive. And then 

48:44 

your comparison: one of the flaws I see a lot with AI is people compare AI with the best possible outcome as

48:51 

opposed to the average human. You see this a lot with, say, driverless cars, where people say a Waymo killed somebody

48:57 

in Texas, and they ignore the fact that 67,000 people died on roads in unrelated incidents. Your target

49:04 

should be is this better than humans in their current behaviour not is it better than the best human who's ever lived. 

49:10 

And so it's important to understand what you're actually replacing and then have your own like if you want to shoot for 

49:15 

superhuman, fine, do it — but do it knowing what you're doing. And so the main thing about

49:21 

picking agents is running a really good bake-off: real-world scenarios, real hard evaluations, and then real fair

49:28 

expectations of what you're trying to achieve. And then lastly, depending on the use case, it should be pretty easy to tie it to some ROI

49:35 

because you're either displacing some labour or upleveling some labour. Like things are either happening more or better or cheaper or faster. But usually 

49:42 

one of those things matters to a business. So it should be easy enough to connect the dots between that and an actual ROI. That's how I'd go about it. 
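The bake-off described here — hard, domain-specific questions, graded per answer — can be sketched in a few lines. This is a toy illustration under assumptions of ours (a boolean pass/fail grade per question and a stock-model baseline), not Intercom's actual evaluation harness; it also folds in the baseline-subtraction idea that comes up again later in the discussion:

```python
# Sketch of an agent bake-off: score candidates only on questions a
# baseline (stock) model fails, so easy wins don't mask real differences.
# Candidate names and pass/fail grades are hypothetical stand-ins for
# whatever judge or rubric you actually use.

def bakeoff(answers: dict[str, list[bool]], baseline: list[bool]) -> dict[str, float]:
    # Keep only the questions the baseline got wrong (the "hard" subset).
    hard = [i for i, ok in enumerate(baseline) if not ok]
    if not hard:
        return {name: 0.0 for name in answers}
    return {
        name: sum(grades[i] for i in hard) / len(hard)
        for name, grades in answers.items()
    }

# Toy example: 5 questions, baseline passes the first 3.
baseline = [True, True, True, False, False]
answers = {
    "vendor_a": [True, True, True, True, True],   # nails the hard ones
    "vendor_b": [True, True, True, False, False], # no better than stock
}
scores = bakeoff(answers, baseline)
# vendor_a scores 1.0 and vendor_b 0.0 on the hard subset, even though
# their raw scores (5/5 vs 3/5) look much closer.
```

Scoring only the hard subset is one way to make the "fair expectations" point concrete: it separates genuine domain mastery from what any stock model answers for free.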

49:49 

And I guess what I would add as well: clearly, we've got a history of working with a lot of

49:55 

scale-ups right now where we've kind of started working with them early and sort of been through the journey. Um, as a 

50:01 

sort of in your field, I guess there's a couple of things I would look at. One, you're going to have a lot of thin-wrapper companies coming out saying,

50:06 

"Actually, we've got agents that can do the same thing" — and essentially, it's the LLM doing most of the heavy lifting.

50:12 

So part of your role should be taking your clients, your customers, through that evaluation to show

50:18 

them the difference: what you've got over here is kind of charlatans, and what you've developed has got some real domain

50:24 

knowledge and some real substance to what you're doing. So I think that is going to be super important. And I do think that

50:30 

evaluation point — to yours — and what that outcome is matters, because we tend to find that's

50:36

where those use cases really shine. An LLM out of the box can do a huge amount, and they're credible and

50:41 

they sound good. It's when you get to the really nuanced pieces — a lot of the time, when you're

50:47

selling agents for a certain domain area, it's getting into those super complex areas where you might be

50:53 

saying, well — I'll make this concrete: one of the things that we're looking at is, we've built foundational tax models, for instance, and

50:59 

then the agentic framework around it. We've been working with the likes of Harvey — everyone in the market;

51:04 

competitors will come in saying they could do exactly the same — really hard to distinguish yourself. And then you'll have a lot of noise in the market that

51:10 

says, right, actually GPT-5 with reasoning can now do all of this as well. So being really clear on

51:17 

what's different about your agents, what's the data you're granting them, what's your IP, have you trained it and 

51:22 

then showing that when you ask this type of question — actually do an

51:27

eval, and be really open about it; that's what really matters. And just on that: if you're doing any sort of marking or scoring, the way

51:34 

you can address that concern is to take all the easy stuff entirely out of

51:40

the evaluation material, because what we see a lot is people saying: here's 100 questions, you guys got 94, these guys

51:46

got 63, and therefore we only see a 30-point difference. And I'm like,

51:52

another way of saying that is that ChatGPT gets 60 out of the box. So let's just knock the first 60 out, and now

51:58

it looks like we scored 40 out of 40 and they scored zero. Now it's not even a comparison. But I think you have to

52:04

really educate the buyer: make them a conscientious buyer who realises what's actually hard to do and what's

52:10 

easy to shell out to ChatGPT. Yes. Sorry, can I also add a point? Maybe

52:16 

I want to also hear your opinion. You were talking about business models, but I wonder whether it's also an ethical question,

52:23

especially with multi-agent systems: who should be held accountable for the decisions or outcomes if the system makes

52:29

mistakes? Because it's a complex system, right? Eventually, who should be accountable if it

52:37

makes mistakes? And should we maybe have human oversight, or a human in the

52:42

loop, even when it's just agents interacting with each other? Because there's also some research showing

52:48

that agents prefer talking to agents, compared to humans. So maybe

52:53

there could be a dystopian scenario where humans are marginalised

52:59

by these agents' decisions. I don't even know if that's dystopian. I think that'll

53:05 

probably happen in some ways. As for who should be held accountable, it will

53:12

factually depend on the contract you sign. If you hire PwC to build something and they say it works a

53:18

certain way, then obviously there are implications to that. If you buy a product off us and we say this is what it does, and it doesn't

53:24

do that, there are implications. But in practice, in a lot of these cases, people will make last-mile tweaks and changes;

53:30

you can tell our bot to speak a certain way, and you can tell it to speak in ways that maybe it shouldn't have spoken.

53:36

And so I think, ultimately, from a business perspective, you're going to procure tooling and

53:43

deploy it, the same way you buy anything else and deploy it, and if

53:48

there's a mistake, it's going to be on the business. That's just the reality of it. But I do genuinely believe

53:54 

that in multi-agent systems, if you've got agents from different companies, all of these

54:00

things are going to be quite strategically aggressive towards each other. So I don't even know if they're going to get along with each other, but

54:06

they're definitely not going to let the human in. So I think you're going to want either

54:13

harsh boundaries, in terms of where you do and don't deploy each of them, or extremely

54:18

strict rules of engagement. But I think it's likely that if you have a sales, a marketing, and a support agent, the three

54:23

of them are all going to be trying to do their jobs, and it's going to be a nightmare for the customer when all these pop-ups come up at the same time, or whatever. I can see lots of that

54:30

happening in the future. Yeah. Well, you can also see a bit of this already, because of what we have now to work with,

54:35 

right; think customer service. We'll see the courts begin, in some cases, to hold the companies accountable. Think Air

54:40

Canada, when their chatbot told somebody they could get a refund on an airfare to go to a funeral. And then Air

54:47

Canada replied, "Oh, well, of course not." And the judge ultimately said, "Well, if your customer service is your

54:53

customer service, you have to stand behind it. You can't offer people this option and then not accept what it offers," kind of

54:59

thing. So some of this will also work its way through the legal system, which, by the way, happens with normal humans too; we have precedent

55:07

for this. Uh, is this mic working? Can you hear me? Cool. I think this

55:13 

question is for Dez. Dez, I listened to your podcast on Cheeky Pint, which was really good; thanks for doing that.

55:20 

um as as a lot of companies think about sort of cost and price right obviously these large language models are quite 

55:25 

pricey it sounds like when I hear what you're doing with Finn you're building your own models, you've built your own 

55:31 

AI team. Two questions for you. One, how do you think about cost, right? And for 

55:36 

people who are building agents, how do we build lower cost agents? Because as we have more, as they run for a longer 

55:42 

period of time, the cost bill goes up, right? So that's number one. Number two is how do you think about pricing sort 

55:49 

of the charge model, right? And you talked about this with John a little bit about um you know outcome-based pricing. 

55:56 

Um, can you comment on those two things? Thank you. Certainly. So, basically, the background here is that this

56:02

is the first time in our lives that software has cost money to run, right? When we're building features, we have to go: huh, can we afford to run

56:08

this? And even in the time we've been working on customer service, there are features

56:14

that we thought would be brilliant that we just couldn't afford to build. Like: let's analyse all of your conversations in all history, ever. That's a really

56:20

expensive thing to do. And then the token cost comes down, and all of a sudden, at some point, these things become affordable. But we have to pay a

56:26

lot of attention to the cost, because most software startups run at like a 70 or 80% margin, and a

56:34

lot of this stuff doesn't, to put it bluntly. So we care a lot about cost, and it's very related to your

56:39 

second question, so I'll address them together. We charge when the AI agent works, when it does the job that you

56:45

hired it to do, which in our world is resolving the support conversation. That's the only time we get paid. Now, we burn

56:52

a lot of tokens on things that didn't work. We burn a lot of tokens on things that do work. We spend a lot of money,

56:58

and we still have to preserve our margin. So we wanted to align our pricing with what our customers think they're buying, which is an

57:04

agent that solves customer support queries. So that's why we charge 99 cents per answer. The margin is somewhere

57:11

between, I think it floats between, like 60 and 75%, depending on a lot of different variables: how hard

57:17

we're working, the types of calls, phone call versus text, you name it, right? I think in general the wrong attitude

57:24 

here is to say don't worry about pricing, the models will get really cheap, right? That is true, but every 

57:30 

time these things get cheaper, people do a lot more with them. There's a paradox, Jevons paradox I think, which is that

57:37

as things get cheaper, you tend to buy more of them, right? And that means that, in our world,

57:43

token costs come down, so all of a sudden we can do more stuff. So we do more stuff. So then our margin doesn't get any better at all, and our CFO is pulling his

57:49

hair out, going: when is the big break going to happen? But if I was to advise anyone, what I'd

57:56 

say is: look, you have to have good telemetry on your costs, and you have to make sure you have a path to profitability. You don't need to start

58:02

profitable, but you need to have a path there. And it can't just be hoping the model guys figure it out, because

58:07

it could be that you're just quite inefficient. And then, separately, if you can align your

58:13

pricing with exactly what your customers are paying you for, that's great. Not everyone can do that. If you're selling artwork, there's no right

58:19

answer to "show me a photo of a cat dancing" or whatever; there's no perfect answer to that. But in customer service there

58:25

usually is a good approximation. That's why we were able to do it. Okay.
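Dez's outcome-based model can be put as back-of-envelope arithmetic. Only the 99-cent price per resolution comes from the discussion; the resolution rate and per-conversation LLM cost below are illustrative assumptions, not Intercom's real figures.

```python
# Sketch of outcome-based pricing economics. Only the $0.99 price per
# resolution comes from the discussion; the resolution rate and the LLM
# cost per conversation are illustrative assumptions.

PRICE_PER_RESOLUTION = 0.99  # charged only when the agent resolves the query

def gross_margin(resolution_rate: float, llm_cost_per_conversation: float) -> float:
    """Margin when tokens are burned on every conversation, resolved or not,
    but revenue only arrives for the resolved ones."""
    revenue_per_conversation = PRICE_PER_RESOLUTION * resolution_rate
    return (revenue_per_conversation - llm_cost_per_conversation) / revenue_per_conversation

# The cost of failed conversations is amortised over the resolved ones,
# which is why the margin is so sensitive to the resolution rate.
print(f"{gross_margin(0.5, 0.15):.0%}")   # at these assumed numbers, roughly 70%
print(f"{gross_margin(0.3, 0.15):.0%}")   # same cost, lower resolution rate: much thinner
```

The Jevons-paradox point is the caveat: as token prices fall, the cost term rarely falls as fast as hoped, because cheaper tokens get spent on doing more.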

58:30 

And next question. Hey, the majority of the headlines in the

58:36

newspapers are about how many people will lose their jobs to AI, and especially agents.

58:45

At the same time, 99% of the enterprises

58:50

in the EU are small and medium enterprises, and two-thirds of people are employed by them. Do you see the same

58:58

in the small and medium enterprise sector, people losing their jobs because of AI or agentic AI? Any

59:05

thoughts on this? Sure, I can give my view on that. Look, I think it's

59:11

going to be highly dependent on what the SME does and just how easily repeatable the work is,

59:17

because if you talk to small business owners, the biggest cost right now is labour. So if they can

59:22 

save money and get kind of get to that productivity piece, um they're probably going to take those decisions. If I'm 

59:27 

being honest, I'd love to kind of give a rosier um answer on that. However, I guess the flip side to that is I think 

59:33 

one of the nice things about AI is that it does potentially level the playing field in certain areas as well. So if used correctly, it allows them

59:40

to scale in a way that would otherwise be quite difficult. But look,

59:46

it's an important debate to be had, because I know everyone wants to shy away from job

59:51 

displacement, and it's easy to just say: look, it's like every other wave; you

59:57

know, when the wave came in, there was a huge raft of new jobs and skills that came in, and there'll be some element of that. But if you really

1:00:05

think about whether there are certain industries and certain types of roles that are going to be affected, I think there are. Right now, the

1:00:11

bit I'd say, and this is the kind of conversation I'd have with any employer (and it's harder for SMEs,

1:00:17

by the way, because cash flow is a much, much bigger issue than for a large corporate), is there's an element of

1:00:23 

playing some of this out and saying: right, before you take a very binary measure, which is, right

1:00:29

now my business does X, and if I now adopt these agents I can lose,

1:00:35

you know, 20 or 30% of my head count, just think: that's not value accretive, right? That's just

1:00:40

carrying on doing exactly what I'm doing today, but with less of a cost base. The problem

1:00:46

you have with that, short term, by the way, is that everybody else in your industry will do exactly the same thing.

1:00:51

Then you're going to be back to square one again, but with fewer people. And so one of the things everyone has to

1:00:56

be really thoughtful about, whether you're an SME or a large corporate (appreciating it's much, much harder for SMEs

1:01:02

because of all the pressure they're under around their cost base), is: what can I do with these agents to be

1:01:09 

more value accretive? Can I increase my market base? Can I go after a brand-new market? Can I offer an

1:01:16

additional part to this service that allows me to charge more money? People are going to have to be quite inventive, quite entrepreneurial

1:01:23

around this, rather than going straight to the lowest common denominator, because, by the way, everybody

1:01:28

will do the same thing and all those advantages are pretty short-lived. So, more questions.

1:01:37 

Yes, thank you very much, and thank you so much for your insights so far. My question is: you've touched on a few of

1:01:44

the mistakes that companies have made in rolling out agentic

1:01:50

frameworks in the first few phases. From your experience, what are the most

1:01:55

common mistakes, or maybe visions,

1:02:01

ideas, or culture traits, to look out for?

1:02:06 

Great question. I would say the biggest

1:02:12

challenge you'll have is usually cultural inertia: people in the company don't want to move to an AI world,

1:02:18

not for any real reason other than that people don't like change. In general, most

1:02:23

people aren't going to wake up and randomly decide to reinvent their business. So you really have to be forceful about saying: we need to

1:02:30

transition into being an AI company; we need to use AI in all the ways that we can, otherwise, to Beck's point, your

1:02:35

competitors will, and then you're just going to be in a weak position in the market. So I think the first thing is to push through a lot of

1:02:42

the resistance, a lot of the whataboutery, a lot of the "here's reasons why we shouldn't": just make a

1:02:48

strong decision and push it through, because you don't have a lot of time to debate. And then the second

1:02:54

thing, I think, is to be suitably ambitious about what you can do. And I say suitably because a lot of the pilots do fail

1:03:01

because the ambition was over the top, kind of "we're going to be a billion-dollar company with only one employee" or

1:03:07

something crazy like that. That might happen, but you're certainly not going to get there through a reorg. I

1:03:12 

think suitably ambitious, to me, means identifying the highest point of leverage in your business, whether

1:03:18

that's growth and expansion opportunities, offering new products or services that you couldn't before AI, or a cost takeout,

1:03:26

just getting more efficient: identifying those things and then very firmly and deliberately

1:03:31

going after them. Have the right measurement in place, do your proper bake-off or

1:03:37

RFP, however you're going to evaluate your winning agents or your winning AI providers, and then rigorously and

1:03:44

methodically, with good discipline, move through the gears and see what type of leverage you can get. I think what usually happens is people don't do that:

1:03:51

they get a little bit loose, they buy five different things, and they stop each

1:03:57

of them short of its full potential and wonder why it all didn't work. I think this stuff does work, but you're better off taking

1:04:03

one problem at a time, nailing it down, and starting with the biggest opportunity, basically. Yeah, I think

1:04:10 

I'd agree with that. The thing I saw, probably in the earlier days, and less so now as markets have matured a

1:04:15

bit, was almost this bravado among organisations around how many use cases they had.

1:04:21

Every day you'd have someone saying, you know, organisation X has identified 7,000 use

1:04:28

cases, or whatever, and it almost became competitive. And, to your point, what we then

1:04:35

saw happening a lot, this learning from mistakes, is that everybody ran off by function, front

1:04:40

office, back office, just developing lots of POCs in lots of different places,

1:04:45

and actually the fundamentals just weren't there. They didn't have the infrastructure in place to really roll it out at any scale. There was no

1:04:52

real governance around it, and so everyone was just doing different things. So what we were creating (and this is back to the

1:04:58

MIT report) meant it was no surprise that most of the POCs were failing. And that's no bad thing; it was

1:05:05

experimentation, people were learning. But a lot of the time I'd have these conversations where,

1:05:12 

you know, CEOs will just say: right, where do we start? We know we need to do something. You're

1:05:18

asking a CEO to pick between productivity, what you're going to do in the front office, or what you're doing with

1:05:24

your entire workforce. And the issue is that a lot of the larger organisations we're working with have

1:05:29

transformation fatigue. They've been going through system implementation after system implementation for years:

1:05:34

you had cloud, then you had the biz-app layers, and now we've got AI transformation. So

1:05:41

part of that, I think, to your point, is asking: what are the things that are really going to make a difference, that you're really going to

1:05:46

double down on? Do all the things you need to do with the workforce to bring them up to speed, because that's where your innovation is going to

1:05:52

come from. But what are the three or four things you really need to focus on? Some of it might

1:05:57

be purely defensive (you need to do it and remodel because there's a real threat there), but then double down on

1:06:03

those and put that infrastructure behind them. So the big learning there is against trying to do

1:06:10

everything at once, going all in, thousands of use cases and the rest of it. That's where we find people suddenly start

1:06:17

getting fatigued with the technology. It's not that the technology isn't delivering; it just becomes

1:06:23 

ineffective. Yeah. Hello, I would like to continue this topic, because for me it's

1:06:29

very interesting. So, if we speak about a successful rollout, I agree

1:06:36

about the people; but from the technological perspective, what are the three most important things big enterprises

1:06:43

should have in place in order to succeed in productionising AI use cases

1:06:50

and agentic work? So, I'll give you my take on that:

1:06:56 

the no-regrets thing we're finding right now is the infrastructure you need

1:07:01

to have in place to scale. So if you take as a given that the world is moving to AI transformation, that

1:07:08

the world is moving to agentic, then you can argue about where you go first, whether that's back-office functions or

1:07:14

adoption or front office; whatever you do, I think it's getting the infrastructure in place, and the

1:07:20

layers. What I mean by that (and I keep saying

1:07:25

this is not about data centres and GPUs; that's a whole different debate) is an organisation saying: right,

1:07:32 

this is the way we're going to build and roll out agents; this is the way we're going to interact with different biz apps. Because remember, if you fast

1:07:39

forward two or three years, you might be operating hundreds, even thousands, of agents, and then you've got to

1:07:45

start thinking about what's going to happen in the next few years. Are these agents going to need to be treated as individual employees? Do I need to deal

1:07:51

with identity issues? How many tools and data sets am I going to have to pull from? There's all this architecture stuff that you need to get

1:07:58

right. So that, to me, is a no-regret move that you should just do and get right now, because otherwise, I

1:08:05

can tell you, in two years (and, by the way, the consultants will be really happy with this) it's going to be years of

1:08:10

fixing everyone having gone off in lots of different directions with hundreds of different platforms, and then you're replatforming and

1:08:16

centralising. So I think that would be the first thing I would say you absolutely should do. And I

1:08:22 

think the technology has now moved to a point where the infrastructure providers are solving that problem in a way they probably weren't a

1:08:29

while back. The second thing I would say (I'm not sure I've got three points for

1:08:35

you, but I'll do two) is to work out where you're going to double down, and

1:08:40

some of the infrastructure you need to have in place within the organisation. What I'm finding right now, if I look at organisational structures

1:08:47

and different job roles, all the way from the C-suite down, is that a lot of the skills you need don't exist there

1:08:53

for this new world. We talk a lot about how, when we get into organisations, what was being asked

1:08:58

of a CIO in the past is not what's being asked of a CIO right now.

1:09:04

So actually, who do you need to have in place to move into this new

1:09:09

world, version two, version three? The final point I would make is I

1:09:14 

think there's a reality check some organisations need to have. It's one of the things I lie awake at night

1:09:21

thinking about: Blockbuster and Netflix. Knowing everything we know right now, with the

1:09:26

power of hindsight, if we were on that board, what would we have done differently? And the thing I

1:09:31

always say, and you have to recognise this as an organisation, is: what are all the structural

1:09:37

parts of the organisation that are stopping you making this change, and how quickly can you move to the

1:09:43

operating model you need to move to? Because there are things that are simply going to happen. There is going to

1:09:50 

be disruption, whether that's death by a thousand cuts, a scale-up coming in and disrupting your business, or your competition moving faster. So

1:09:56

you're going to need that speed. So one of the discussions, and I guess my advice to you, is that with some organisations, you'd look at

1:10:03

the organisation and just say: look, as much as we're going to try to get the change management right, as much

1:10:08

as we're going to try to get this adoption story right, the organisation is so entrenched in the way it's

1:10:14

currently working that pivoting in a short period of time to this new model, to make sure it's got the

1:10:20

defensive moat or to create this new market, is going to be quite hard. And we're working with

1:10:27

a number of organisations, by the way, where they're doing a dual track: they're almost red-teaming themselves, setting up organisations

1:10:33

that compete with the existing organisation. It's quite forward-thinking, and the view is that as the current model loses value,

1:10:40

the new models then increase in value. It's quite a brave thing to do, but what it allows them to

1:10:46

do is move with some real agility. So it's almost the counter to a well-funded scale-up: what's your

1:10:53

equivalent of that? And it almost gives you a lot of what you need for your future target operating model on a

1:10:59

much more accelerated timeline. Other questions?
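The "agents as individual employees" point above (identity, tool access, data sets) can be sketched as a minimal central registry. Every name and field here is hypothetical; real platforms will have their own identity and permissioning schemes.

```python
from dataclasses import dataclass, field

# Hypothetical governance record for one deployed agent: the metadata you
# would want on file before you are operating hundreds of them.
@dataclass
class AgentRecord:
    name: str
    owner_team: str
    identity: str                         # service identity the agent acts under
    allowed_tools: set = field(default_factory=set)
    data_scopes: set = field(default_factory=set)

class AgentRegistry:
    """Central place to answer: which agent is this, and what may it touch?"""
    def __init__(self) -> None:
        self._agents = {}

    def register(self, record: AgentRecord) -> None:
        self._agents[record.name] = record

    def may_call(self, agent_name: str, tool: str) -> bool:
        # Deny by default: unknown agents and ungranted tools are both refused.
        rec = self._agents.get(agent_name)
        return rec is not None and tool in rec.allowed_tools

registry = AgentRegistry()
registry.register(AgentRecord(
    name="claims-triage",
    owner_team="ops",
    identity="svc-claims-triage",
    allowed_tools={"read_claim", "draft_response"},
    data_scopes={"claims_db:read"},
))
print(registry.may_call("claims-triage", "read_claim"))    # True
print(registry.may_call("claims-triage", "issue_refund"))  # False: never granted
```

The design choice is deny-by-default: an agent that isn't registered, or a tool that was never granted, is refused, which is what makes later centralising and auditing tractable.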

1:11:18 

So, hi. This is a little different from all the other subjects we went into.

1:11:25

Right now there's huge intergenerational and intercultural diversity in the

1:11:32

workplace, with people from all over working on teams together. With all this transformation happening, especially

1:11:38

looking at design, where one has to be mindful, how do we ensure

1:11:43

that the real potential that exists in diversity can still be carried into the

1:11:49

agents? I can take that.

1:11:55 

Yeah, I think... so I'll also talk about research, though it's not directly related

1:12:01

to culture. Think about people coming from certain cultures who are

1:12:06

considered disadvantaged in, for example, a team, right? There are also

1:12:12

papers showing, for example, that in software companies, when they try to

1:12:17

build these agent systems that can help programmers write more efficient code,

1:12:22

they find that women and less experienced workers are less likely to adopt them, because they're afraid of

1:12:30

being criticised or having their competence doubted. So again,

1:12:37

I think there could be a certain mindset here:

1:12:43

if we want to build an inclusive culture for using these AI tools, then maybe we should put forward role

1:12:51

models who come from these disadvantaged groups, to show

1:12:56

that the organisation is actually encouraging the adoption of these tools and incentivising people for

1:13:04

positive adoption. Yeah, I think that's one case I can bring up. Yeah. And can I give

1:13:12 

you an example? Because I think it's a super interesting question. Within PwC UK we have a really large

1:13:20

neurodiverse community: about 1,200 people who identify with some

1:13:27

form of neurodiversity. One of the things we found: with AI tooling, sometimes

1:13:34

this level of inclusion disappears, because you're building for one demographic.

1:13:39

And what we found is that part of what we're doing with the tooling, as we're prompting or as documents come in, one of

1:13:45

the things we've been working on allows individuals to set a profile: this is where I

1:13:51

am, in whatever factor you're looking at. So it creates a view of

1:13:57

an individual: when I receive information or a document, this is the way I

1:14:04

process that information; and when I respond to it, this is the way I usually write about it, and here are

1:14:09

the nuances I'm missing because of whatever area of neurodiversity I'm

1:14:15 

affected by. And so one of the things, to your point, when we're doing the design, is to say: right,

1:14:20

it's all very well rolling out all these general productivity tools and giving people training on how to adopt and use

1:14:26

them day-to-day; it doesn't solve the problem those people have today. And so we've inserted these layers

1:14:31

(we're experimenting with this right now) where, as information comes in, the platform will essentially

1:14:37

rewrite it: right, this is what is meant, in language that is much more understandable and easy

1:14:44

to consume for the individual. And as they respond, it will convert that: right, if

1:14:49

it's going to someone who is not neurodiverse, how do you need to present it? And I

1:14:56

think using that same approach when we talk about different cultures as well, and the way

1:15:03

people translate something when it comes through to them, is really interesting:

1:15:10

you've got to build those dynamics in. So it's almost looking at the AI as the

1:15:15

Babel fish; it's the leveler, in a way. Can we get to a common language and use AI in that

1:15:20

respect? And we're seeing that even on the customer side of things. This idea that you have an agent that will

1:15:28

fit every customer is just not true, and a lot of the work we've been doing is around

1:15:34

the dialects, the sentiments: depending on who you're talking to, how does the agent behave? All of these

1:15:40

things are being looked at, and it's super important as well. Otherwise you have to ask: who are you actually

1:15:47

solving for, and do you end up with a whole load of bias and a lack of inclusivity in some of the design? So I

1:15:53

think it's a super important area. Thank you.
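The profile layer just described (adapt incoming text to how an individual best consumes information, then convert their reply for the recipient) can be sketched as profile-conditioned prompt construction. The profile fields and prompt wording here are hypothetical, and each prompt would in practice be sent to whatever LLM the platform uses.

```python
from dataclasses import dataclass

# Hypothetical communication profile of the kind described: how an
# individual prefers to consume and produce information.
@dataclass
class CommunicationProfile:
    reading_style: str   # e.g. "short bullet points, no idioms"
    writing_style: str   # e.g. "terse notes that need expanding"

def inbound_prompt(text: str, profile: CommunicationProfile) -> str:
    """Instruction for rewriting incoming material for this reader."""
    return (
        f"Rewrite the following for a reader who prefers {profile.reading_style}. "
        f"Preserve every fact and obligation.\n\n{text}"
    )

def outbound_prompt(text: str, profile: CommunicationProfile) -> str:
    """Instruction for converting the person's reply for a general audience."""
    return (
        f"The author writes in this style: {profile.writing_style}. "
        f"Convert the reply below into standard workplace prose.\n\n{text}"
    )

profile = CommunicationProfile(
    reading_style="short bullet points, no idioms",
    writing_style="terse notes that need expanding",
)
prompt = inbound_prompt("Q3 report attached; flag risks by Friday.", profile)
print(prompt)
```

The point of keeping the profile explicit, rather than baked into one prompt, is that the same record drives both directions: inbound rewriting and outbound conversion.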

1:15:59 

And there was a question here in the front. There's some more around as well 

1:16:08 

Just, thanks. In an agentic framework, what

1:16:17

can possibly go wrong if I give it my credit card and access to all my data? It's

1:16:25

a question for our community from TEDx Jara; we are organising a TED talk about

1:16:32

risks and problems with AI. Thanks. Well, any of us could take that, but

1:16:39

who wants to start? I have a lot to say; I've had one or two experiences myself.

1:16:45 

Yeah. So, from research, what we can say is that there are two aspects that can go wrong. One is on the human

1:16:52

side. When you can delegate your task to an agent, you're more likely to

1:17:00

ignore the grey area, or your own moral responsibility, because you're also

1:17:05

outsourcing the responsibility to the agentic AI. So, from the human side,

1:17:11

people are more likely to engage in unethical behaviour. That's one aspect.

1:17:17

The other aspect: compared to delegating to a human agent to do your

1:17:23

task, to execute your order, agentic AI is more likely than

1:17:30

humans to comply with your order. So when you ask it to do something in

1:17:35

the grey area of the moral zone, it's more likely to comply and do something slightly unethical or even

1:17:43

immoral. Yeah, that's what we found. So, if I put my

1:17:49 

techie hat on, there are some obvious design principles

1:17:54

that get missed sometimes, which are how you're grounding your agent, and in what data.

1:18:00

Have you kept it too open, so that it can veer off into different tasks? It's about being absolutely

1:18:05

crystal clear, when you're designing the agent, about what outcome you're looking for, and then what

1:18:10

guardrails you put in place. If the outcome (and Dez will speak far more eloquently about this than I

1:18:16

will) is getting to a resolution, whether that's an insurance company saying, right, I've got to

1:18:23

get the claim closed as quickly as possible, and that's the measure,

1:18:28

then it's making sure it doesn't diverge from the task, saying, well, I'll just give everyone whatever payment they want

1:18:35

(I'm being overly simplistic here) to hit that final measure. So it's the design methodology

1:18:41

around it, and how you're grounding it, to make sure you're getting to the right outcome in the right way,

1:18:47 

and the models don't start diverging. The other thing I think people miss, because we're in the early stages a

1:18:54 

lot of the time with some of these buildouts, is the data, the maintenance and the run

1:18:59 

challenge. You can build them now and they will work, but then you need to start thinking about

1:19:06 

how you're enriching the data and training the models the longer they're operating, thinking about context

1:19:11 

memory, whether they're self-learning, whether they can learn the wrong things. So, you know,

1:19:17 

not trying to scare the audience here, but there are lots of things that can go wrong, and there's a timing piece as well:

1:19:23 

what are the things you need to have in place, the infrastructure, the data replenishment and the guardrails

1:19:29 

around it, to ensure that the agent doesn't veer off from what it's supposed to do.
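The design principle described here, define the outcome measure, then bound the agent with explicit guardrails so it can't game that measure, can be sketched roughly as follows. This is a purely hypothetical illustration using the panel's insurance-claim example; the names (`ProposedAction`, `within_guardrails`, the payout cap) are invented for the sketch and don't come from any panelist's actual system:

```python
from dataclasses import dataclass

# Hypothetical sketch: an agent optimising for "close the claim fast"
# must still pass explicit guardrails before any action is executed.

@dataclass
class ProposedAction:
    claim_id: str
    payout: float

def within_guardrails(action: ProposedAction, max_payout: float = 5000.0) -> bool:
    """Reject actions that hit the outcome metric by diverging from policy,
    e.g. 'just give everyone whatever payment they want'."""
    return 0 <= action.payout <= max_payout

def execute(action: ProposedAction) -> str:
    if not within_guardrails(action):
        # Out-of-bounds actions are escalated to a human reviewer
        # rather than executed at machine speed.
        return f"escalate:{action.claim_id}"
    return f"settled:{action.claim_id}"

print(execute(ProposedAction("C-1", 1200.0)))   # settled:C-1
print(execute(ProposedAction("C-2", 99999.0)))  # escalate:C-2
```

The point of the pattern is that the guardrail check sits outside the agent's own optimisation loop, so a model that drifts or self-learns the wrong behaviour still cannot act beyond the bounds.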

1:19:35 

I'd just come at it from a different angle. I think that's a pretty good discussion of guardrails. The other thing I think

1:19:41 

will happen is that we're still trying to work out what the nature of knowledge is in a post-AI world. And

1:19:47 

all I really mean by that is like when I was growing up, you used to learn things off by heart like phone numbers, 

1:19:53 

addresses, things like that. And then over time we learned to outsource that to technology. And then we used to

1:19:58 

have to know facts for university, and then we learned to outsource that to Google. And I think

1:20:04 

in educational theory there's this thing called Bloom's taxonomy of educational objectives, which starts off

1:20:09 

with recall at one end, which is just knowing things off by heart, and ends all the way at synthesis, the ability to

1:20:14 

create new information. And I think, you know, Google and the proliferation of

1:20:20 

consumer tech sort of said, hey, we don't bother learning things off by heart much anymore, maybe times tables for basic arithmetic or whatever.

1:20:27 

I think we're now actually in the middle of outsourcing thinking to AI and in 

1:20:32 

doing so, in the same way, how you get there matters. The AI might conclude something for you, but why

1:20:39 

did it conclude that, and in what way? Would you have come to that conclusion yourself? And is the act of thinking not

1:20:44 

itself a muscle that you should be working? I think we haven't really worked out what the downside of

1:20:51 

all of this outsourcing of important human cognitive function to machines is

1:20:58 

going to be. But I've worked with people who literally feel lost when they're not in front of

1:21:04 

ChatGPT, because I can see them trying to think about things, but they've almost forgotten how to do that. And

1:21:10 

so I do think there's a behavioural change. We don't know what's worth

1:21:16 

knowing right now, but I suspect it's more than we think and I suspect we're probably offloading too much to the AI. 

1:21:21 

So, let's see how that plays out. Earlier this year, my team was spending some time looking at company

1:21:28 

rollouts and the experiments, right? And there was a bit of a kerfuffle in the news. Probably some people heard when 

1:21:34 

companies were starting to talk about whether to treat agents as employees, right? Or whether to give them that identity piece that you brought up too. 

1:21:41 

And I think for me an interesting moment was when someone raised, well the difference is that if an employee made a 

1:21:47 

mistake, it was at employee speed, but in customer service we're going to start making mistakes at machine speed. And if you can't figure

1:21:53 

out which agent is going off the rails or needs the additional training or help, we're going to be in a world of 

1:21:59 

hurt. And so beginning to think about again back to design principles and so many other things. It's just a very 

1:22:05 

interesting time to be alive and doing this work it would seem. Um, anyone else with another question? We've got a couple of minutes left. 

1:22:11 

Sorry to the front over here. Yeah. Um, so I'm asking this question because I'm interested to know where you 

1:22:18 

see the low hanging fruit and the opportunities more in a macro sense. Um, so I'll put you in a bit of a 

1:22:24 

scenario. Any of you can answer this one, but um if you were made CEO of a startup that has a load of AI 

1:22:30 

developers, software developers, and you were given the objective of making the most positive

1:22:37 

disruption in a commercially viable way in the shortest amount of time, what 

1:22:42 

would you make your business about? 

1:22:47 

So positive disruption meaning just a good thing for society? I suppose that's subjective, Edward, but, yeah,

1:22:54

positive to society. I'd probably go after the deployment of AI in education.

1:23:00 

I don't know what that means in the near term, but maybe I'd try and build something there, because the vast majority, maybe 80 or 90%,

1:23:06 

of the planet does not have access to primary, secondary or third-level education. And I think there

1:23:13 

should be, because of AI, a way to provide mastery-level tutorship to these people,

1:23:20 

and really increase social mobility, increase equity in society. So I think that's what I'd

1:23:26 

go after. If not that, perhaps medicine, with the exact same argument. Yeah, I'd agree. I think you've

1:23:34 

always got to find the purpose. What's the outcome you need to go for? Is that about climate change,

1:23:41 

to your point about education? Is it about inclusivity? Because there are lots of problems that

1:23:47 

aren't being solved with AI right now because they're not being focused on. When you talk about learning, there are lots of

1:23:52 

giant addressable markets where people can get, you know, multi-billion valuations, and

1:23:58

that's where the focus is going. The big tech companies are in a race

1:24:04 

with one another. So I think if you're in this sort of wonderful scenario where some

1:24:10 

philanthropist billionaire came along and said, right, we're going to write a cheque for a billion, I'd pick a really meaningful, purposeful

1:24:18 

pursuit and say, right, could you get the engineers on it, whether that's medical research, whether that's

1:24:23

taking even some small component of climate change, to say, is there a way we can design X, Y, Z, or is there a way we can

1:24:30 

start, you know, urban planning for the disaffected. I was hearing a conversation

1:24:35 

this morning, and what really struck me, and it was a bit humbling, was that I think we see all

1:24:43 

of our issues through a pretty first-world lens. We talk about it, you know, sitting in the West, where we

1:24:50 

all have access to technology and I know we've got issues at the moment and there's a level of polarisation but I think the point that was made this 

1:24:56 

morning was that actually there are, you know, two or three billion people in the world with zero access to technology right now, and there are some

1:25:03 

real problems to solve out there. So that's where I would focus, and then get that

1:25:10 

brainpower, those engineers, to really work it out, because I think a lot of the solutions you'll find are about connecting different corpuses of data to

1:25:16 

solve problems, whether that's climate change or poverty or whatever.

1:25:22 

But when you say education, it's not just teenagers, right? It's probably also adults, in terms of personalised

1:25:28 

literacy. I think that's also low-hanging fruit here. Another

1:25:33 

thing I would like to mention is AI and companionship, personal relationships. That's something

1:25:39 

people would not have expected when ChatGPT first launched, but it's now the

1:25:45 

top use case in ChatGPT. So I think there is a lot of market demand

1:25:51 

that we just didn't expect emerging from this domain. I'm not sure whether it's a good thing or a bad thing

1:25:58 

that it's substituting human relationships, but I think that's one of the top use cases now. And with that, we are

1:26:05 

out of time. Thank you all so much for being here and thank you to our delightful panel. Really, it's been a pleasure. 

Autonomous AI is progressing at pace. We explore the rise of agentic systems and the opportunities they present for rapid reinvention. Their effectiveness, however, hinges on strong infrastructure, trusted AI governance and people who can harness AI to do more, better, faster. 

Key questions to be explored: Why is now the time to act? How can AI compel reinvention? How can people rewire their mindset to adapt? How will governance help with the accelerating agentic AI roll-out and the evolution of your business and workforce? As AI reshapes the way businesses operate, what strategic investments in future-ready skills are needed?

Meet the panellists:

Bivek Sharma

Bivek Sharma, Chief Technology and AI Officer, PwC Middle East 

Mengchen Dong

Mengchen Dong, Research Scientist, Max Planck Institute for Human Development

Des Traynor

Des Traynor, Co-founder, Intercom

Jennifer Strong

Jennifer Strong, Host and Executive Producer, SHIFT Podcast 
