MANOJ SAXENA:
00:00:03:03 Welcome to this panel. Actually, we decided as a group, the first thing we're going to do is change the name of the panel. And so, the topic we're going to focus on is — ‘Will AI Kill Us?’ And should we be governing it, and if so, when and where? So, we're going to have a very interesting set of perspectives from folks who cover a range of industries and a range of disciplines.
00:00:30:08 I'm going to ask the panel to introduce themselves first. So, Raj, you want to go first? Yeah.
RAJEEV RONANKI:
00:00:33:20 Raj Ronanki, CEO of Lyric.ai. We use AI to make sure that the payments between providers and payers are accurate, which is about as boring as AI governance and alignment, and all these things. So, we'll also make that sexy. Trust me.
NATASHA ALLEN:
00:00:49:15 The most boring of it all -- I'm an attorney, so you can't get any more boring than that. But I co-lead the AI initiative at Foley & Lardner, a full-service law firm, and I'm happy to chat with anybody after, if you have questions.
EVA NAHARI:
00:01:00:18 I'm Eva Nahari. I'm a chief product officer at a startup here in Silicon Valley, trying to help enterprises not shoot themselves in the foot, building an agentic platform that makes it safe. So, we fix hallucinations and other things. But before that, I also did AI -- in 1998, I implemented an agentic system, back in the day before it was cool.
00:01:29:17 So I have some personal experience with it.
JENN KOSAR:
00:01:34:06 Hi, everybody. I'm Jenn Kosar. The only thing maybe less popular than a lawyer is an auditor, so I just--
EVA NAHARI:
00:01:42:07 We love auditors.
JENN KOSAR:
00:01:44:40 Auditor. So, I'm from PwC, 25 years as a CPA, but now I lead our assurance AI business, which means I help clients with their strategies around AI, building trust and confidence in AI systems. But I also think about how to audit AI, whether it shows up in compliance, financial work, however it may fit.
00:02:01:22 And then I also am responsible for governance of AI at PwC. So, you're sensing a theme here.
MANOJ SAXENA:
00:02:07:19 Excellent. Well, welcome again. I am Manoj Saxena. I'm the CEO of my fifth startup, so I'm a certified masochist. And nine years ago, I started a nonprofit called the Responsible AI Institute. At that time, I was running IBM Watson as its first General Manager. So, I've collected a lot of scar tissue around the hype -- the promises and the perils of AI.
00:02:32:07 And I'm back at it again. Trustwise, just one line: we build the HR system for agents. How do you onboard? How do you drug test? How do you do performance appraisals of agents? Or, I'm beginning to think more and more, we build prison systems, because these things are not going to act like proper employees.
00:02:51:21 And I'm just going to set a little bit of framing here. Then we'll ask the panel a few questions. So, as I've gotten into this, as I’ve started understanding the nature of these systems, one of the things that's come out very clearly is the level of autonomy of these systems is taking off.
00:03:09:03 So as of now, as of last week, you had systems like Claude that could run for almost 20 hours nonstop and do a task. And the projection is that, by the end of next year, a typical agent could work for over 40 hours. So, in essence, what you are seeing is the emergence of a second workforce.
00:03:27:18 You're going to have a digital colleague that's going to show up. The problem is it's not just a colleague. It's an out-of-control, unknown colleague that could hallucinate, could leak sensitive data. And recently we are finding it could blackmail you, it could coerce you, and these agents could come up with a language that's not English and talk to each other in it.
00:03:46:13 So these are all these emergent behavior patterns that have really caught my attention over the last six months to say, “AI is not a security problem. AI is a safety problem.” We are launching this new entity that has its own emergent behaviors. I like to visualize it with -- despite all the promises, we'll talk about the promises, but most of it this session is around controlling this.
00:04:10:16 In my mind, it is thousands of Chuckys with a knife in one hand and a credit card in the other. That's what we are launching. People think we are launching squirrels, but we are actually launching Chuckys with knives and credit cards. In your company, it could bust up your cloud bills, bust up your GPUs, do a lot of damage, and, like I said, start emerging into new behaviors.
00:04:32:15 So the way we have framed this panel is to ask the question around — how do we make sure that these entities align with our intention as the owner or the teammate, or the creator, whatever you want to call it? So how do you align AI so that it meets the goals and the intentions of a business or of an individual?
00:04:53:17 And in order to get that alignment, what is the level of control that we should be designing so we can trust these systems at scale, because we typically -- I mean, I truly believe we are at a point where all hell is going to break loose in the next two years. I was not like this six months ago. But now when I’ve started reading papers and seeing all models behaving this way, there is not enough focus given on emergent behaviors of these systems and control of the system.
00:05:18:07 So we're going to talk about how, in a meaningful and beneficial way, we take these Chuckys with the knife and credit card and put an ice cream cone in one hand and a calculator in the other instead. So, Rajeev, why don't you get us going with your view on how large companies, or businesses in general, should start putting this to work?
RAJEEV RONANKI:
00:05:40:16 Well, first of all, we all owe you a huge thanks, Manoj, for bringing AI into the enterprise. So, I don't know if you guys remember all the Watson ads that used to run relentlessly on television. Right? So –
MANOJ SAXENA:
00:05:50:07 Except for the Bob Dylan ads.
RAJEEV RONANKI:
00:05:51:13 Except for the Bob Dylan ads. But when you said "time to put Watson to work" -- you were the marketing guy behind it. And, of course, some things worked and some things didn't. But the point is, it started, I think, the first real point in time where we really thought about using AI in the enterprise.
00:06:10:21 And I was at Deloitte at that time. I saw the genius behind what Manoj was doing. I said we should go create a practice, and you can ask all the partners at Deloitte how big their practices are today. I'm sure that my colleague here from a slightly less competitive firm, PwC, would attest to that. So, here's the frame.
00:06:29:18 So I think Manoj is absolutely right. We can't keep calling these things "governance" and "alignment." The picture that comes to mind is large groups of people sitting in a conference room saying, "Okay, check the box, check the box, check the box. Do we go live?" I've lived this. So, incidentally, my previous company -- it was called Elevance Health, also known as Anthem Blue Cross Blue Shield.
00:06:52:02 I was brought in to make Anthem Blue Cross Blue Shield an AI-first company. When they first asked me that, I was like, "Did I hear that correctly?" And it turns out -- they gave me this office -- can you hear me better? Great. They gave me this office that was, incidentally, right next to the Head of the Internal Audit department.
00:07:13:23 Imagine the irony. Head of AI, head of internal audit -- photo negative photo (Unintel Phrase __ 07:19). Right. So, the minute I released an AI, he'd come after me and say, "Shut it down." And so, it became a constant cat-and-mouse game of how do we actually design systems where the trust can accumulate and compound faster than the risk aggregates. That's the equation.
00:07:36:12 If we can solve it, then we can scale it. If we keep calling it alignment and governance, it's going to sound like -- when the after-party here starts and they're passing out the cocktails, imagine someone comes around with a plate of broccoli. Are you going to eat it? No. That's what governance sounds like, right?
00:07:54:20 But instead, if you say, “Hey, if you eat this broccoli, you're not going to have a hangover the next morning,” are you going to try it? You might. So, the mindset needs to be not making governance, safety, trust, speed, all competing and opposing things. You have to make it synergistic, harmonious, and use it as conditions for scaling. And the examples are all around us.
00:08:16:04 We'll come back to it. But the framing is very important for this.
MANOJ SAXENA:
00:08:20:22 Great. Natasha, anything to add?
NATASHA ALLEN:
00:08:24:05 Not to that. I think, on the legal side, it's a little difficult because we move a little slowly when we're doing legislation. Just to give the lay of the land a little bit: the prior administration talked about safe and secure AI. The new administration is talking about innovation, and just innovation, right -- being at the forefront. And you have 50 states that have implemented AI legislation at various stages.
00:08:50:07 So it's mix and match, right? There's nothing you can really put your finger on in terms of how we are going to legislate this. Now, add agentic AI on top of that.
MANOJ SAXENA:
00:08:59:00 Yeah.
NATASHA ALLEN:
00:08:59:50 Right? So, you have this whole system that is just trying to catch up. And around the world, I think the governments and world organizations are really trying to get their arms around this.
00:09:09:01 I know the UN has created kind of a think tank, right, where companies -- or countries -- can come together and talk about how we are going to govern this. China is trying to put in place the first worldwide AI governance legislation. So, I think it's very difficult. And there are different things and tweaks we can put in place, in terms of existing regulation right now, in order to deal with agentic AI.
MANOJ SAXENA:
00:09:35:02 Let me just build on that, because one of the reasons I came back to do a fifth startup was not because I was getting bored -- I was happily teaching somewhere. But one of the things that I saw, and I think it goes to the comment you just made: everyone was thinking -- I was thinking, two years ago when I started this -- about bigger and bigger models.
00:09:52:12 And it seemed to me like you're building this giant nuclear core, but no one's putting a dome on top of it. Right? And if you look at the analogies, the last time something like this happened was in the automotive industry. Safety wasn't thought of until 40 years ago. People thought safety was putting a bumper in the back and a bumper in the front
00:10:11:16 and a nice paint job -- "I've got safety." Right? But in that era, hundreds of thousands of people died over decades before crumple zones, airbags, and seatbelts were built in. It seems like with AI we are in the same spot, except instead of five decades, we maybe have three years or five years, because of agentic AI.
00:10:30:07 What role do you think your function can and should be playing? Because I think looking to regulations is one piece, but I think companies need to regulate themselves more. Do you have any thoughts on how your function can help companies sort of exploit this properly?
NATASHA ALLEN:
00:10:44:21 Yeah, absolutely. So, legislation was very static, right? It was the end result: we're going to regulate that and see what happens. I think now we're in an age where you have to be monitoring along the way. So, in terms of what you can do with these technologies: put in place systems that actually monitor what the agentic AI is doing along the way.
00:11:06:05 And that can include anything from monitoring -- as soon as something happens, have an audit trail, something to look back on. Right? And if you talk to a lawyer when you go to contract, it's always about allocation of risk. Take a good look at what your indemnification provisions are, what you're doing with regard to transparency -- those provisions are things that will probably be more of a focus.
00:11:28:19 We're going to talk about liability later, but I think, even more, underscoring what you're doing in terms of allocation of risk is very important. And think about what policies should be put in place so that you have a kill switch.
MANOJ SAXENA:
00:11:41:15 And should that be done by humans, or should that be done by agents? Should we be using AI to control AI?
NATASHA ALLEN:
00:11:47:16 I think it's a combination. I think you always -- you still need the human oversight, right? These reports may come in, but you can't just have an AI agent say, "Oh, I know -- this is your company."
00:11:57:14 But I think you do still have to have human oversight.
MANOJ SAXENA:
00:11:59:19 Yeah, I was trying to set this up for Eva, so there you go.
EVA NAHARI:
00:12:03:06 I can't wait anymore.
MANOJ SAXENA:
00:12:04:05 I'm doing my job now. Okay.
EVA NAHARI:
00:12:07:16 No, but you're right to a large extent. In parallel, we can also look at building these controls in. Think about the car analogy that you brought up. If humans were merely recommended to put a seatbelt on, would all of them do it -- even if it was regulated and controlled? Probably not. So, let's build it in and make it easier for companies and users that aren't as deep into AI as other people to really do it right.
00:12:41:21 And I think I have a good story, if I may take a little air time, to help everyone understand the shortcomings of AI. I see it as a maybe a four or a five-year-old, like the amygdala is not really developed and very impulsive, and can come up with really great ideas. Right? But you need a parent. And I'll get back to that.
00:13:04:14 So a good friend of mine, a professor at Berkeley, shared this story with me, and I use it all the time. It captures everything. So: I live in California. I have a house. There are termites. Right? I want to take care of the termite problem. I use an agentic service. I tell it I want to take care of my termite problem, and I want to do it cheap and fast.
00:13:30:10 I let the agent -- a multi-step agent -- run with the work. And I go to my office, do my job, come home, and my house has burned down. But the agent did exactly what I told it to. Agentic AI doesn't know the full context we humans do. It wasn't brought up by parents telling it what's right or wrong.
00:13:56:02 And yes, you can train a model on a lot of data, but common sense -- the full context of all the things you should not do -- is really hard to incorporate into the data you train a model on. So, in my humble opinion, which might not be so humble, it is not true intelligence yet. It is artificial intelligence.
00:14:20:02 And that's why we are so wooed by it -- because it acts and looks like a human, but it's really a parrot, or a four-year-old with no amygdala control. Right. So, we need these guardian agents. We need controls in a different way. Why? Because if we leave that multi-step agentic workflow to run on its own, we need to be there when it goes off the rails.
00:14:42:09 And sometimes humans can be in the loop. But if we're really going to unlock the potential of all the workflows we can actually automate, we need to have guardian agents course-correcting. And to do that right, we also need to capture the intent. Is the intent to burn down the house? No. The intent is to get rid of the termites.
00:15:05:13 Okay, but at all costs? No, probably not. We need some kind of wisdom layer -- some refer to it as a semantic layer; it's become popular again. We need intent logging. And that's also good for audit -- and I'll pass the ball soon. We no longer just need the "who did what and when"; we also need the "why" -- not only to make sure we stay on course, but also to see what happened after the fact when something disastrous happens.
00:15:42:17 And the termite story may be a fictional one. But if you look at the news just a couple of months back, there are real examples of companies whose agentic workflows went and erased the whole production database because that fixed the problem. Right. See my point here? Yeah, we need to guide it. We need to have guardian agents.
00:16:07:16 We need to have intent logging. And we need to ground these agents in data that we control. That's my message.
MANOJ SAXENA:
00:16:14:14 And that's wonderfully said. So, Jenn, the term "assurance", I think, in your title is a really pertinent one here. So how do you see this, and how does PwC see this -- the alignment and control problem, or assurance problem?
JENN KOSAR:
00:16:26:21 Sure. I mean, the problem with going last is I have reactions to everything that was said -- so, do we have time?
MANOJ SAXENA:
00:16:31:19 Take your time. Right. Go right ahead.
JENN KOSAR:
00:16:33:04 I'll start with what you asked me. So -- I mean, first off, the punchline of what I'm about to say is that you can't use old techniques to solve new problems. I know we talked about that earlier. And the idea of assurance and audit, and some of the things you would all expect me to say, is in some respects an old technique.
00:16:51:22 It's an idea that auditors, whether they're internal, whether they're regulators, whether they're people like me and an external organization, come along, and to some extent look at something after the fact. Look, because there was a problem, because it's a requirement. Because, by the way, good organizations believe in it and think it's the right thing to do as well.
00:17:09:17 But it's expensive. It's after the fact. It may unearth recommendations and things to improve, and that's a positive that comes out of it. But for what we're talking about, for all the things you guys have mentioned -- and I know, I see it in my job right now -- it's not going to work. That's not really going to help with the confidence we need to build, with the systems we need to create, with the things we need to avoid and prevent, of course.
00:17:31:11 And so what we talk about is the new forms of assurance that need to exist -- the things that I work on. By the way, they're not brand new. These concepts do exist in other emerging technologies and systems; they're just still not quite fit for purpose. It really requires concepts that have been around, like continuous auditing, continuous monitoring, trusted design.
00:17:50:22 These are things that have floated around. They're, again, not brand new, but it's kind of time to get serious about how they work and about incentivizing the right organizations to do so. What I find in my travels, and in surveys that we do, is that the largest, most mature organizations -- yes, perhaps the most regulated; again, incentives might vary --
00:18:12:03 have the resources, the capacity, the motivation to build the -- I really hate the word governance lately. Sorry. That's what I wanted to change; it's a dirty word lately -- to build these mechanisms, and in fact are motivated to do so. We find that they will say they see return from that. They see better ROI, better customer engagement or satisfaction, better outcomes.
00:18:38:01 And it's not just risk mitigation. It is in their interest to do the things that we are talking about, and that's one of the many reasons they do it. The challenge is that the smaller organizations -- the startups, the AI natives that we all talk about -- either don't have the resources, don't have the motivation, or aren't required to --
00:18:55:09 right, again, all of those things I just talked about. They're building some of the, yes, more interesting, exciting, and perhaps at times scarier things; they may not have the alignment of all those things I just mentioned, and also may not be required to have anybody looking over their shoulder to address everything we've said.
00:19:14:20 So back to the question of assurance. What we believe in -- and we've talked about it publicly; look for us out there, you can find me out there talking about it -- is the idea that we can apply assurance. We can apply the concept of a third party coming in and checking, voluntarily, in a more proactive way,
00:19:30:23 right, not once a year, not once every five years, to tell you, as an organization, no matter how big or small you are, and then allow you to share that story with the world, to say like — this is what I've done. This is what I've built. This is the proactive techniques that I'm using for these systems.
00:19:46:19 And this is how they work. It's that transparency. I mean, it's a little bit of bragging rights too, to say -- I've done this right. I've done it right from the start. And here's PwC -- or others; we're not the only game in town, because we work with our own kinds of regulators to make sure everyone can do it --
00:20:04:04 the right kinds of organizations. So that's where we believe it needs to head. But again, everybody I talk to, I actually think, is along for the ride a lot more than you might think. Maybe not as many Chuckys out there.
MANOJ SAXENA:
00:20:16:70 Absolutely.
NATASHA ALLEN:
00:20:15:17 I think you touched on a good point, right? There's no incentive or requirement to do anything. So right now, everyone just has to act very altruistically. Right. So, therein lies kind of the difference in terms of those who will engage. And maybe, like you said, you can't afford to do it -- it's just something that isn't in the budget.
00:20:33:02 But I think that's where you get this weird kind of dynamic.
RAJEEV RONANKI:
00:20:35:20 Yeah.
MANOJ SAXENA:
00:20:37:80 It's a -- do you want to add something?
RAJEEV RONANKI:
00:20:38:14 Yeah, actually, I want to string some things together here, because I think we're hearing slightly -- actually very -- diverging views or opinions. But we're all California-polite here, right? So, we're all nodding and agreeing as if we are all saying the same thing. We're really not. Just to be controversial, all right: I'm going to project something onto Natasha that she probably did not intend, but just for the sake of it, let me project it.
00:20:59:09 Okay. Right? So, think of what Natasha is saying as -- let's create a criminal justice system for AI. Well, I know -- that's why I'm saying "project." Policies, governance, humans in the loop, right? Correct? So, to a certain extent. What Eva said was -- "Yeah, well, you know what? You need to think of this as a sort of ill-formed intelligence" -- the technical word she used I can't remember, but "guardian agents" kind of stood out.
00:21:23:16 Right. So, she's saying, sort of theoretically, AI needs to govern AI, with humans double-checking the math. And Jenn is like, "I don't like these terms, assurance and all of that" -- because my consulting friends at PwC are like, "Yeah, assurance. What is assurance?" Yeah. So, honestly, that's what Deloitte says.
00:21:42:03 I mean, I was a consulting partner. I couldn't wait to get rid of my audit practice, because then all my independence restrictions went away and I could sell -- yeah, so. And I love Foley. And you guys are great (Unintel Phrase ___21:54). I think they're the only law firm that's kind of at the table at a strategic AI conference.
00:21:57:10 But let's not kid ourselves, right? We are designing a form of intelligence, and the terms we're using are about a technology. It's not a technology. It's a form of intelligence. We've known that since the '40s, since World War II. Minsky and everyone have been on this quest for artificial intelligence -- neurons. What we call these things
00:22:16:15 just reflects the way the human brain is designed. So then we, of course, need a criminal justice system, because we have to have rules -- but that's not nearly enough to govern and control this intelligence. So therefore, what do you do in case something goes wrong? Do you lock it up? Lock up Chucky, in your perfect example?
00:22:37:11 Where do you lock them up? I mean, if you do that, all you're doing is slowing things down. Well, if you slow things down -- we're in a global race. Is China going to be faster? Is India going to be faster, or is Europe going to be faster? And that matters. The winner of this race, at a country level, means jobs, economic growth, and all of those factors.
00:22:58:03 So slowing it down isn't to our benefit. So, the only option really is: how do you design it such that the trust and the risk aren't opposing each other? We have to assume risk will exist, and we have to make appropriate decisions. Take the automotive industry you brought up, right? So, we're right here in the
00:23:19:16 heart of Silicon Valley; in San Francisco, Waymo is running around, Tesla is running around. Waymos don't go on the highway, but Teslas do -- you can use Autopilot on the highway. Right. Two very different approaches. One is very deterministic, with heavy equipment on the cars, with LiDAR. And Tesla is trying to solve it
00:23:37:02 in a software-and-AI-cameras manner. One is more scalable; one is, arguably, more safe. But which is better? We don't know. So, all these debates are happening in the world of AI without enough thought to -- which approach should we take? What's going to be scalable? What's going to happen three years from now? I'll tell you what.
00:23:55:09 We're headed to a bust cycle, because no one is really thinking about it properly today -- which is why we need to go out there and educate everyone on this. So even though it sounds like we're saying eat your broccoli, eat a vegetable, that's not it. We have to make this topic sexy enough that people listen to it and ask, "How do you want to scale AI?" That's the question.
MANOJ SAXENA:
00:24:13:17 The reality is, though, if you look back at our progress on the technology side, the two major drivers are either enforcement or disasters. That truly is what brought things forward. Either you enforce something, or you have a mini Chernobyl on your hands and you start fixing things. And clearly enforcement is stepping back, because all governments are saying, "Hey, you know what?
00:24:34:09 You jump out of the plane and you stitch your own parachute as you're falling down. That's up to you." That's where the governments are -- go deploy. Our government is -- yeah. And the EU is kind of backing off also, as a result of our government. So, if you look at what's going on there, and then you go back in history and look at how big technologies came about -- it goes back to the comments you all were making around assurance and control and guardian agents.
00:24:58:16 Automobiles by themselves were not the real force multiplier. The public infrastructure that was put underneath them was the real force multiplier. The highway system is what really made the impact of automobiles. And my submission to you is that agents by themselves are like automobiles. Where is the trusted infrastructure? Where is the infrastructure on which these agents can run and talk to each other?
00:25:22:20 Some of the vision that you have about healthcare -- costs to be removed between payers and providers -- because today you can build the agents, but they are Chuckys and they will go off the road, because there are no highways to go on. What's your point of view on -- a, do you believe the infrastructure should be there? And b, what role should the government have in it? How will we go about building the infrastructure?
RAJEEV RONANKI:
00:25:42:13 I think the government needs to have a minimalist role in this. Quite honestly, once you introduce bureaucracy into the mix, then bureaucracy ensues from that. Right? So not to minimize the role of governments, they need to sort of frame it and say — this is what we want out of AI. We want economic growth.
00:25:57:06 We want safety. We want job creation. We want innovation. All right. So, within those things, each industry needs to apply a safety OS. All right. So, in healthcare, clinical and administrative are very different things. Clinically, you can't afford to make a mistake. Administratively, I mean, we make tons of mistakes today. We have, whatever, $1 trillion of waste.
00:26:17:00 So how bad can it be if you put AI into the mix? Is it going to make it $1.2 trillion? I don't think so. Maybe we'll save $100 million. So go faster, even if it's not entirely safe. But when you're practicing medicine in a hospital and treating your patients, you'd better make sure that's safe.
00:26:33:03 Right? Because you don't want to risk anyone's life. You don't want to risk any long-term health implications. Same thing. So, I think we can be sensible, fast, but be very—
MANOJ SAXENA:
00:26:42:40 But no government?
RAJEEV RONANKI:
00:26:43:05 Minimal government.
MANOJ SAXENA:
00:26:44:20 Minimal government. Okay. Eva?
EVA NAHARI:
00:26:44:07 Yeah, I think it's more complex than that -- more than assigning it to a certain entity. I think it's all of us, right? All of us need to take our part of it. Young startups need to build it in responsibly. Organizations I talk to -- S&P Global, Allstate, Travelers, all these regulated industries,
00:27:09:15 NBC -- no industry excluded. They are taking their own responsibility, in addition to existing regulations, because they know regulations will come. But before that, before it's clear, they are taking their own responsibility, building out their own internal AI policies that they've committed to following. They anticipate that regulations and governments will get there eventually, maybe, but it's better safe than sorry.
00:27:37:01 So the responsible companies that we talk to also step up. They care about trust. Maybe it's biased -- maybe that's why we could talk to them. But these companies at least have an AI policy. They have an AI governance entity inside, like an AI council. They have started to formulate clearer AI strategies. Many companies still lack that, but the names I mentioned are at the forefront of thinking about this.
00:28:05:16 And then they implement that as a guideline for who to partner with, what technologies to adopt, and how. It has to come from some thinking before acting -- but that doesn't mean stopping.
NATASHA ALLEN:
00:28:20:01 So I'm not from New York, but I'm Canadian, so I'm nasty nice. What I would say is, when you're thinking of legislation, government, laws, whatever you want to call it, I think we all agree certain tenets should be protected. Right? So, that is what I think should be employed when we're looking at agentic AI. Right? Not slowing down innovation, but going back to those tenets -- don't kill.
00:28:45:16 Probably a good thing, right? And you can build in a kill switch for that. So, when I'm talking about legislation, it's kind of harnessing and encompassing those basic tenets that we've all agreed on forever -- before or after AI -- to make sure they're incorporated into agentic AI. Not to slow down, because I agree with you, we are in a race, right?
00:29:04:16 And we're already behind. But it's just having those basics. And just going back -- I think you touched on this, too -- if you don't have somebody pushing you to do something that's right, you won't do it unless there's a disaster. Or, like you said, the players in specialized industries realize that there could be a disaster, because it is highly regulated.
00:29:25:07 Right. If you're in healthcare, finance, whatever it may be, they are recognizing -- we can't wait for the government; we need to put our own thing in place. And I think that's what the states are doing as well. They waited, they hoped, they prayed. It didn't happen, so they're moving on.
JENN KOSAR:
00:29:38:05 Yeah, I completely agree. I think there's a distinction to be made between social imperatives and business imperatives. There's some gray there -- I would call the financial services industry a little bit of both. But I absolutely think there's a role for government and quasi-government institutions to protect social interests -- things we as a society have agreed are critical, important, non-negotiable.
00:29:58:09 Protecting our children, protecting safe -- true safety. Right. I think there is a role to play and not ready to quite give up on that yet. But I think the business imperatives. I have seen success in this in other emerging technologies or emerging technology risk issues over my career, whether it be privacy or confidentiality of data, or even just how customers are treated in business transactions.
00:30:23:21 And what I've seen -- again, right or wrong or indifferent, it's just what has happened -- is that the largest organizations, on a sector basis -- again, you can debate whether sectorizing this is the right answer, but it's just what's happened and it seems to have worked -- will come together on their own and decide what best practices look like, and work together sometimes in a fairly non-competitive way.
00:30:44:03 They'll say, you know what, this isn't what we want to compete on. We don't want it to be that customers have to decide whether their data is protected or not by going to institution A or B. We know that nobody benefits when there are major cyber breaches at all the banks. That's not a good idea. Right?
00:30:59:09 So it's things like that where that's why I said that sometimes there's gray zone between social and business, but when the businesses are best placed to solve it, I think they recognize that they come together, set the standards, set the minimum expectations. And sometimes they have to work together to solve it. Sometimes they don't. They just go away and agree to do it.
00:31:16:16 And so I feel like that's actually where this is going to have to come from. And I -- like I said, I'm optimistic based on my conversations, anecdotal and then real data that we have that they seem motivated to do that for all the right reasons.
MANOJ SAXENA:
00:31:32:03 Let me ask you this. I'm going to shift the topic a little bit to excitement versus reality. So, what I'm hearing is the potential. All the panelists talked about it. And now we've talked about some real issues. Just going through, show of hands. How many of you believe in the next 18 months there's a reckoning coming, there is a trough of disillusionment where agentic AI is going to hit the wall because these issues haven't been addressed?
00:31:53:09 How many of them -- how many of you believe on a broad basis that the froth is going to settle down and then the Gartner Hype Cycle curve is going to come in? Do you think we are still ascending the peak, or are we about to hit the trough?
JENN KOSAR:
00:32:06:05 Are you limited to agentic intentionally or just AI in general?
MANOJ SAXENA:
00:32:09:17 Autonomous systems. Autonomous systems in general. Yeah.
EVA NAHARI:
00:32:14:21 I think there was some other panelist earlier who referred to the MIT report that we all know, like 95% of the projects fail.
MANOJ SAXENA:
00:32:23:15 Beyond that. Yeah.
EVA NAHARI:
00:32:24:13 I think that's kind of a sign that it's not some explosive trough that's going to happen, but that it doesn't really move along to autonomous workflows. I think that's what I see here.
RAJEEV RONANKI:
00:32:36:17 Okay. Can I answer it differently though? I think there are more Pets.coms in AI today than there are Amazons. So that's a given. Right. So that will get flushed out.
MANOJ SAXENA:
00:32:44:09 It’s a bigger market too.
RAJEEV RONANKI:
00:32:45:04 Yeah, that's a bigger market, but who cares. That's just sort of the nature of how VCs invest money. Love them. They invest in a bunch of stuff. Nine out of 10 don't work. One works. That makes them happy. So that's what it is, so let's just accept it for what it is. I think what we're missing, Manoj, is a Wall Street education angle.
00:33:02:18 And you were on the Federal Reserve Board for a while. So, I think Wall Street obviously just kind of looks at lagging indicators — revenue, EBITDA, growth, but those are all outcomes of whatever the companies are doing. That's right. And they measure it on a quarterly basis. And so really that's like an outdated concept. So, then you create Fortune 100 lists on that and whatever.
00:33:24:05 Meanwhile, this whole innovation is happening in the world of AI. Right? So, think what happened to 2007 when the iPhone first debuted? Well, of course, Apple still is the most -- one of the most valuable companies in the world. But forgotten in that history is that AT&T was a fourth-place incumbent that had an also-ran network. It was called Cingular or Singularity or something like that.
00:33:46:09 But they had an exclusive deal with Apple for the iPhone, and then became a major competitor to Verizon. And they're still there. Still to this day they're relevant. So, think of today's iPhone equivalent being AI, except you don't have to have exclusive contracts. It's mostly open source. And if you put the time and effort into it, you can use it for free.
00:34:10:00 The question is — Why hasn't the Fortune 100 woken up to this reality? And why hasn't Wall Street -- why isn't Wall Street asking those questions? What are you doing with AI? What is your strategic advantage? What's your data advantage? How are you protecting data rights?
00:34:25:10 Because if you start asking that and create a Fortune 100 AI list and say, “We're going to measure you on what's ahead, what's the next five years,” then you can blend profit, purpose, safety, trust, because that's what's needed to be on that list.
00:34:38:19 And if you're going to be the most valuable AI company in the world, that's an incumbent advantage. The banks, the healthcare companies, the providers, all of whom have accumulated 50, 70 years of longitudinal data on all of us. Well, how do you leverage that? And why isn't that a strategic advantage? Well, in that question lies the answer to how do you make it safe?
00:34:58:14 Because then it's about profit. And let's face it. Our country is founded on the principles of capitalism. And also, if you leverage that culturally, then I think we win.
MANOJ SAXENA:
00:35:07:19 I'm going to hire him as my marketing spokesperson. That's the story line exactly for the control layer. But I think the -- so going back to that comment, though, if you look at all the excitement and all the buildout, and then you look at the issues we just talked about here, the lack of a semantic layer, the lack of a guardian agent, the safety OS,
00:35:26:22 What I'm trying to get to is — what do we expect over the 18 months here? I mean, do we see this continuing to bump up like the way it's going up, or is it going to pull back and then find its way till we invent these other pieces of infrastructure and guardian agents? What's the scene like?
00:35:42:19 I know it's -- we've been through it before. Is there another dot.com bust that's coming in AI?
RAJEEV RONANKI:
00:35:48:13 100% that's coming. The question is — can you actually predict which ones are the ones that are going to crash? That's the more important one. It's inevitable that every time--
MANOJ SAXENA:
00:35:57:00 The first one is what I was trying to get to you is--
RAJEEV RONANKI:
00:35:58:14 Well, of course, it's going to be the case.
MANOJ SAXENA:
00:35:59:20 Asking that question in the heart of San Francisco -- there's an important point to that. Yeah.
RAJEEV RONANKI:
00:36:04:15 I think your guess is as good as mine, but it's no later than two or three years there is a bust coming. Right. But then the question is — who survives on the other side? And it's less about the whatever Pets.com equivalent of today's AI is. The question is which incumbents will realize their advantage and ride the trough to the curve that's coming.
MANOJ SAXENA:
00:36:26:10 Questions from the audience.
FEMALE PARTICIPANT 1:
00:36:28:01 Thanks for the panel. My question is -- when you were talking about risk and governance, you were talking as if everybody is a good agent. But let's be real, right? We've seen and we know corporations are able and willing and have been not good agents. Right now, we're living in kind of a paradigm-unraveling time.
00:36:49:09 Right. And government is for sale. Every piece of the government is for sale to the highest bidder by the very people who we're saying are supposed to be good agents. Like, realistically, realistically, what do you think we can do, if there is anything at all? I don't know.
MANOJ SAXENA:
00:37:08:14 Natasha?
NATASHA ALLEN:
00:37:11:70 Not wait. Right. We're saying implement at every level, right? Whether it be at the state level, the regulatory level, at the company level. Right. It's one of the things -- I agree with you. The government is all over the map, and the switch from responsible AI to just innovate threw everything out the window in terms of transparency, no bias.
00:37:32:04 So there are no guardrails. I agree with you 100%. But that's why I'm saying it's incumbent -- and I think this is what Jenn was saying -- it's incumbent on the individuals, the organizations, the large companies to set the table, the regulators to set the table and set the stage. Because if we're going to wait, we're going to wait forever.
MANOJ SAXENA:
00:37:48:20 Great question. Thank you.
FEMALE PARTICIPANT 2:
00:37:50:22 Hi, folks. My question is around kind of how companies might or might not be held responsible for their technology or technological products.
00:38:04:90 I think historically we've not really seen it being very much regulated, or companies being held responsible. So, do you think that AI or AGI is going to be something similar? Or, if we do end up with a governing layer, will it require some kind of catastrophic event that then brings people together? Thanks.
RAJEEV RONANKI:
00:38:30:05 Yeah. So, I think it's no different than how you would treat humans. Right. So whatever code of conduct exists for humans in a company would exist for the AI. And so, if the SEC rules, for example, are violated, then there are consequences. So, think of AI as a proxy for humans, and the same rules apply, if not even more expanded ones.
MANOJ SAXENA:
00:38:50:17 Yeah. To use another analogy, I think AI will evolve the way chemicals have evolved. There are chemicals in my toothpaste and toothbrush; I have no problem -- that's my internal HR AI. There are chemicals in my paint, but I want to make sure there's no lead in it. And then there are the chemicals painting a nuclear facility, and those have to be handled differently.
00:39:10:23 So I think with AI, there will be different versions of AI used for different applications, and what you call a safety OS, and the trust requirements and the regulatory requirements for them. The EU AI Act has done that -- it has called out a number of applications where you should not go. Right. So, I think it's that. A context- and application-specific approach is how I see companies taking it.
EVA NAHARI:
00:39:32:10 And I think disastrous events will happen and that will accelerate it.
JENN KOSAR:
00:39:44:07 Yeah. And I also think what will challenge the framework -- which I agree with: the companies are responsible, the people involved are responsible, all of our existing frameworks and regulations apply -- is that the ecosystem is incredibly complex. It's what I advise boards about a lot; it's what I advise senior management about a lot.
00:39:56:02 It was already complex as a technology architecture. It's even more complex now. Does everybody really understand which layers are providing what? I agree contracts are important, but does everybody really understand what's happening and what you're getting from your LLM provider, or your data provider, or your cloud provider? And I will tell you, I don't think people really understand. And there's a lot of work to do to clarify that.
00:40:21:11 And it is going to, unfortunately, take an event. Maybe I hope not catastrophic, but it's going to take an event to clarify that.
MANOJ SAXENA:
00:40:27:20 Context-specific contracts. So, any other question?
NATASHA ALLEN:
00:40:30:12 Can I just add a couple of points? So, I think when you're dealing with AI, you have to go state by state, really. That's the problem, because it is kind of a patchwork. Every state has its own legislation, its own flavor. Sometimes it's hidden in privacy legislation as well. When you're talking about agentic AI, courts have been taking the position that AI agents are kind of like employees or contractors, right -- extensions of the organization.
00:40:54:03 So the liability can actually come back to the organization itself. Another thing you're seeing, like I said, is the allocation of risk. At this point in time, when there's no real legislation in place, you're allocating risk. So, I agree with you. People may not understand their contracts. Great lawyers in the room. I think you should talk to your attorney to figure out what you're allocating and who's taking on the risk in terms of the use of AI agents.
EVA NAHARI:
00:41:18:06 I also think we can do more about visibility -- again, building visibility into the actual technology, because that will proactively serve what will need to happen. And I think, if you take any message from here, it's this: participate yourselves, do what you can proactively. That's the message that I would like to leave you with.
MANOJ SAXENA:
00:41:40:20 Educate and activate. It's a great question. And the question again to reframe is — where does civil society fit into all of this? And I think in terms of the conversation -- and I'll have a little bit of a statement, I would love to hear your point of view. This is the essence of what I have dedicated my life to, this problem of what will AI do for us.
00:42:01:16 What will I tell my grandchild when she grows up? You were there. You were there with Watson. You were there with agentic AI. You made a lot of money out of this. But why did you not put the right things in place? And that's what drives me at this stage of my life. I can't double my life anymore.
00:42:15:10 My runway is getting shorter. So, the purpose of my whole thing is to figure out how do you help deploy these technologies in a way that is beneficial and sustainable? So, there's a whole sustainability angle to it on electricity, carbon and stuff like that. But when I look at it from a society perspective, I think there is some
00:42:34:10 tremendous amount of good this is going to create, tremendous amount of abundance this is going to create. Healthcare. I mean, we are going to see in our own -- in the next 10 years, we will see lifespan being extended by 10, 15 years. We're going to see elder care being done through exoskeletons and robotic pieces we have not even imagined yet.
00:42:51:17 Biology, which is the largest analog frontier, and one that has been opaque to computers, is now getting opened up by AI. So, I think there is a tremendous amount of benefit general society is going to get out of it. The biggest issue I see, outside of the issue of harm that we discussed, is how do we make access to this technology equitable.
00:43:12:12 How do we prevent happening with AI what happened with electricity, where most of the countries below the equator got left behind for 50 years? When I look at AI from a society perspective, how do I make sure that an eight-year-old girl in Nairobi has the same access to AI as an eight-year-old in New York? Right.
00:43:30:08 Those are the societal issues, I think, which we will fix. We will. But we have an opportunity because, as Rajeev said, we have networks now. We can instantly get this technology around the world. With electricity, you had to get the generators and the technology out. Now we can deploy it.
00:43:46:02 So, I think a big part of it starts with finding citizens who care about this stuff, finding ways for us to get engaged and say -- how do I join nonprofits -- not to make a plug for the Responsible AI Institute -- and how do you enable this stuff? How do you bring these issues to the surface? And then what do we do as volunteers to enable this so that our grandkids can have a great future?
RAJEEV RONANKI:
00:44:07:22 One thing to add, Manoj. So, great question. In 2023, at the inaugural TED conference, I did a talk called ‘The Prompt Is You’. Nothing has really changed. The prompt is you; the prompt is all of us. And we have to take responsibility for that.
MANOJ SAXENA:
00:44:22:50 Last question?
MALE PARTICIPANT 1:
00:44:24:40 And Rajeev Ronanki has a great book that answers the question, if you guys don't know.
00:44:26:00 That's a small plug. I want to switch gears and ask a really important question. So, I'm Louis Lehot from Foley. And my question is as follows. I'm a Silicon Valley person. And I think it's very popular -- the only bipartisan consensus in the United States right now is, let's go get the tech bad guys and screw China.
00:44:46:14 Those are the two things that everyone seems to agree on. And one of those things is, let's go regulate AI. And my response is -- okay, maybe after we've figured out how to make money, that would be okay. But for right now, most of us are burning piles of money trying to figure this out. What are the things--
00:45:07:12 And I'm looking at the CEOs here first, and CPOs. What do we need from the government, other than leave us alone? Number one: how can they help us? And what should we in the tech ecosystem do -- because, as we learned earlier, we are entrepreneurs and investors in this room -- what do we want Washington to do to help us?
EVA NAHARI:
00:45:30:16 Give us money. No -- jokes aside, I think there's knowledge exchange. Like, tech people, some right out of college, starting companies -- woohoo. But there's an experience and knowledge exchange that's bidirectional.
00:45:55:03 We can teach the government about tech and educate them on what's doable and not. We can help in that process, while the government can teach us what's right or wrong, I guess, in the longer perspective, and bring the outside world's perspective into this bubble.
RAJEEV RONANKI:
00:46:12:13 Louis, I mean, it's not the government's money. The last time I checked, it's our money. And so, the loop here is, a lot of tech innovation comes out of defense spending, right. So, whether or not you agree with the outcomes of that spending, the research part of it is unquestionably innovative.
00:46:31:00 And that gets exported into commercial tech. And then we create profit. And then the government has to decide the policy framework for that. So that's all it is with AI. So, let's make sure the DARPAs, the defense spending that's happening on the research, also accounts for safety because safety is required for scale.
MANOJ SAXENA:
00:46:52:15 And we've been here before. With semiconductors we have done this before. When Japan was ascending, the US set up all sorts of things -- MCC, the consortium, was set up, industry and government collaborations, funding was brought about. The difference here is -- I don't call AI artificial intelligence anymore. I call it alien intelligence.
00:47:12:09 We have summoned alien intelligence, okay, because of the behaviors I'm seeing in the last six months. And I don't mean to be dramatic, but that is what it is. We do not know how this thing is going to evolve. So, what we need the government to do, from my perspective, is three things.
00:47:26:10 One is establish clear, fair frameworks of what good looks like. Okay? It doesn't have to regulate, but set a target. Second is foster a greater amount of collaboration between industry and government. Third is, as the biggest buyer, start putting procurement processes in place so that it's not only someone who can buy a dinner ticket who gets access to it,
00:47:46:05 but startups like Vectara get access to it. So how do we really foster those three things? We have done this before, in the ‘60s and ‘70s with semiconductors.
NATASHA ALLEN:
00:48:01:60 But I think—
MANOJ SAXENA:
00:48:02:30 One last comment and we'll wrap up. I'm getting that look.
NATASHA ALLEN:
00:48:04:50 Oh, sorry.
MANOJ SAXENA:
00:48:02:00 I don't want to mess with it.
NATASHA ALLEN:
00:48:03:02 So the one thing I would say is touching on the learnings. I think there are some states that have incubators where companies can innovate as much as they want. Right? But from that, everybody's learning in terms of where the guardrails and the safety nets are. So, I think learning is the key thing. And it doesn't have to just be in the US.
00:48:18:13 Talk to our friends around the world in terms of what are you doing, how are you addressing this -- because it is a global issue.
MANOJ SAXENA:
00:48:24:17 Absolutely.
00:48:25:23 Thanks to the panel.
As AI adoption accelerates globally, aligning governance across different regions is becoming more complex and more critical. We explore the evolving international AI regulatory landscape, where governments and standards bodies are responding to growing concerns around trust, accountability, and risk, while still being mindful of the need to enable innovation. Key questions explored include: How can we ensure that agentic AI aligns with our intentions and legal responsibilities as owners? How can companies self-regulate at speed, balancing safety with innovation? And what role should governments play in ensuring that trust and oversight are integrated into infrastructure by design?
Jenn Kosar, Partner, US Assurance AI Leader, PwC
Eva Nahari, Chief Product Officer
Natasha Allen, Partner, Foley & Lardner LLP
Rajeev Ronanki, CEO, Lyric
Manoj Saxena (Moderator), CEO, Trustwise
© 2017 - 2026 PwC. All rights reserved. PwC refers to the PwC network and/or one or more of its member firms, each of which is a separate legal entity. Please see www.pwc.com/structure for further details. This content is for general information purposes only, and should not be used as a substitute for consultation with professional advisors.