SHIRIN GHAFFARY:
00:00:03:20 Okay. Thank you everyone for being here today. I'm really excited for this panel. I will just briefly introduce our panelists. We have Robin Braun, VP of AI Business Development, Hybrid Cloud at HPE. We have Dan Priest, Chief AI Officer, PwC; Paul Stathacopoulos, VP of Product, Global Focused Categories and International Cross-Border Trade at eBay.
00:00:33:04 And last but not least, Prem Akkaraju, CEO of Stability AI. Thank you everyone. All right. So, I'll start off by saying that I hear all the time, and we all know, that the majority of Fortune 500 companies are experimenting with AI pilots, but we have fewer examples of clear ROI at scale.
00:01:04:10 What is one concrete example from within your organizations where AI has demonstrably moved the needle financially or otherwise? And how do you make that work even when you know where others have failed? So, I want to start. Well, we can start with whoever has one that comes to mind
00:01:23:15 and also Prem, I know that you can speak on behalf of your customers who are using Stability, who may have some use cases to share.
PREM AKKARAJU:
00:01:29:15 My future customers.
SHIRIN GHAFFARY:
00:01:30:20 Or your future ones. So, whoever is itching to start, please go ahead.
PAUL STATHACOPOULOS:
00:01:35:21 Maybe I can jump in. So, eBay is a two-sided marketplace. We have sellers that list on the platform. Listing is, like, the primary job of the seller, and sometimes it can take many, many minutes to actually list an item. About a year ago, we started building AI capabilities to help sellers list faster.
00:01:57:15 We now have listing down to seconds, not minutes. We've had about 300 million listings created using a variety of different models, and we have about half a million listings a day actually going live on the site. What this has driven is increased inventory on the site.
00:02:19:09 And driven increased overall GMV and revenue for the company. So, it is very material.
SHIRIN GHAFFARY:
00:02:24:19 And can you say what AI tools that you're using to do that?
PAUL STATHACOPOULOS:
00:02:28:12 For that? So, I can say in general what we work with. So, we have a whole host of internal models that we've trained on top of 30 years of eBay data. And then we do a ton of work with OpenAI, with Google, with Microsoft across the board.
SHIRIN GHAFFARY:
00:02:44:20 All right. Great.
PAUL STATHACOPOULOS:
00:02:46:04 So I got to get you past our security controls.
DAN PRIEST:
00:02:54:03 So, we started the AI journey, as we were talking about, a few years ago. And code generators were a first, highly industrialized use case. We spend millions of hours a year at PwC putting large applications into production. So that was one of the first areas of focus, and it was also the source of a lot of learning.
00:03:21:05 Not every team got the same result. The teams that did best pulled the best human talent and had pretty good data. Not perfect, but pretty good data. We use a combination of Microsoft Copilot, Copilot Studio, and OpenAI. We've got all the major models, but we were seeing 30% plus efficiencies in putting applications into production.
00:03:51:07 We just published a case study with Southwest Airlines where we did similar work. They were doing their crew management system. We did all the design work in about half the time and achieved a 90% acceptance rate among users for the user stories, which, if you've ever done design work, that's a pretty high success rate.
00:04:13:17 So, I know there's concern out there about ROI. I chalk a lot of that up to the fact that we're on an experimentation journey, and you need to figure out where it's ready for prime time. Experiments sometimes fail. That's not a reason not to move forward with your AI journey. There are some really hardened use cases out there that are producing pretty compelling results.
SHIRIN GHAFFARY:
00:04:39:06 Great. And what tools are you using?
DAN PRIEST:
00:04:41:08 So, OpenAI, Copilot Studio, Semantic Kernel. With clients, it depends on the choices they're making about their architecture. So again, Claude, Salesforce, AWS Bedrock. We'll use them all.
SHIRIN GHAFFARY:
00:05:02:04 Okay, great.
ROBIN BRAUN:
00:05:04:12 And when we look at it, it's not only external work with customers, but also internal, particularly from a development standpoint. People think of HPE as, like, the cute, blinky-light servers, but there's a lot of stuff that goes into it and a lot of software development that goes around it
00:05:27:18 when you look at the entire suite of what we're doing. So to your point, Dan, exactly: starting with our code generation, how are we leveraging GitHub, how are we relooking at our processes, not just the technology? Because I think we get so caught up in the technology conversation that we miss that we had to change that code process and that review cycle.
00:05:51:08 How do you leverage GitHub? How do you leverage the code generators to be able to move faster? That doesn't necessarily replace our developers, but lets them move at more of the speed of light that Nvidia talks about. From there, we've also done internal implementations like Chat HPE to speed things up, because if anybody works at a larger organization, finding information is normally the hardest thing we do.
00:06:15:16 Our internal webs are where information goes to die. We all understand that. So how do we make it easier to look up? That's not making a better directory structure. That's making a better interface for us to search with. We all just think about asking a question now. We don't think about the fact that it's nested five directories down.
00:06:36:17 And so I think that those are the things where you start to see that productivity improvement of just those day-to-day moments. And I think you made a great point about that measurement of, it's technology that works today, and it's those moments of efficiency that actually do two things. They help with the efficiency, but also lower the frustration of trying to get the job done. And that's hard to quantify, but an important part of what we look at.
SHIRIN GHAFFARY:
00:07:02:03 Okay, great. And Prem?
PREM AKKARAJU:
00:07:02:16 So one, I need to talk to your security people, because WPP is one of our investors, and they're actually the only investor in a generative AI company like ours, because we went through their process. We take security so seriously. But it's okay, I'll get over it, Paul. But, yeah.
00:07:23:19 Exactly, we'll talk. This is what we're meant to do. But I think there's certainly been no technology that I have seen that goes to prototype as fast as AI does. And I think that's why we see such high failure rates. Right. Because it's the fastest to prototype and the slowest to production. It's extremely difficult to get to a scalable product.
00:07:46:00 And that's why, with all of the customers that we've worked with. Yeah, I know I can't mention that. I think I can mention one for sure. Lenovo is already one of our clients, and that's been public. So we have several blue-chip clients, but we've found success with them because we have really embedded our teams with their teams
00:08:08:14 and come up with scalable solutions, where we really dial into the problem and put the work into fine-tuning solutions for their very, very highly specific needs. And I think MIT underscores this really perfectly with their recent report that said something like 95% of AI out there was pretty, pretty worthless
00:08:33:16 and only that 5% was really getting true ROI. And of that 5%, about 67% of them actually work with companies like ours that put embedded, forward-deployed technology into their company. And they're really finding the results. The other balance is highly technical companies.
00:08:55:18 And so they were able to do it themselves. But that's really where we've been finding the most success is really that highly customized work and really dialing in the solutions for them. That's the only thing that we've seen that provides scale. Otherwise, you just don't know what you're going to get.
PAUL STATHACOPOULOS:
00:09:18:00 Yeah, I think it's really interesting because there is not one model to rule them all. We've spent a lot of time at eBay trying to really understand what models are good at. Right. So, for example, OpenAI is really good at item detection. It's trained on the internet. You can point it at a picture of any item and it will give you a really good high-quality description and identification of what that item is.
00:09:44:20 And then there are other models that are really good at writing a listing or doing things like that. And even on the code side, 100% of our engineers are using some code-augmentation technology, whether it's Cursor or Copilot or something else. And even there, it depends on what part of the stack they're working on.
00:10:07:21 eBay is a 30-year-old company. Right. So, there's lots of proprietary code that sits around. And there we need to have models that have been trained on our actual code. So, we're spending a lot of time doing that. Whereas for the higher-level front-end code, we can use more public [unintelligible 10:24] to work on.
SHIRIN GHAFFARY:
00:10:25:14 Well, Prem, you had started talking about one of my questions, which was about these sorts of viral studies that have been going around, early lessons from these AI pilots. One being that MIT study; I think that was probably the one I saw going around the most.
00:10:41:04 There was also a recent Stanford and BetterUp study about AI workslop, showing that high percentages of the people they surveyed said the output from some of this AI actually took workers more time to sift through, because they didn't think it was the right quality, and therefore it was wasting their time.
00:10:56:22 So I'm curious, what are the discussions that, have these studies come up in your work as you're thinking about how to use AI strategically, and how do we reckon with these types of reports at the same time that we're all talking about the great promise of AI in the workplace?
PREM AKKARAJU:
00:11:12:11 I think Paul, you nailed it. It's pretty interesting. When I took over Stability, about a year and a half ago now, the first thing we did was start utilizing multiple models. So my key pitch to all of the companies that we work with is: pick the team, not the model, because there is no God model.
00:11:31:14 There's no one model that does everything. It's very strange, even, because each one is kind of overfitting for one particular use case; when one model does hands really well, the others don't, and these types of things. And so, understanding that was the first challenge. And then understanding what we have to bring in.
00:11:56:10 We use a lot of open-source models. We use our own model, obviously, but they're all based on MMDiT, which is the framework behind just about every single image or video model, and which we invented and own the patent for. Our team is really well versed in that framework. That familiarity gives us a huge advantage, because we can manipulate or really fine-tune these models really, really quickly, and we just pick the best of each one and assemble that into this cocktail of a solution.
00:12:29:13 And that's what companies pay for. They don't pay for technology, they pay for solutions.
SHIRIN GHAFFARY:
00:12:35:21 And, Dan, I know on our prep call, you were mentioning that study and you had some thoughts to share, so I'll call on you.
DAN PRIEST:
00:12:42:04 Yeah, I'm not a fan of it. It's a few hundred hand-raisers who said they've got an opinion about the ROI. It went back to the early days of gen AI and measured, I think, something like 350 examples. There have been tens of thousands of examples, and it's penalizing a lot of people for taking risks and experimenting at a time when we want to encourage taking risks and experimenting.
00:13:16:13 And I would encourage every single leader to have their aha moment, the moment that gives them conviction that this stuff actually works. I've had mine, and we're building it into client solutions. We're working with a $100 billion telco right now, and they are transforming their business with it. And you get really hardened results. There were just great lessons learned from all of that experimentation.
00:13:44:13 And now we're seeing it pay back. The study was emotionally resonant. I mean, everybody was saying, yeah, this is really hard to work with. But it kind of misses the point. We need to shift from being just efficiency focused to more innovation focused, and innovation requires you to experiment.
SHIRIN GHAFFARY:
00:14:03:14 Robin, anything?
ROBIN BRAUN:
00:14:04:21 I was also going to say, and you bring up a great point, Dan, around that. When we look at how AI is changing, the way we would have implemented something two years ago, or even a year ago, is not remotely how you would approach it now. And that's why we went out and released a 508-compliance offering with one of my Unleash AI partners.
00:14:27:15 And what I love about it is that it's using those multiple models on a very specific use case: helping people make their websites and information 508 compliant, so that it's accessible for people with disabilities. It is completely agentically done.
00:14:45:05 It's not something that if we looked at it a year ago, we could have solved in the same way. The models wouldn't have been ready, the technology would have never been ready. The ability to write agents in that way wouldn't have been ready. So that would have been a failure like two years ago, if you think about it.
00:15:00:03 But now it's fantastic, and we've released it out there as a joint offering that we can go out and talk to state and local governments about and how to make this economical and really look at AI for good, but doing it in a new way that when you look back over time, but all of those experiments that we've done before, now let's do this with success and confidence.
SHIRIN GHAFFARY:
00:15:22:04 How much of the challenge in adopting these AI tools is about workplace culture change, getting people used to using AI, rather than whatever tool or process you're using? And do you have some examples to share of when people were maybe resistant to using it, and how you're asking them to change the way they work?
ROBIN BRAUN:
00:15:43:01 Well, I would say that it's one of the most interesting things, I remember I was with a healthcare customer and we were talking, I was talking to their chief nursing officer and we were talking about like, kind of all of the efficiencies that they could gain. And she was like, you realize that the people we're talking about doing this, actually, their entire impression of AI has been formed by like, Hollywood?
00:16:07:21 And so they're assuming that this is this horrible thing coming to track them down and get them, as opposed to what we were thinking about, that it was just positive. So, I think sometimes, depending on the dystopian view they may have been given, we have to work with our people to help them not be afraid of it
00:16:32:08 because it's not taking their job, but also to help with the curve of: it's actually here to make things easier. And it's really interesting. I've just hired a team in Puerto Rico, and they are fantastic. All of a sudden, I'm getting all these meeting notes and all of these things, and everything's AI generated, and I'm like, this is fantastic. They came in, and to keep up with the speed we were going at,
00:16:55:22 they immediately adopted AI, and it was great to see kind of leveraging the tools to make their day easier.
DAN PRIEST:
00:17:02:23 Yeah. So, I often joke that, I spend half my time working on artificial intelligence and the other half of my time doing therapy on people reacting to artificial intelligence. There's just such a strong human reaction, but the lessons learned are interesting. So, there's sort of the citizen-led adoption, like trying to make it as intrinsic to the way we do business as possible.
00:17:29:13 And frankly, that's hard to get the big results that you can feature as, like, here's an example of ROI. That usually gives you a 5% productivity gain, and you're not going to do much with that. People will go home a little bit earlier, they'll have a little bit more time for their coffee break. They'll be happier, which is good.
00:17:49:07 But you're not structurally changing the business. So there's been a flip, where the leaders have to understand their strategy. They have to understand what's going on in their industry, and they have to pick their spots. And when they pick their spots, it's not just what proof points exist in the industry. It's got to be, where do I sit?
00:18:10:21 One of our lessons learned is pull your best talent. If you're just giving AI to the person who's available, there are some really hard engineering problems that need to be figured out. There are some really hard business problems that need to be figured out, and you need leadership behind it. And you need great talent. In this age of AI, talent has never mattered more.
00:18:27:20 And so find those spots where you've got talent, deep domain expertise, and pretty good data and some evidence in your sector that this focus area will pay back, and then you can go after it aggressively. And you need those proof points within your own organization to win hearts and minds.
SHIRIN GHAFFARY:
00:18:48:01 And what advice do you give people who are coming to you for therapy about their existential crises around using AI?
DAN PRIEST:
00:18:54:11 To use it, but not just use it. Be imaginative with it. This is my bone to pick with the lack of experimentation. I say go out and experiment and be okay with failing. Part of the role of the leader is creating a safe space to experiment and to fail. And if you're doing that, you're learning.
00:19:15:23 And you're going to be a whole lot more creative about how you apply it. And frankly, not everybody has to be an AI expert. You just kind of have to bring down the barriers of resistance.
SHIRIN GHAFFARY:
00:19:27:21 And I think a big fear that everyday people have when I talk about AI in the workplace is, well, am I just going to be using a tool that will eventually replace my job? And so, I wonder how you all are sort of handling that anxiety and if it has impacted hiring at your own workplaces so far?
DAN PRIEST:
00:19:46:05 I mean, again, going back to the leadership, and this will be a theme. I got advice a long time ago, doing a big transformation, from a CEO who said, be clear about not just what will change, but what won't change. And I think that's very true. In this age of AI, you have to know what the enduring value proposition of the human is, and be clear about it: relationships, decision making, the creative process, strategy.
00:20:16:20 There are all sorts of qualities that humans possess that the AI doesn't, and you can build a moat around that. And then once they know, okay, I've got a value proposition and it's going to endure to the end of this AI journey, they're going to get a whole lot more creative around everything else, and they will disrupt in a way that creates value, but also feel safe, because again, they know they're going to be a part of the end solution.
SHIRIN GHAFFARY:
00:20:46:06 And how about the rest of you? How are you dealing with that?
PAUL STATHACOPOULOS:
00:20:50:00 It's interesting for us. I mean, it's like any adoption curve. Right? Everyone starts with fear of the unknown, and everyone has the view of the dystopian future and those types of things. But I think you have to do a few things. First of all, in the company, you have to just celebrate the use, the success, the failure.
00:21:14:03 Like, literally on stage in the company. Everything from interns using it to build a prototype to lawyers using it to review contracts, like every aspect of it, to not just expose people to it, but to actually show them where this innovation is happening at a very deep level.
00:21:36:12 And then I think there's an interesting conversation to be had around what AI is actually used for and how it changes the actual role that you're doing in the company. The first step in adoption of anything is you use it to replace the things that you've done naturally as part of your job.
00:21:56:04 So product managers, right. PRDs. Now we're going to have ChatGPT help me write PRDs. But we're still using PRDs. And the conversations we've started to have are more around, if you leapfrog past this, what is the actual model? Because we're not going to keep doing the same workflow that we've done for the last 30 years in tech, right?
00:22:20:05 We're going to develop a new workflow and you may not have a PRD the product manager is building. You may have an agent that is actually creating a data document that the models can actually take and use to help produce the code to create the first prototype that you're then going to put through a design review.
00:22:34:20 Right? So, you definitely have to think about it differently and try to step past the point that we're at today, where we're just trying to replace the very simple steps that we're doing.
SHIRIN GHAFFARY:
00:22:50:04 And Prem, yeah, with you. Filmmakers have such pride in their craft, right? So, how do you and…
PREM AKKARAJU:
00:22:57:04 I was about to say, there is perhaps no greater resistance to AI from any industry than the filmmaking industry, which is the industry I'm from. An added challenge is that James Cameron, who created Terminator, is actually one of my investors on my board. That's not helpful every single time.
SHIRIN GHAFFARY:
00:23:18:13 Robin's favorite movie.
PREM AKKARAJU:
00:23:21:14 Yeah, exactly. Great movie. But, so yeah, not helpful sometimes. I can tell you, though, I love what you said about what's changing and what's not going to change. And it's funny, because I'm from the traditional filmmaking world, visual effects and animation.
00:23:41:06 Now I'm going back with AI tools and I had software tools before. By the way, nobody resisted my software tools when I was the CEO of Weta. But now this has like this kind of amplified fear factor. This is the way I pitch it. I pitch it very simply. And I have two kids, and I just say, do you want your kids to do this job?
00:24:03:21 If the answer's no, then an AI should do it. And it's just, people get it right away. And so, for example, in the filmmaking world, there's a thing called rotoscoping, which is like the most brutal. Yeah. You know, okay, I love it. There's always someone who knows exactly what I'm talking about. And you wouldn't want your worst enemy to be a rotoscope artist.
00:24:26:12 It is an entry-level thing. Just think about doing something by hand, pixel by pixel, frame by frame. And there are 24 frames a second. It's a brutal, brutal thing, and no one ever said, I can't wait to be a rotoscope artist my whole life. No one said that, ever. They do it as a means to an end.
00:24:46:07 And so that's a great example because they want to be an artist, they want to be an animator. They want to be a creator. They want to put scenes together. They want to be an assistant director. They want to elevate their job because they want to take more rudimentary work out and more creative work in.
00:25:03:14 So that's a great example of, okay, I want an AI to do that, and I want my kids, or myself, to be doing the more creative work, which is, of course, why they started to begin with. So that has really resonated well, I think, in the industries that we're focused on, which are film, gaming, music, and marketing and advertising.
SHIRIN GHAFFARY:
00:25:29:20 So since this whole generative AI boom kicked off really a little less than three years ago now with ChatGPT, where have you seen the technology for actual enterprise use improve the most? And what's sort of on your wishlist where you're like, it's still getting this wrong. I wish it would get better at that. Does anyone have?
DAN PRIEST:
00:25:51:21 So for us, it started, as we talked about, around tech delivery, putting assets into production with code generators, but then it quickly moved to the front office, around customer service. And we all heard examples of hallucinations and the agents getting it wrong as they supported customer contacts. That has gotten really good.
00:26:16:03 And so, we did work with Wyndham. Scott Strickland, the CCO there, sponsored a project. I think that's a Salesforce on AWS architecture. And when I recently checked in with them, about a third of all contacts, so these are guests calling into their contact center, are handled by agents autonomously.
00:26:47:09 And that, I mean, that's pretty big. And the customer satisfaction is high. Resolution times are better. Fewer escalations. And then here's what they're moving towards. Humans process information very efficiently, but our conscious minds have a harder time with a lot of data coming at us at once. So, when you're dealing with a customer, your conscious mind tends to zero in on one thing.
00:27:22:04 Agents are really good at paying attention to all the data flowing in that moment. If the customer is leaning in, they're more engaged. If they're hovering on something, they're engaged. You can hear intonation signaling frustration. And the agents can now start to coach the human rep: flip the script, try this, they're engaged, convert. Right.
00:27:46:19 There's coaching happening real-time. And so, that's the next phase. And you can see some great examples out there where that real-time sentiment analysis, the behavioral indicators that evidence intent to buy are being used real-time. It's not just stored data, it's real-time data flowing in very effective ways that grow the top line.
00:28:10:16 And I think that's innovative, right. That's very cool.
SHIRIN GHAFFARY:
00:28:15:02 Great. Anyone else have examples where the technology has really improved, and where it's still lacking? Anything on the product wishlist?
PAUL STATHACOPOULOS:
00:28:24:19 I think for us, I mean, on the code side, hugely productive for the teams, once engineers realized that they could shed the tasks that they didn't want to do. Early in my career, engineers spent, I don't know, 80, 90, 100% of their time actually writing code. Today they don't. They spend, I don't know, ten, 20, 30% of their time writing code.
00:28:50:20 The rest of the time is in meetings, doing PRs, setting up deployments, doing all these structural things. And we're building agentic approaches to the majority of that, so that engineers can get back to actually being creative: building things, coding, collaborating. So that's been really powerful for us.
00:29:13:15 I think some of the surprises for me have been seeing it go into other parts of the back office, into legal analytics, parts of our front office, and then some of the areas where we're starting to experiment with multi-agentic approaches, where you have multiple agents doing different tasks on the same job.
00:29:33:00 Right? So, one is actually being the critic for the first agent. So, using it to control hallucinations, and using different models to do that work. We still need to fix the cat fingers; occasionally, we get an extra leg showing up in an image that's generated. But I think overall, the hallucinations have dialed down, and we've learned to put a human in the loop in the places where we're worried about it.
PREM AKKARAJU:
00:30:00:09 We definitely have humans in the loop for a lot, for everything that we've done, even for our enterprise clients. Right now, there's always a human in the loop, for that very reason. And it's funny. It's a hallucination when you don't like the output, and it's just creative when you do like it. And so that's like the catch-all thing.
00:30:23:04 So it's not a surprise that coding models are so good. It's coders that created them. And certainly with us, that's why you have people like James Cameron on the board and highly involved. I've worked with him for ten years, and with other creatives in the company, because we don't believe we can create creative product without creatives in the loop from the beginning.
00:30:45:03 And that's why we don't have just AI researchers and engineers; we weight the art and the science equally, because we think creatives are the ones who are going to build incredible creative models, in the same way that coders created such incredible coding models.
PAUL STATHACOPOULOS:
00:31:03:08 I think it's interesting. There was a story that came out this week from OpenAI about how they're teaching AI models to actually recognize when they're wrong, right, to have some humility. As human beings, as children, we learn through trial and error, and lots of feedback, physical, emotional, those types of things, to hopefully become human beings that are self-aware and willing to admit our mistakes.
00:31:30:06 AI today is not great at admitting when it doesn't know something. It's like the greatest BS artist in the world, because its entire job is to find the next word, or the next pixel that it's putting on screen. And so, I think another breakthrough in AI is actually going to be how we bring humility into the models, so that they can recognize when they're wrong and then decide that they actually have to go learn something new to take the right action.
SHIRIN GHAFFARY:
00:31:59:10 Yeah. Robin, anything to add?
ROBIN BRAUN:
00:32:02:02 Yeah, I would say that I don't really hear so much about hallucinations, other than from customers who are scared because they're still hearing what we were talking about two years ago. And yes, it was highly creative, but maybe not always necessarily directionally correct. When I look now, I don't really worry that much about hallucinations in most of what we're doing, because of the agentic approach,
00:32:36:07 because of being multimodal and bringing in multiple models and building it out in a different way. And I think the models are better, and our approach is better. As we continue to do that, I'm not hearing about and not dealing with the same challenges. But I think it goes to some of the fear we were talking about before, because we're all living it.
00:32:58:17 So we see that what we were worried about a year ago, we're not worried about now. But if you're not mired in it, all you hear are the headlines, and those headlines can be scary. Versus the reality I see, which is that the faster we can go with the agents, the better, because it's just making things better and more productive for everybody.
DAN PRIEST:
00:33:17:16 And if I could just add, it is a great point. So, I was talking before about the lessons we learned around putting big assets into production. I kind of bucket people into three personas: AI skeptics, realists, and zealots. And you can start to spot the skeptics, because the second they hit a speed bump, it's, see, the technology's not ready for prime time, and it becomes an excuse not to move forward.
00:33:46:16 The zealots, for sure, dig in. They roll up their sleeves, they get creative about how to solve that problem, and they usually do. And so you kind of have to avoid that. I mean, the model accuracy, drift, hallucinations, deception rates: they're all getting really good. They're not perfect, but they're good enough to build industrial-strength solutions on.
00:34:12:23 But there are still hard problems to solve. And you can't look at every single problem as an excuse not to advance.
ROBIN BRAUN:
00:34:20:01 One thing that I think is really interesting is that we hold AI to a different standard than we hold humans, because humans actually aren't 100% correct, no matter how often we think we are. And so, we're holding AI to this standard because it's bits and bytes and it's supposed to be like ones and zeros, so we think it should be 100% correct or it's not.
00:34:43:21 And I think that that is to me a really interesting kind of engagement that we have with the machine: if it's not 100% right, then it's wrong. Versus even the best radiologists miss things. Like, when they do the study, they're actually very close, but nobody's at 100%.
PAUL STATHACOPOULOS:
00:35:05:08 Yeah. It's interesting, because we took this leap, I think, in that judgment, in assuming that machines should just operate by themselves. And so, our criticism is, well, they're getting it wrong, we have to put humans in the loop. But the reality is, as human beings, we always have humans in the loop. Right. And so, we should just assume that that model has to continue, to some extent.
SHIRIN GHAFFARY:
00:35:30:17 I do want to leave some time for audience questions. I know people probably have a lot. Should I just call on folks or? Okay, great. So, people can come up there to the mic.
AUDIENCE SPEAKER:
00:35:53:19 It's kind of a two-part question, but I'll try and package it. First off, there's this interesting irony where this is the most powerful automation tool ever. And several of you started off by saying it's kind of a forward-deployed engineer model where you got to get really high touch.
00:36:14:02 Is that the permanent state of things? And then secondly, kind of on the lower end at entry-level jobs, that process of doing all that scut work that it's going to replace was also a process of acculturation into an industry and into a specific company. How is that going to take place in the future?
PREM AKKARAJU:
00:36:39:09 I'll do a really quick one. The first one, I think, is no, it won't be like that. I think you're going to be able to create a lot of scalable products that come out of that forward-deployed model, for sure.
DAN PRIEST:
00:36:51:22 I was going to take the second one, on the early career-stage worker. It's fascinating. We have a lot of interns and associates right out of college, and they want so desperately to have the best AI tools to work with. It's exactly what you were saying before, Paul. Like, they don't want to be bound by the legacy ways of working, right?
00:37:18:16 They want to go right into the future. And it's frankly admirable. They are going to be the generation that's most disrupted by AI, and they are the most eager to work with it. They are the most change ready and they are the most AI savvy. And so, we still have to figure out, like, what does the future of the workforce look like?
00:37:42:20 But I am telling you, I would bet big on the next generation that Gen Z, Gen Y, they are phenomenally good with AI. We have to figure out the right apprenticeship model. So, they do acculturate, so they do develop those hard skills that we want them to work with. But that is a generation worth investing in.
AUDIENCE SPEAKER:
00:38:04:03 Hi. I heard moments of efficiency, Robin. I heard Shirin saying, work slop and on the coding side, now the load has shifted to the reviewers rather than the coders. And then you said that the barrier for entry, like for prototypes is low, so you spend a little time on prototyping, but take a long time to make it to production.
00:38:26:06 When something truly works, how do you recognize that? How do you measure that something is truly working outside of the prototype?
DAN PRIEST:
00:38:34:11 So, I can tell you, we're doing a lot of transformation programs right now. And one of the very first things that we had to do and get better at was instrumenting this journey. And what you should see is, workload, just sort of conceptually, will remain static, right? It will remain constant. And you baseline it.
00:39:00:06 Right. So, I was talking about contact centers: the number of contacts, the number of customer contacts. That's a workload. Let's say that's static. And then you should see the number of FTEs and the amount of effort associated with that workload coming down. And you should see the performance I talked about — fewer escalations, faster call resolution times.
00:39:21:15 You should see the performance going up. All of that needs to be measured, especially around the ROI point you were making earlier. Like, what was all this worth again? Right. There's a discipline about how you transform measurement as a part of it.
PREM AKKARAJU:
00:39:36:07 I can tell you, we have very clear metrics on what success looks like for each client. And we want to make it that clear. One of our clients is an eyeglass, or glasses, manufacturer, and we're very clear: they want to go from nine photo shoots down to one photo shoot a year. And that is extremely measurable.
00:39:57:11 We know how to work against that. The ROI is built in, but it's a very clear metric on success.
ROBIN BRAUN:
00:40:04:15 And I was going to say that I think it's also that definition of what is that product doing and what is your goal. Because, like, when we were releasing 508 compliance, it was: can we get the website reviewed, remediated and done, and how long does it take? And there were very clear metrics to hit on the correctness, on the scale and on the timing.
00:40:27:09 And once we could hit that, we were like, yes, it's ready, it can go. But I think it's being clear about the use case of what you're actually trying to do and not making it too fluffy and squishy. Those are really technical terms. But what are you trying to achieve? And then how do you know? And then what are those measures to get there?
PAUL STATHACOPOULOS:
00:40:47:16 Yeah, I think it's interesting, if you separate execution to production from prototyping and early concept phases. I think it's actually much easier to instrument the former, right? Instrumenting the execution-through-production side, because you have a space that you can measure, and you can try to reduce a number or increase a number, whatever it is. The front end,
00:41:13:04 it's really interesting, because in some respects it may go quicker, but we may look at ten prototypes or 20 prototypes in the space of time that it would have taken one designer in Figma to create one set of screens for us to look at. Hey, one of the favorite things that I've seen recently: we do an intern program every year.
00:41:36:04 At the end of the intern program this year, we took 100% of the interns and we put them in a hackathon together. So, everyone, like finance interns, everyone. They got up in front of the executive staff of the company and they demoed what they'd worked on, their projects. And at the end of one of them, it was beautiful what these folks had done in this one project.
00:41:59:14 And at the end of it, the two finance people came out and they're like, we built the entire front end.
PREM AKKARAJU:
00:42:06:10 Okay.
PAUL STATHACOPOULOS:
00:42:07:11 Yes. Right. Because they had tools that helped them code. They went to engineers, who helped them fine-tune the code, and then all of the creative and screen development was actually done by them. And they said it was wild, because it unlocked this creativity that they didn't know they had in their skill set.
00:42:25:07 And I talked to them afterwards and they're like, now we're excited to go think about our domain and what we work in. And how do I now build tools to help me do my job? It's fascinating.
SHIRIN GHAFFARY:
00:42:39:22 Okay, last question.
AUDIENCE SPEAKER:
00:42:41:12 I have to make sure it's a good one. Over the last few months, I've been quite fascinated by the idea of creating your organization's digital twin so that you can just deploy your agents. Maybe they won't work today; they'll work six months down the line. I wanted your advice for people in the audience: in your experience, for which industries' use cases would it make sense to create your own digital twin today? Where would it not make sense? (Unintel Phrase ___43:14) have?
ROBIN BRAUN:
00:43:12:19 Well, I think digital twin is like one of the coolest and sexiest things to see, like, to demo. But I think there are real questions, to your point, around what is a practical use of it. Manufacturing floor, like, that screams, why wouldn't you have a digital twin? But it was interesting.
00:43:33:15 I was talking to an AI officer for a construction company, and he's like, why don't we provide a digital twin, as opposed to just the blueprints, for every building or every port, as they do large-scale construction? And that starts to become a part of how people want to interact, because, like, if you have a port,
00:43:54:02 there are going to be so many things over time that you're going to want to look at and change, and you want to be able to model that and understand it in kind of a digital twin. So I think manufacturing was an obvious one. That's huge.
00:44:08:12 But as you start to build that out, the digital twin of the body, how do you start to look at that, from being able to look at cures and things of that nature, to the manufacturing, but then across to buildings. And when you start to go across, I don't know that there's something I wouldn't do. I think it's a question of what are you doing it for, to be able to then get value from it.
SHIRIN GHAFFARY:
00:44:31:15 Thank you everyone. Thank you to our panelists.
While AI experimentation is now widespread, many enterprises find it challenging to transition from pilots to production at scale. This panel explores why a one-size-fits-all model doesn't exist and highlights the need for a tailored blend of models, architectures, and domain expertise. Speakers discuss lessons learned from early failures, and the growing importance of metrics in demonstrating value. Key questions include: How can you maximise ROI on an investment in AI, even where others have failed? Which research into AI in the workplace should you look to when developing your own strategy? And where are businesses still making the greatest number of mistakes in their application of AI tools?
Meet the panellists:
Dan Priest, Chief AI Officer, PwC
Paul Stathacopoulos, VP of Product, Global Focused Categories and International Cross-Border Trade, eBay
Robin Braun, VP, AI Business Development, Hybrid Cloud, HPE
Prem Akkaraju, CEO, Stability AI
Shirin Ghaffary (Moderator), Technology Reporter, Bloomberg News
© 2017 - 2026 PwC. All rights reserved. PwC refers to the PwC network and/or one or more of its member firms, each of which is a separate legal entity. Please see www.pwc.com/structure for further details. This content is for general information purposes only, and should not be used as a substitute for consultation with professional advisors.