What to know about using GenAI in the pharma and life sciences industry



Tune in to hear PwC specialists discuss using GenAI in the pharma and life sciences industry. Topics include:

  • Key considerations for new technology and deploying GenAI responsibly
  • Key elements to consider for monitoring regulatory activity
  • The future of technology, adoption and upskilling within organizations

Topics: GenAI, Artificial Intelligence, AI, Generative AI, technology, workforce, products & technology, digital, transformation, strategy, data platforms, automation, upskilling, governance, regulations, regulatory, regulators, pharma and life sciences, pharmaceutical and life sciences, pharma

Episode transcript

Find episode transcript below.


00:00:04:11 Welcome to the Next in Health Podcast. I'm Igor Belokrinitsky, a Principal with PwC Strategy&, where I get to help leading health organizations with their strategies and operating models. And on the Next in Health podcast, one of our favorite topics is how technologies can help health organizations create ecosystems and solve some of the biggest and most pressing problems that are facing us.

00:00:27:19 Problems like healthcare affordability, equity and trust. And today we have a big one. Today we have generative AI specifically in the pharmaceutical and life sciences industry. We’ll have another one of these conversations about payers and providers. But we're excited today to tackle this topic and we couldn't have asked for better guests and leaders to talk to us about it

00:00:50:11 today. We're excited to have Ilana Golbin Blumenfeld and Sid Bhattacharya, both leaders in the space at our firm, and I'll let them both introduce the topic and themselves here in a second. But what we're going to do today is essentially every conversation we're having out there with industry leaders, unprompted, they're asking us a bunch of questions about Gen AI, and I'm going to ask those same questions to Sid and to Ilana today.

00:01:16:19 So it's going to be an exciting conversation. So welcome both to the podcast. Ilana, why don't we start with you, just a few words of introduction.


00:01:24:11 Thank you, Igor. It's great to be here with you today. I'm excited to talk about this topic with you and Sid. My name is Ilana Golbin Blumenfeld. I'm a Director in PwC’s Products and Technology Group. I'm a data scientist by training, and I've been working in the AI space for well over a decade. So, it's been really exciting to see it get the attention that I think it deserves.

00:01:42:18 Over the last few months and year especially, I wear several hats for the firm, but one of them is that I lead our work in the space of responsible AI. I have been driving our firm's perspectives, positioning and capabilities in that space since 2017, which again I think is really important, a crucial capability for us to incorporate if we want to get the value out of the AI systems that we're implementing today. Thank you for having me.


00:02:04:10 And Sid, how about you?


00:02:05:23 Yeah. Thanks, Ilana. Hello everyone. My name is Sid Bhattacharya. I'm a Principal in our Cloud and Digital Practice at PwC, and currently I'm leading our generative AI team focused on the pharma life sciences space. And the way I describe my role is essentially to figure out how we can use AI, including generative AI, to transform the way pharma life sciences do their work.

00:02:26:25 Part of that role for me is not only to talk about the benefits of AI and generative AI, but also to keep it real, not overhype the technology, and provide a balanced view. So, looking forward to the conversation. And Ilana and I have had several experiences doing this with other industry organizations.


00:02:44:15 Excellent. Excellent. Well, speaking of keeping it real, Sid, I want to start with you. We were talking to an industry leader yesterday, a physician. And she said that there are a bunch of companies out there that are coming and they want to do Gen AI. How can I tell what's real from what's vaporware? How can I tell which use cases, which applications truly lend themselves to Gen AI solutions and which ones are just hype?


00:03:13:25 Yeah, that's a great question. And this is a problem that we are seeing across the industry. And it's not just small startups; we are seeing the large cloud providers joining this game with industry. Everybody wants a piece of this. The way we are advising industry leaders in this space is: one, this technology is relatively new. It has been around for a few years, but it has matured over the last year or so.

00:03:37:04 So it's relatively new. There's a tremendous amount of hype around the technology, but there's also a lot of reality to it. I fundamentally believe over the next two to five years, this is going to change the way we operate as a country, as an organization, as an individual. We are going to have our own AI companions and so on. So this is real.

00:03:57:01 At this point, the way to winnow through the hype is to focus on business value. Do not try to buy a technology. That technology comes and goes. Try to apply it, try to understand the question it is trying to answer. So in the pharma life sciences context, for example, can I use this technology to make my clinical trials go faster?

00:04:15:14 Can I use this technology to make my marketing content generation smoother? Try to focus on companies that answer key questions and provide solutions, versus trying to come and sell you a technology. That would be my quick advice. Ilana, anything to add here?


00:04:31:19 I think that's fantastic advice Sid. And I want to jump on what you mentioned, that there's quite a bit of hype around this and we might be focusing too much on tools and not necessarily enough on solutions. There is a, I would say, a sensation or some type of belief that generative AI will solve all problems that businesses have.

00:04:48:14 And because they adopt generative AI, they no longer need to invest in core data platform capabilities, core infrastructure, other types of modeling. I disagree with that. This is a powerful tool. It's a powerful capability. But the same solutions that we needed for predicting specific outcomes or providing us with automations in existing workflows or processing data in different ways, those still exist, those still are needed, and in fact, many of them are necessary in order to get the value from the generative AI systems that we're looking to implement.

00:05:18:15 So think about all of these as portfolios of capabilities that we could implement, and I think Sid is spot on; focus on the business value, focus on the problem that we're trying to solve, focus on the transformations that we want to see and then back into which technology solutions can give us value and are mature enough to give us value for each of those.


00:05:38:01 That's fantastic advice. And both of you have mentioned how powerful the technology is. And so, Ilana, give us your thoughts on the responsible use of this new power that organizations are going to have.

00:05:53:01 And I watched the video yesterday, along with the rest of our firm, of you describing how Gen AI is different from other types of AI: because it is more creative, because it is more democratic in some ways and more people can use it. So it is a new kind of power, available to more people than ever. So how do we deploy it responsibly within organizations?


00:06:14:26 Ultimately, I think this is one of the most important questions that we need to address as enterprises looking to adopt this technology, because if we can't implement it in a way that's consistent with our practices, with our standards, with our governance needs, then I think we're likely to really miss the mark and see this technology fail in practice.

00:06:33:13 That's my fear. So I think we need to have these capabilities in place to get that value. And you mentioned something which I firmly believe, which is that this technology is more accessible than other forms of artificial intelligence. Most AI systems historically were built by core teams of data scientists or maintained by super technical folks that are sat off in a cave somewhere coding.

00:06:53:12 And I'm one of those people, so is Sid, so we can say that. So that is fundamentally different now. The capability is designed in such a way that you can engage with it in your own natural language, which breaks down a lot of barriers. That's just one of the reasons why we've seen so much excitement around it over the last few months especially, and why I believe we'll continue to see excitement over it going forward.

00:07:14:03 However, with democratization comes the realization that people are now using this technology who have not had deep training, upskilling or other exposure to it before, and therefore may misinterpret what it's intended to do and how to use it intentionally.

00:07:32:03 So, for example, there have been a number of public failures we've seen in the market where organizations have tried to use certain types of generative technology to drive a specific outcome.

00:07:42:28 But because they didn't realize that they now needed to be in the role of an editor and fact checker, or that they needed to reevaluate everything the model produced, double check the code, make sure it's not passing sensitive data back to the internet, for instance, they exposed their organizations to risks they just did not expect to have to evaluate before. And so, some of these are new risks that organizations have to contend with, like making sure you don't pass proprietary information to third party organizations by using public platforms.

00:08:13:03 Some of these are existing risks that have been expanded, like the accessibility element here, where now everybody can use this technology. But of course, we've been on that democratization of technology journey for some time now. And then other risks, I think are actually standard risks that we've seen before. Models historically have always been critiqued or criticized for potentially having some type of systemic biases embedded within them.

00:08:37:08 Generative technology just does that at a different scale. So how do we then think about fairness and representation in the systems that we use or explainability risks? These models are incredibly opaque. Very few people on the face of the planet really know how they operate to a level that they can build extreme comfort with every single component of that infrastructure.

00:08:57:25 So how do you build trust with a system that you did not build that you can only stress test within a specific environment? It's feasible.

00:09:05:25 It's something that organizations are driving toward, but it is still an outstanding question. There isn't one technique or one capability that will solve that problem, and I think we'll continue to see some of that as well.

00:09:15:27 But bigger questions too, around which data are we going to be passing to these systems? We're talking about patient data and very sensitive information with very sensitive individuals with specific outcomes that if they're treated improperly, could have really terrible consequences for populations. So how do we treat these groups of people, our constituents, with the care that they need, even when we consider the technology that we're bringing into the organization?

00:09:41:22 So all of this comes down to governance, which is a loaded term, means a lot of different things to a lot of different people. But it's really about instituting standard practices across an enterprise. So we're answering questions consistently. We're ensuring people go through appropriate training and upskilling before they get access to certain types of general-purpose technology. We have structures and standards in place for how we evaluate third party solutions that come into the four walls of our organization.

00:10:07:11 We put our patients first, our constituents first, our employees first when we think about data and how the data is going to be used. We think about the evolving regulatory climate and what type of requirements that might impose on us and ultimately, we prioritize use cases that serve our business objectives the best with the least amount of risk.

00:10:26:04 And honestly, there's so much low hanging fruit in the generative AI world right now. We should be going after that and not the really, really high risk use cases where we just don't have complete visibility into all the ways that we could potentially constrain them yet. That's my feeling on the matter.


00:10:41:27 Very helpful. And Ilana, you mentioned regulators, and I want to talk a little bit about that. And pharma life sciences, already a highly regulated industry, but this new space of generative AI is bringing in a lot of regulatory attention. There's speculation that we may see a major piece of legislation in the US that regulates the use of AI next year.

00:11:04:21 And obviously pharma life sciences operates globally and different countries and continents are adopting different stances to the use of AI and generative AI in particular. So in this uncertain environment where things might get a lot more regulated, how should organizations think about their plans, their priorities, their investments and kind of this having one finger on the fast forward and another one on pause? How do you live in this world?


00:11:33:17 In the pharma life sciences space, the FDA in the US is the biggest regulator, and they have had a love-hate relationship with the industry in general. But when it comes to AI, including generative AI, we see the FDA as being one of the pioneers in this space. The FDA has provided very clear guidance on AI that essentially helped the industry adopt some of these AI technologies early on, and we see that continuing to happen.

00:12:01:00 The FDA is going to be at the forefront, and EU regulators, along with other health regulators, tend to follow what the FDA does for the most part. So we see promise there too. But talking about the broader regulatory landscape in terms of bills and actions in Congress, that is something to watch out for. What Congress is expected to do, or hoping to do, over the next few years will define not just how we adopt AI in pharma life sciences, but more broadly across industry and across the country.

00:12:31:00 So, a lot of actions happening in the regulatory space. At this point, we see the FDA as an ally. They're willing to listen, they're willing to understand, they're willing to partner in the evolution of this technology. But the Congress, the bills, not just in the US, the EU AI Act and so on which Ilana can talk about, that continues to be something that we should monitor.


00:12:52:11 And Sid, maybe to piggyback off of that, there are a few key themes that seem to be driving some of the regulatory activity. One is a risk-based approach to governance. That's something that the European Union is driving with the EU AI Act, but it's also a framework that's been advanced through NIST in the AI risk management framework that they've been circulating and revising and will continue to revise, meaning that higher risk systems should be subject to more requirements, and that AI systems should really be treated as socio-technical systems.

00:13:22:15 So not individual applications, but recognizing that those applications reside within a broader ecosystem where people are impacted both in the intake of data and in how the outcomes of models are effectively used. So the socio-technical system and risk-based approach is one major thread. The second one is privacy. There are a number of US states that have been advancing specific privacy laws and they'll continue to do that.

00:13:46:21 We obviously have the European GDPR framework as well, and all of the additional sensitivities that are layered on top of patient data with HIPAA and other related regulation. All those activities are continuing to advance at a very rapid clip, and many of the recent privacy laws have included specific provisions around how that data will be used in AI systems.

00:14:07:03 So those regulators are incredibly aggressive and forward looking about which data will be collected and what data we can use to serve which types of decisions. Related to that, you have agencies like the FTC, which I know is not necessarily specific to the health care space, but broader around consumer protection, that have been incredibly vocal about their ability to govern AI under existing frameworks.

00:14:29:18 So they have been going after a number of organizations that they think have improperly used data to feed AI systems, and a number of those have been in the mobile application space, which does have an overlap with work that we do on the health care side. So that is another trend on the privacy side: really looking at the data that we're trying to use and for what purpose.

00:14:48:08 Another major theme is bias and discrimination. A lot of systems are used to make some type of a determination whether or not you get a loan or what type of pricing you'll get in a certain policy or even potentially what prioritization you'll get from a care perspective within a hospital system or which types of drugs get prioritized for clinical trials.

00:15:09:21 So there are a number of ways in which AI systems are being used to drive specific, more discriminative decisions, and where they impact specific individuals, regulators have put a strong focus on those types of laws to ensure that these systems aren't advancing any type of discriminatory practices that we would view as inconsistent with what we actually need in today's society.

00:15:31:17 So that is another big area of focus. And you see that not just in the health care space but all over. New York City has a law that they put into effect earlier this year that requires all employers in the city of New York to undergo independent bias audits of their algorithmic decisioning systems that impact who is ultimately hired or reviewed for hiring in their organizations.

00:15:53:23 So anybody who's hiring in the city of New York now has to comply with that law. These are major themes that will continue. But the last thing I'll say is that many regulators recognize that they don't have a complete vision for how organizations are using AI today or how they will want to use AI in the future. So a number of them have released RFIs or working groups are asking for public comment.

00:16:16:23 And I think it is our responsibility as enterprises to see how we want to contribute to some of those open forums for comment. Regulators don't know what they don't know, so if we don't help them understand, they will progress legislation that is inconsistent with what we can comply to. And so, we need to engage with them if they're asking for our comments. And there are a number of ways to do that across a number of industries today.


00:16:38:18 Very, very helpful. Ilana, you talked about the responsibility of enterprises and the responsibility of the leaders of those enterprises. And it seems like at the very least that these leaders have a responsibility to be informed about generative AI, about its possibilities, about its risks, and then to have a perspective and to lead the organization in a particular direction and bring their teams along.

00:17:02:21 And you are both doing this with organizations out there, as well as with PwC itself and its tens of thousands of employees, and helping us upskill. So for leaders of health organizations, pharma and life sciences organizations out there, how should they be thinking about developing their own understanding in the space and bringing the rest of the organization along and upskilling everyone, informing everyone, arming everyone for the future?


00:17:33:23 This is a discussion we consistently have with leaders in the industry, given the newfound accessibility of generative AI, right? Like the fact that you can talk to an AI in natural language, and it can help you do your daily tasks better is tremendous from an adoption point of view. But with that, adoption comes risks. And when we started this journey, if you take a step back, at least in pharma life sciences late last year, as ChatGPT was catching fire, the initial reaction of pharma companies was just to block access.

00:18:05:03 Like, let's just block access to the generative AI technology and not let anyone do it. But then over time, over a few weeks, we saw an evolution in the thinking because leaders quickly realize that this is a technology that you cannot control. This is not something that you can say no, people will not have access to. Which then led to what I thought was pretty cool.

00:18:24:28 I think it's pretty cool that people have started focusing more on education and training around the technology versus blocking it outright. People are now focused more on making sure employees, including leaders, understand what this technology is, how it can help them, and what are some of the risks. So there's a very clear discussion happening in pharma life sciences companies, especially in the scientific clinical space.

00:18:50:17 People are very clear about what this can do now, what the potential is in the future, and what risks they should avoid today. So there's a lot of training, a lot of education happening. And I also see a good benefit of this: the empowerment of employees. So that's pretty positive. And we see that happening not just at the employee level.

00:19:09:10 This goes all the way to the C-suite level. We have had several discussions with industry leaders where the board of directors is asking questions around generative AI. What does it mean for the company today, what does it mean for the company five years from now, ten years from now? So those questions are also being answered, and education is a big part of answering them, right?

00:19:29:18 Making sure they understand what the potential of this technology is and what's feasible now versus in the future. So we are seeing education, knowledge, awareness happening at all levels in the company, all the way from the board of directors, the C-suite down to the employee level, which is also driving adoption.


00:19:46:16 Maybe a few things to add to that, Sid, because I think you're spot on in terms of the different types of training we've seen organizations undertake. I've seen a few other threads evolving. So one, to tackle your point about employees: I firmly believe most employees want to be responsible stewards of their organization. So if you give them the appropriate instructions for how to use technology, they'll use it the appropriate way.

00:20:05:24 And you need to reinforce that through a variety of means. So we've seen many organizations undertaking training programs to articulate the governance, the requirements and other types of limitations or sensitivities, the don't-do-silly-things-on-the-Internet type of guidance to give to their employees, just so that there's consistency across the organization. But it is also an opportunity to take a step back as an enterprise and think about what the broader strategy for AI is entirely.

00:20:33:10 What is the vision for using AI? Where do we think that it will have impact? I don't mean specifically generative AI, I mean all of AI, because at the end of the day it is really difficult to differentiate whether something is truly only generative AI or only simple, traditional AI, right? And defining an enterprise strategy allows the organization to align; that strategy should be consistent with compliance, risk and other legal considerations, but also with business value.

00:20:59:09 And where we see opportunity and where we want to invest our time and our resources. And if we can do that effectively, then this is an opportunity to educate the broader enterprise on what that strategy is, where we're actually trying to go, where this ship is moving toward if you want to say it that way. And you'll find that if we do that appropriately, employees, our staff are remarkably creative about use cases that will help us drive toward that goal.

00:21:22:16 So it's an opportunity to identify and distill additional use cases and additional nuggets of value that exist across the enterprise and allow teams to see how they and their day-to-day work can inform or influence that broader strategy, which I think is quite powerful.


00:21:37:19 Very, very helpful. So I want to ask you guys about trends. I want to ask you about what you're excited about that's happening right now, but I'm worried that you're going to talk about one of the 10,000 press releases about generative AI that we saw this week. And then this episode will essentially be out of date before it is even released out there.

00:21:57:10 So my challenge to you is to talk about the changes that you see and the trends that you see that are exciting to you. But about the more fundamental ones, the more they may not get as much attention, but they are reshaping the industry. And maybe there's something new in data science or in how we think about jobs of the future, how we think about the use of technology or new business models that may arise in the future.

00:22:21:27 So I know this is tricky but help me and the listeners. Tell us what you're excited about that is a more fundamental tectonic shift as opposed to just the 10,000 press releases.


00:22:33:16 So let me take a crack at this. It's not just trends. These are the things that I personally am really excited about. The first thing is around use cases and applications. I'm seeing a lot of great stories around applications of AI, including generative AI to use cases in the life sciences space. And I talked about clinical trials. How can I make my clinical trials designed better?

00:22:55:04 How can I use data that is unlocked in my company's data warehouse, in PDF documents, word documents, how can I unlock it more easily and apply it to trial design solutions? Trial execution solutions? That's a big area. We are seeing a lot of use cases in the discovery space, like drug discovery space, wherein companies are using technologies such as generative AI to unlock more insights from all the data they had stored.

00:23:20:06 And it's not just about unlocking insights, and this is where generative AI becomes pretty cool. It can take the creative license to come up with new ways of approaching drug discovery that augment human beings. It's almost like having a companion scientist working with you, trying to come up with alternate formulations, alternate pathways of discovery.

00:23:42:05 So that's a big area that I'm personally excited about: the applications of generative AI across pharma life sciences, all the way from discovery to clinical, supply chain, marketing, you name it. There are applications, and companies are adopting them. So that's pretty cool. The second thing that I'm seeing from a trend perspective is the slow but steady realization that this technology is going to have a big impact on the way the workforce is structured, the way the workforce is set up over the next three to five years.

00:24:09:18 That is a realization that's happening, which is great. I see this not just as a workforce change; it's also an organization model, operating model change that's going to happen over the next few years as organizations realize that they can have a digital FTE, an equivalent of a digital worker, to augment their existing workforce to drive higher efficiencies. This does not mean that you would not need humans.

00:24:32:05 You would still definitely need your regular employees. It's just that they may be able to focus on higher value-added work while the AI takes on some of the transactional, mundane aspects of their daily work. And the third trend that I'm seeing is from a technology perspective, and being an engineer by training, this is super exciting.

00:24:50:23 What I expect to see over the next 6 to 12 months, at least in the pharma life sciences space, is more domain specific models, models that are trained on life sciences and health care domain data that you can apply directly to your use case. That's a big trend. I expect models to get more accurate and more autonomous, with more agent-based frameworks, and also more multimodal models, able to handle image, text and voice. That's a big thing that's coming up.

00:25:17:21 And the last thing is scaling. I do expect that over the next 6 to 12 months you will see a lot of generative AI solutions at scale, and for that to happen, we need to fine tune a couple of fundamental areas around data access, making sure that the infrastructure scales, that the models are able to scale, and that we are able to govern the models.

00:25:38:13 So those areas are still being worked on. I expect those to be done in the next few months. So scaling is another big trend I'm seeing. So use cases impact on workforce and operating model and the technology itself evolving to becoming smarter and scaling up are the three broad areas that I'm focused on. Ilana, what are you seeing?


00:25:57:06 I echo all of yours, especially the shift to more specific use cases, because I think that that's where the real value will come: moving from general purpose models to something that's a bit more tailored to specific industries or specific functions at specific points in time, so these use cases have the utility that we really need out of them.

00:26:16:07 But two others that I personally am excited about, and these might seem very boring in contrast with the exciting ones that Sid is talking about. One is that the excitement around generative AI has done an amazing thing to open everyone's minds to how they can use data. Period. Which means that we now have more use cases for basic AI or traditional AI, but also just dashboarding and robotics.

00:26:42:06 And I think that the value that we can get out of implementing those capabilities in complement with one another is really substantial. We've been hunting for AI use cases for so long, trying to prove that there is value. We've come over this hurdle now significantly, where you no longer have to prove to people that there's value in pursuing AI capabilities.

00:27:00:27 We just have to find the right technology for the right problem. And I think that that is going to have a big impact on broader digital transformation efforts. It also gives a justification for some of the less interesting activities that organizations have to undertake. All of your data platforming and transformation work, for instance, all of the infrastructure work that has to go into making these types of systems effective, still has to happen.

00:27:22:13 But now we have more attention on why that's a necessity. So core capabilities that businesses need in order to get the value out of these systems I think is another big trend we see. But the third one, also somewhat on the more practical side, I think, is that you can't build everything. And I think there's now a recognition that organizations can't go in 3000 different directions at the same time.

00:27:42:10 They have to prioritize their efforts. They have limited resources from a people standpoint, from a compute standpoint, and financially as well. So funneling attention through some type of an intake process, to prioritize efforts and find higher value use cases that are less resource intensive, is going to be an area that organizations, I think, will end up focusing on in the near future, just so that they can keep moving. Otherwise, we'll be spending in a lot of different directions.


00:28:10:01 Couldn't agree more, Ilana, on everything that you said. And as we keep saying, there's definitely some amount of hype associated with the technology, but we are seeing organizations, especially in the life sciences space, realize the importance and the gravity of the situation. Every single pharma company that we talk to, as we talk more to them, understands more about the technology, and they realize that this is going to bring a fundamental change in the way they operate. So they're taking it very seriously.


00:28:36:12 That's very profound. Ilana, we'll give you the last word. Let's conclude with an example, because an example makes things more real. And for us on this recording, nothing's more real than the transformation that our own company, PwC, is going through with respect to Gen AI. Ilana, you're leading a lot of those efforts to bring us into the AI age, so tell us the story of PwC's journey in the space.


00:29:04:20 I've been quite proud of our firm, just to be quite candid, because we moved very quickly early on with a recognition that we needed to invest in this space, but in a methodical manner. Several months ago, we had a big press release that highlighted our massive investment in the space, $1 billion in generative AI just in the U.S. firm, which, if you look globally, is an even larger number.

00:29:26:01 And the team that has been pulled together to support those efforts extends across all of our different business units, all of the different practices, and all the different clients that we serve, so that we can move methodically in building capabilities for ourselves and for our clients. We are a professional services firm. At the end of the day, we're a people business, and so we are really likely to be disrupted by some of these technologies.

00:29:50:08 So we need to think about how we incorporate them in a meaningful way that's consistent with our obligations to our clients, and that's been core to our mission as a team in terms of which capabilities we build. One of the investments that we made early on was establishing a generative AI factory model, and this factory is amassing resources from different spaces across the enterprise.

00:30:11:21 So we have data scientists, model engineers, we have project managers, we have business analysts, we have people focused on risk and responsible AI all working together to progress different use cases. And what we have found is that through the opening of our intake process to a wide variety of use cases across the firm, a lot of people have really great ideas.

00:30:31:19 We found that there are a lot of commonalities in terms of what we build, and those commonalities can allow us to build more templated capabilities, meaning that we have teams now that are specialized in specific tasks like summarization. We all have to read lots of big documents and try and summarize them, pull out the five key themes or other types of points.

00:30:50:09 Or maybe we need a QA capability where we want to upload some type of content and ask it questions to better understand what it's trying to tell us or what content is contained within that, or provide better visualization capabilities on top of data. All of those have specific patterns, and so organizing our teams that are suited to building repeatable capabilities on top of those types of patterns has been a way that we've organized our own factory structure.

00:31:16:01 It's allowed us to scale a large number of use cases really quickly, and also allowed us to learn hard lessons very quickly, so that we can move on and identify, or reframe our thinking about, where there is opportunity in the generative AI space. That's been critical from my perspective for helping us think about where there's value for our clients too, because we test all the stuff on ourselves.

00:31:38:03 So by the time we have a conversation with an organization about it, we've usually already been down the pathway in something very similar and we can bring some practical experience.

00:31:46:03 I just think we're so early on in the generative AI world that we are all learning from one another and having a capability internally within our organization to stress test as well has been beneficial for me at least in thinking about what's realistic in the market.

00:32:00:02 And then as part of this capability too, and Sid mentioned this earlier, we rolled out a massive training program that all of our staff in the US firm are required to complete, which covers not just what generative AI is, but also our firm's perspective on where there is value for generative AI, where we see use cases, and also where our firm sees risks, and how we as appropriate stewards of our organization can utilize this technology in a way that's consistent with the rigorous standards that we have as an enterprise.

00:32:29:29 So all of that together has been a really interesting journey and I really think we're just getting started. That's what's so exciting to me. There's so much more work we can do.


00:32:39:00 Very, very cool. And thanks for sharing this story with us, and thanks, Ilana and Sid, for being with us and having this tremendous conversation. Sid mentioned augmented humans earlier, and this human's knowledge has certainly been augmented through this conversation, so I really appreciate it.

00:32:55:00 For more on these topics and other health industry insights driven by policy, innovation and care delivery changes, please be sure to subscribe to our podcast so that you get future episodes as well as check out the classics. Until next time, this has been Next in Health.


00:33:17:04 This podcast is brought to you by PwC. All rights reserved. PwC refers to the U.S. member firm or one of its subsidiaries or affiliates and may sometimes refer to the PwC network. Each member firm is a separate legal entity. Please see www.pwc.com/structure for further details. This podcast is for general information purposes only and should not be used as a substitute for consultation with professional advisors.

Contact us

Jennifer Colapietro
Cloud & Digital Leader, PwC US