Is privacy possible in the age of AI?

Business leaders are at a crossroads with generative AI—grappling with its vast potential while untangling the intricacies of privacy and trust. Join our hosts Lizzie and Ayesha as they facilitate a rich dialogue with two experts working on the leading edge of AI. Keith Enright, Chief Privacy Officer of Google, sheds light on the complexities of managing privacy in an AI-driven ecosystem. And Mona de Boer, Data and Technology Partner at PwC Netherlands, outlines the AI regulatory landscape and addresses both risks and opportunities facing companies that use GenAI tools.

Keith Enright: We’re not going to stop the technology; forward progress is going to continue. We’re going to need to approach it with humility, just as we had to in some of these other areas where privacy butts up against other important values. I think AI/ML [artificial intelligence/machine learning] is going to create a lot of those conversations in coming years.

Mona de Boer: Businesses that get this right will be successful in the broadest sense: they will be successful in the impact they make on society, on their customers, and on other stakeholders. They will also be successful as a business.

Keith: Any forward progress in technology involves some rational assumption of risk. And the way Google is thinking about that is, OK, how do we proceed both boldly and responsibly, because the promise of these technologies is just so overwhelming?

Ayesha Hazarika: From PwC’s management publication, strategy+business, this is Take on Tomorrow, the podcast that brings together experts from around the globe to figure out what business could and should be doing to tackle some of the biggest issues facing the world. I’m Ayesha Hazarika, a broadcaster and writer in London.

Lizzie O’Leary: And I’m Lizzie O’Leary, a podcaster and journalist in New York.

Ayesha: Today, we’re talking about how generative AI is changing the world and raising fresh questions about privacy.

Lizzie: How much do we really understand about how it all works? And what opportunities are there for companies that get it right?

Ayesha: To answer those questions, and more, we talk to Keith Enright, Chief Privacy Officer at Google, who shares his unique perspective on leading an organization that’s a key player in the AI race.

Lizzie: But first, we’re joined by Mona de Boer, Data and Technology Partner at PwC Netherlands, who advises organizations on the application of data and technology. Mona, welcome to the show.

Mona: Thank you for having me.

Lizzie: When it comes to your conversations with your clients around generative AI and the fast-moving innovation that comes with it, what are they most worried and most excited about?

Mona: A lot of organizations see huge potential in generative AI for immediately improving products and services, and the experience they deliver to their customers. At the same time, I think what is most top of mind currently is safety around data: not only the data of natural persons, but also the protection of business data, such as intellectual property. And also, I would say, ethics: the boundaries of how you can treat data, and what is accepted by customers and other stakeholders.

Ayesha: Now, we live in this age where so often ordinary people are publishing all kinds of personal information about themselves on social media. Do you think people still care about privacy?

Mona: Yeah, I do definitely think that’s still very important. Different people obviously have different perspectives on privacy. And I also see something you could call a generational difference: people who are digital natives tend to look at privacy differently. At the same time, I do think it’s also important that people are able to make choices about what benefits they are willing to trade their data for. But then it’s important that people are educated about what to pay attention to when they do so.

Ayesha: And just to follow up on that, if privacy is still important, do you think businesses are listening and incorporating this into their products and services, including how they think about AI?

Mona: Businesses are definitely trying, but even with the best intentions, I do think this is hard, because one of the core ingredients of AI systems is data. To really develop a system that people find useful, you need data as an ingredient.

Lizzie: When you think about this in your job, how do you combine that need for data with ethics and advise your clients?

Mona: Part of it is obviously what is increasingly captured in regulation: rules of play for businesses around what is acceptable from a societal perspective. But it involves choices not only by businesses but also by their customers; I call it a two-sided story. So I do think it is necessary for a business to be transparent towards consumers about what data it needs to develop certain products and services, and what benefits that brings. Currently, in the public debate, I read a lot about the risk side of this, but too little about the benefits weighed against it and the choices consumers can then make. What I see as important for businesses going forward, to be both innovative and respectful of privacy, is being more transparent and communicative towards their consumers. An important part of trust will also come from that transparency and communication.

Ayesha: Mona, we’ll come back to you in a few minutes to unpack some of the broader risks and opportunities around working with AI. First, though, Lizzie, you spoke to Keith Enright, who is Chief Privacy Officer at Google. Tell us a wee bit about that.

Lizzie: Yeah, that’s right. Keith has over 20 years of senior executive experience in data privacy and protection, and I was really curious to find out whether and how privacy concerns change in the age of generative AI.

Keith, when we think about technology and privacy—you’ve been doing this since the ’90s, and so much has changed in that time. I wonder, first, if you could just lay out what generative AI is for people who don’t fully understand it.

Keith: Sure. I mean, I am happy to give it a good old college try. Very, very broadly speaking, when I think about AI, as I think about it in my own practice, my work at Google, its relationship with data protection, I’m looking at these bundles of technologies that are capable of amplifying the creative potential of human beings.

Lizzie: How do you think about privacy in the context of generative AI? Because those are two huge concepts, and yet they’re really interwoven.

Keith: You need data for nearly everything that anyone does. So, yes, we need to be thoughtful about how we are handling and processing data, so we’re doing it responsibly and always acting in the best interest of the user. Right now, we’re at this incredible inflection point. Even though Google has been talking about AI/ML for the last decade, we’ve got [Google CEO] Sundar [Pichai] on the record for many years saying he believes that AI/ML is going to have a more profound impact on human civilization than the discovery of electricity or fire. Right? Like, that really does drive home what a big deal we think…

Lizzie: …pretty big-ish…

Keith: …right? When you say things like that, you fully anticipate, and in fact invite, those conversations with policymakers. You’re saying, we’re doing something that we think is important and meaningful here, and we wanna have a dialogue. We wanna make sure that we’re doing it in a responsible way. Now, how that interfaces with privacy and data protection people like myself is, we’ve spent 20 years trying to do precisely that in the context of protecting user privacy. Many of the lessons and skills that we learned over those 20 years—how do you deal with rapidly changing technologies? How do you get regulators and policymakers sophisticated on those technologies as they’re changing? All of those challenges, all of those lessons are useful in the context of AI/ML. I would say the challenge we are anticipating for AI/ML is that it’s moving even faster. And the potential benefits, the upside of getting it right, are even greater.

Lizzie: I want to start digging into a kind of fundamental thing, though. If you are a regular old user, how should you think about your data—whether it’s from an online post, a photo, whatever—being scraped for a large language model?

Keith: Every user is different. I think we tend to underestimate user sophistication. I do believe that organizations have a responsibility to do the best we can to communicate: what are we doing? How are we processing information? How is it being used to generate value for the user, and to generate value generally? And then, create really powerful controls that users can interact with. One metaphor that we’ve used a bunch of times over at Google is, like, make those controls a well-lit room, have a meaningful default set for users, all these things. Like, we’ve been working on this for a long time. We’re never gonna stop.

Lizzie: So that you always have to opt in, so that you know exactly what you’re clicking on, those kinds of policies?

Keith: Yeah. So, like, let’s talk about that for a second. The responsibility for organizations like Google really has to be, how do we invest in giving the right information to the right user at the right time so that they’re in control, and that the services are delivering the value that they expect in a way that they’re comfortable with? And that’s a much harder thing to solve for than just making everything opt-in. Because if everything was opt-in, you’re pushing all of the onus onto the user to presumably read thousands of pages of technical explanation about all the things that are happening in the course of using a technically complicated service. And you’re forcing them to do things that they have neither the time nor the willingness to do on a daily basis.

Lizzie: So, you’ve sort of led me to a question about how Google has arrived at some of these processes and policies around privacy. You joined Google just as it was shutting down Google Buzz. For people who don’t remember Google Buzz, it was a short-lived social networking tool that enrolled users by default. And that led to a bit of a dustup between Google and the Federal Trade Commission, because some user information was exposed to people’s followers. I wonder what the learning experience was after Google Buzz.

Keith: I have not had a conversation about Buzz in many years, so I appreciate your question, because it’s a good one. As you say, we launched a, we can call it a failed social networking product. And part of the reason it failed was Google still operated in some respects like a startup. Before we would release a product to the public, we would release it internally to Googlers. And they would kick the tires, try to use the thing, incorporate it into their daily lives, and then provide feedback. One of the lessons learned from Google Buzz, and it seems obvious in hindsight, was that at the time we were presuming and extrapolating that all of the incredibly diverse and interesting and varied users all over the planet were going to behave and interact with a product, and have expectations of a product, that were somehow analogous to those of a Google engineer sitting in Mountain View, California.

Lizzie: Hmm.

Keith: Turned out that wasn’t quite right—sort of, the entrepreneurial way that users engaged with the product. They stress-tested it in ways that didn’t surface really well in our internal testing. We became more sophisticated in thinking about privacy controls in the same way we think about our other products: we might have certain ideas about how they should work, but we actually need to engage with the community. We need to engage with regulators and policymakers, approach it with more thoughtfulness, but also with more humility—not presume that we understood the right way to do it. There were a lot more perspectives to bake into it.

Lizzie: So let’s say Google is creating and launching a new product. In 2023, where does privacy come into that chain of decisions? Is it baked into the cookies at the beginning, or does it get sprinkled on at the end? Like, how does that work?

Keith: Whenever an engineer or a product manager anywhere at Google has an idea for a product or a feature, they need to engage with the privacy elements of our launch process. That then goes through a very mature, programmatic review with subject matter experts on whatever domains might be implicated. And then you really think about privacy as a life-cycle component: all the way through, from the moment that product is ideated in a conference room somewhere, to the moment that we potentially decide to sunset that product or migrate to a different one.

Lizzie: This might sound like a very basic question, but I wonder, is there a difference between general thinking about privacy now and thinking about privacy when it comes to generative AI?

Keith: So, no, I don’t think that people are thinking about privacy differently. What I will say is, you’re never going to get privacy so unequivocally right that it doesn’t create a conversation about trade-offs, compromises, and exchanges for AI/ML going forward. The benefits of these technologies are going to be so overwhelming, and their disruptive impact on so many aspects of people’s lives so significant, that we’re not going to stop the technology; forward progress is going to continue. We’re going to need to approach it with humility, just as we had to in some of these other areas where privacy butts up against other important values. I think AI/ML is going to create a lot of those conversations in coming years.

Lizzie: So you’re leading me to talk about regulation. Does this make it harder to be a multinational company? You are not only engaging with the US regulatory environment; Europe has had a lot of conversations about privacy that the US has not had at a federal level.

Keith: One of the things that makes my role at Google—and I would say many roles at Google—both among the most challenging and the most rewarding that I could imagine is exactly what you just described. When I began engaging with privacy regulators around the world, I went in with hubris, with this presumption that they just didn’t understand our technology sufficiently well, that they didn’t really understand the policy objectives. I could not have been more wrong. The fact that regulators and policymakers taught me humility in my first few years with the company has served me incredibly well. Regulators have very complicated things they’re trying to balance. That makes it really difficult, and it can slow forward progress. But we’re also not just trying to solve a problem for an individual company in the United States of America. We are actually trying to help shape and inform policy that’s gonna deliver economic benefit, economic growth, and strong user protections, keeping users safe whenever they are using technology anywhere in the world.

Lizzie: In terms of what you do, it is remarkably complex, and it happens at a fairly astonishing speed, especially when we’re talking about AI/ML. So how do you plan for things you can’t see coming, knowing that they are out there, and then knowing that your products will probably be giving users, you know, the chance to work with things that maybe they can’t fully comprehend?

Keith: There is something unique and different about AI. It’s great to start with principles; we were one of the first companies in the world to articulate our AI principles publicly. Then you build out internal governance structures and controls. People should be accountable, processes should be documented and consistently applied, and they should be auditable later on, so that if something appears not to be operating in the way that was expected, we can figure out why and lean in. We may not be able to fully explain all of the inner workings of the technology. But we can make sure that we are building structures around the people who are building the technology, so that they feel accountable, that accountability is guiding their work, and we can demonstrate that kind of accountability. I don’t think there is a clear answer on this one. Yes, any forward progress in technology involves some rational assumption of risk. And the way Google is thinking about that is, OK, what do we know? What don’t we know? And how do we proceed both boldly and responsibly, because the promise of these technologies is just so overwhelming?

Lizzie: Do you think about privacy and AI and have those two-in-the-morning thoughts? Like, what keeps you up at night?

Keith: I think about privacy and AI every day. I have a responsibility to our billions of users all over the world to be hyper-fixated on that. The privacy issues with AI are not keeping me awake at night. I think they are manageable. I think they’re novel; the technology will create new challenges, as every technology has. We will rise to the occasion, and we will mitigate those risks. Beyond the pure privacy risks, I am concerned about the tension between the imperative of delivering the benefits of these technologies (to deal with global warming, to improve public health outcomes for people all over the world, to extend human life, to do these incredible things that are right in front of us) and doing it in a way that, you know, we don’t go sideways or make some sort of mistake along the way that creates a regulatory imperative to slow us down or disrupt the technology itself. That keeps me awake right now.

Lizzie: Keith Enright, thank you so much for talking with me.

Keith: Yeah, it’s a pleasure. Thank you so much.

Ayesha: We’re here with Mona de Boer. Mona, Google is a large corporation, which has a deep focus on AI, but for businesses that aren’t in the AI field or even in the tech sector, how can they make the most of AI?

Mona: I think one of the biggest changes of the past months is that it is no longer necessary to have a really heavy tech function as a business to be able to benefit from AI and make it part of your products and services and the way you interact with your customers. The obstacles that were around a while ago have shrunk, so it has definitely become easier for businesses to start using and integrating AI.

Lizzie: And if companies are doing that and leveraging generative AI, how should they be thinking about privacy specifically?

Mona: I always say it’s like building a house. When you use a foundation model or a large language model, for example, you get, sort of, the first two floors of the house from somewhere else, and then you can build on that as an organization. The alternative is that you build all the floors yourself. So these are two routes to do good things with AI. But the risks don’t change, if you ask me. That means that when organizations are using foundation models and generative AI, the information you make part of your prompts is equally sensitive to data privacy risks, loss of IP, and ethical risks.
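
To make Mona’s point concrete: one common safeguard is to screen prompts for sensitive data before they leave the organization for an external foundation model. The sketch below is a minimal, hypothetical illustration in Python; the regex patterns and the redact_prompt helper are assumptions for this example, not a production filter, and real deployments would rely on a vetted PII-detection service.

```python
import re

# Illustrative patterns only (an assumption for this sketch); production
# systems should use a vetted PII-detection library or service.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace likely personal identifiers with typed placeholders
    before the prompt leaves the organization."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

# Example: the redacted prompt keeps its meaning but drops the identifiers.
raw = "Summarize the complaint from jan.devries@example.com, phone +31 6 1234 5678."
print(redact_prompt(raw))
# Summarize the complaint from [EMAIL], phone [PHONE].
```

The point is structural, echoing Mona’s claim that the risks don’t change: whatever goes into a prompt should pass the same data-classification rules as any other outbound data flow.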

Ayesha: And, Mona, what kind of questions are you getting from your clients when it comes to AI?

Mona: What you currently see is that many businesses are moving away from the first buzz around generative AI. They are now entering a phase where they really want to see the exact potential and translate that into use cases that will show their benefits in one or two years. On the risk side, I see a lot of attention: organizations seeing that generative AI is increasingly used in their business, with employees just bringing it into the organization, and wondering what kind of do’s and don’ts they should formulate around that. Also, how can we stimulate innovation: make sure that risks are appropriately addressed, and at the same time not lose the pace of innovation.

Lizzie: Well, let’s talk about a specific balance question: the idea that more data is always better. Do you think there is an inevitable trade-off between performance, meaning more data to power a generative AI application, and privacy, meaning the protections around that data?

Mona: When you’re innovating, you don’t wanna restrict yourself immediately. Right? So that is where the tension is. In general, the more data, the more possibilities you have to do innovative things. At the same time, I do believe that in the next years, we will see more and more approaches that find other ways to deal with that tension. For example, the use of synthetic data is currently being looked into: making sure that privacy is not an issue, while at the same time being able to develop really high-performing models. A lot of investigation is also going into proxies: how can you, with a couple of data fields, come to the same or even better performance? This is still an area very much in motion, but there are methods in development that will address this specific issue.
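
As a rough illustration of the synthetic-data approach Mona mentions, the sketch below fits simple distributions to a made-up customer table and samples new records that preserve aggregate statistics without reproducing any real individual. Everything here (the columns, the Gaussian fit) is an illustrative assumption; production approaches use dedicated synthetic-data tools and formal guarantees such as differential privacy.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "real" customer columns: age (years) and annual spend (euros).
real_age = rng.normal(45, 12, size=1_000).clip(18, 90)
real_spend = rng.lognormal(6.0, 0.5, size=1_000)

def synthesize(column: np.ndarray, n: int) -> np.ndarray:
    """Sample a synthetic column from a Gaussian fitted to the real one.
    Keeps the mean and spread usable for modeling, while no synthetic row
    corresponds to a real person. (Toy approach: it ignores correlations
    between columns and offers no formal privacy guarantee.)"""
    return rng.normal(column.mean(), column.std(), size=n)

synth_age = synthesize(real_age, 1_000)
synth_spend = synthesize(real_spend, 1_000)

print(f"age   mean: real {real_age.mean():5.1f} vs synthetic {synth_age.mean():5.1f}")
print(f"spend mean: real {real_spend.mean():5.0f} vs synthetic {synth_spend.mean():5.0f}")
```

The trade-off Mona notes still applies: a univariate fit like this loses cross-column correlations, which is exactly where more sophisticated synthetic-data methods earn their keep.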

Ayesha: Now, Mona, Keith was very positive about the idea of regulation, but he was also really clear that there’s a long way to go on constructing appropriate and effective guardrails. What do you think about current regulation?

Mona: I think this is very necessary. We have a very powerful instrument in our hands. It has a lot of potential, sometimes beyond imagination. And then I think it’s really sensible to ask: how do we want this instrument to be used, also to protect people and to protect our environment? So, yeah, I’m positive. I think this is necessary. The challenge is the how: how do you translate these objectives into the day-to-day processes of organizations? There is a huge gap there, and I also think that it will certainly take a couple of years before we have a stable situation around that.

Lizzie: Mona, we are talking about such new territory. So there’s gonna be, sort of, a certain amount that we cannot foresee. But I wonder how the use of AI, and specifically generative AI, opens businesses to risk.

Mona: People are using these open-source tools from a basis of trust, and that’s not bad. I do think it’s necessary for businesses to educate themselves, not only on the potential of the tools but also on what could go wrong when these tools are used. And also, what are the limitations of these tools? I’m really impressed with what’s happened over the past months. I think these systems are incredible. At the same time, they do have limitations, and understanding those limitations is also part of addressing the risks for businesses.

Lizzie: Mona, what are the opportunities for the businesses that do get this right?

Mona: Businesses that get this right will be successful in the broadest sense: they will be successful in the impact they make on society, on their customers, and on other stakeholders. They will also be successful as a business. I think one of the things that has changed is that consumers have become very outspoken. When businesses do things that consumers don’t like or don’t trust, the voice of the consumer has become very loud and clear. So I do believe that businesses that are really able to bring that innovation, but at the same time do it in a way that is transparent and gives their consumers choices, will be successful in the broadest sense in their respective markets.

Lizzie: Mona de Boer, thank you so much for joining us on Take on Tomorrow.

Mona: Thank you so much.

Ayesha: So, what an interesting set of conversations we’ve just had, Lizzie. My brain is sort of fizzing. What are your takeaways?

Lizzie: Well, I was struck by how positive Mona was about the opportunities for business. You know, I spend a lot of time thinking about generative AI and data models, and she seemed much more sanguine about the opportunities for businesses, particularly ones that deal with a lot of data to streamline their operations. How about you?

Ayesha: It was great to hear Mona really explain that data is actually the secret ingredient to making AI work, but that you can be responsible with that data, and that’s about transparency and good communication. And I think the other thing that I’ve really taken away, from both your conversation with Keith and our chat with Mona, is that AI can be a brilliant, really powerful tool, but it has to be used in the right way. And they’re both really keen to see good, informed, intelligent legislation and regulation. I thought it was really interesting.

Lizzie: And listeners, if you’re wondering how businesses are approaching the risks and opportunities of generative AI and other innovations in our world, PwC has spoken to over 3,700 business leaders. Keep an eye out for the 2023 Global Risk Survey, coming soon.

Ayesha: That’s it for this episode. Join us next time, when we’ll be recording live at the APEC [Asia-Pacific Economic Cooperation] CEO summit in San Francisco, billed as the most influential meeting of business and government in the Asia-Pacific region. Take on Tomorrow is brought to you by PwC’s strategy+business. PwC refers to the PwC network and/or one or more of its member firms, each of which is a separate legal entity.

Hosts

Lizzie O’Leary
Podcaster and journalist

Ayesha Hazarika
Broadcaster and writer

Guests

Keith Enright
Chief Privacy Officer, Google

Mona de Boer
Data and Technology Partner, PwC Netherlands

Explore further

Seven crucial actions for managing AI risks

Executives need to give higher priority to the fast-evolving risks of generative AI. They can start with a few key trust-building actions.

PwC’s Global Risk Survey 2023

Find out how technology is changing the way leading organizations see risk—and how those changes can create new value and build resilience.

The secret to accelerating performance

This edition of strategy+business explores the factors that set top companies apart from the rest.


Contact us

Matthew Wetmore
Global Industries & Sectors Leader and National Managing Partner, Clients & Markets, PwC Canada
Tel: +1 403 509 7483

Mona de Boer
Responsible Artificial Intelligence Leader, PwC Netherlands
Tel: +31 0887925516