Ethics matter: Cultivating a responsible approach to the next generation of AI

Summary

  • As artificial intelligence (AI) incorporates emotion into modeling, it will be critical for companies to imagine both the vast potential of new technologies and their potential unintended consequences. 
  • It is essential to establish a framework for ethical AI before any product or service is designed or built. 
  • PwC's AI Toolkit encompasses six key areas.

It’s common to encounter automotive safety systems that use artificial intelligence (AI) based on objective inputs, such as distance and speed, to avoid crashes. But the next generation of such technologies could employ more subjective inputs, based on in-cabin video and audio, to determine whether a car needs to compensate for a drowsy driver — or even one who’s texting. 

As computers attempt to interpret and predict human behavior, it will be critical for companies to imagine both the vast potential of these technologies and their potential unintended consequences. Companies should assess whether they have adequate guardrails in place to maintain trust.

AI gets real

Smart Eye is one company working to usher in the next generation of AI technologies. For Rana el Kaliouby, deputy CEO of Smart Eye and co-founder of Affectiva, the drive to make AI more socially and emotionally intelligent came out of a desire to improve communication in an increasingly digital world. Much of human communication is nonverbal — and subjective. What distinguishes a smile from a smirk, for instance? 

Now that visual and audio sensors are ubiquitous, programming machines to understand subtle facial cues or tonal shifts could be transformative. Earlier this year at PwC’s EmTech Exchange, Smart Eye demonstrated how its Emotion AI software can be used to gauge people’s reactions to advertising as well as to help automotive safety systems detect unsafe conditions based on driver body language and sounds. 

The EmTech Exchange also featured Moxie, a companion robot from Embodied, a robotics and AI company. Designed for children ages 5 to 10, Moxie features an emotive face with anime-style eyes and engages children in conversation to help promote social-emotional skills. Using AI, it invites children to express their feelings to a nonjudgmental listener while offering games and tasks that aim to improve emotional well-being and social functioning. 

Moxie was conceived to help bridge the gap in pediatric mental health services. The American Psychological Association’s 2022 trends report declared that children’s mental health is in crisis and therapists are in short supply. It cited data from the Centers for Disease Control and Prevention showing that only about 20% of the one in five children known to have a mental disorder received care from a professional — and that was before the pandemic.  

For Embodied, founded in 2016, codifying an ethical AI framework has been a top priority. It’s also an ongoing conversation, as the company fosters internal debate about how to handle different scenarios as they arise. “It’s an evolving theme that requires constant attention,” says Paolo Pirjanian, Embodied founder and CEO.

In considering how to build trust with parents and children, Embodied has focused on several key areas: total data transparency; consent; user security and privacy; respect for a child’s culture, values and beliefs; and data anonymization. 

Embodied uses anonymous recordings to continually train Moxie’s algorithm, but the system purges personally identifiable information. So, as Moxie converses in real time, learns about the child and adapts its behavior, data remains confidential. While the robot can offer many services, it is not a substitute for a human being. 

“If it flags a potential problem, it will encourage the child to talk to a trusted adult,” Pirjanian says. 
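
Embodied hasn’t published its pipeline, but the general pattern it describes, scrubbing direct identifiers from a recording before it is retained for training, can be sketched in a few lines. The patterns, placeholder tokens and helper below are hypothetical, not Embodied’s actual implementation.

```python
# A minimal, hypothetical sketch of PII scrubbing before a transcript is
# retained for training. The patterns and placeholder tokens are illustrative;
# this is not Embodied's actual pipeline, which is not public.
import re

# Toy patterns; a production system would add named-entity recognition and
# dedicated PII-detection tooling rather than rely on regexes alone.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def anonymize(transcript: str, known_names: list[str]) -> str:
    """Replace direct identifiers with placeholder tokens before storage."""
    text = EMAIL.sub("[EMAIL]", transcript)
    text = PHONE.sub("[PHONE]", text)
    for name in known_names:  # e.g., names supplied at account setup
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    return text

print(anonymize("Ava told Moxie to email mom@example.com", ["Ava"]))
# -> "[NAME] told Moxie to email [EMAIL]"
```

A real pipeline would layer named-entity recognition and human review on top of pattern matching, since regexes alone miss many identifiers.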

Not surprisingly, the challenges and stakes surrounding ethical AI continue to grow. The natural speech capabilities that Moxie and other digital systems use may reach a human level within a decade. 

That’s not without risks: Systems like Moxie and Smart Eye’s Emotion AI could potentially be commandeered to deceive or manipulate people into behaving in undesirable ways. In addition, this type of data, if not stripped of personally identifiable information, could be used by companies or a government entity to gain deep and private insights into an individual’s attitudes, health and well-being. 

On the other hand, the potential for AI to simulate empathy, and even to help de-escalate emotional problems, could be profound. As el Kaliouby points out, “What if AI could sense that you're frustrated and instead of it escalating, it could actually apologize? What if it could change tack and react accordingly? What if every device in the Internet of Things had an emotion chip or a mood chip that could understand your emotional state, and again, personalize the experience for you?” 

The ability of machines to read body language and analyze words — and respond appropriately — could revolutionize the way companies and consumers sell, buy and interact.

Responsible AI is about trust

One thing is certain: the opportunities and challenges will continue to grow. It is essential to establish a framework for ethical AI before any product or service is designed or built. The guardrails should extend beyond good intentions, because conditions change and companies get acquired. 

“It’s critical to act within the boundaries of your competence,” Pirjanian says. What’s more, it’s wise to explore ways to decouple — or at least abstract — financial pressures stemming from venture capitalists and shareholders eager to receive high returns, he notes. “This has to be part of the discussion and terms upfront.”

PwC’s AI Toolkit encompasses six key areas spanning data, policy and regulatory controls: 

  • Bias and fairness. It’s vital to confirm that steps have been taken at every stage, from training data sets to the way an AI system is used, to help reduce the risk of bias (see the sketch after this list).
  • Interpretability and explainability. How was a decision made? Organizations should avoid a black-box effect by carefully reviewing how the model was trained and validating its outputs.
  • Privacy. The AI system should protect sensitive data, particularly as consumer attitudes and expectations change and stronger data regulations take shape.
  • Security. Any business using AI should weigh security risks and have a clear understanding of how to lock down highly sensitive data.
  • Robustness. Does the AI system behave as intended? AI systems should demonstrate stability and consistently meet critical performance standards.
  • Safety. An organization should take the necessary steps to help decrease the risk of a direct or indirect negative outcome. This includes preparing for the possibility of a future acquisition.
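
As a hypothetical illustration of the bias and fairness check named in the first item above, the sketch below computes a demographic parity gap, the difference in positive-outcome rates between two groups of model decisions. The toy data and the 0.1 review threshold are invented for illustration.

```python
# A hypothetical sketch of one common fairness check: demographic parity,
# i.e., whether an AI system's positive-outcome rate differs across groups.
# The data and the 0.1 threshold are invented for illustration.

def positive_rate(decisions: list[tuple[str, bool]], group: str) -> float:
    """Share of decisions for `group` where the model produced a positive outcome."""
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# Each pair is (group label, model approved?).
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

gap = abs(positive_rate(decisions, "A") - positive_rate(decisions, "B"))
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50

if gap > 0.1:  # flag for human review; the threshold is policy-dependent
    print("Potential bias: outcome rates differ materially across groups.")
```

Demographic parity is only one of several competing fairness definitions; deciding which one applies to a given system is itself an ethical and policy choice.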

Building a better AI ethics framework

Establishing an AI framework requires open, honest and challenging internal discussions that straddle philosophical concepts and technical factors. As a result, it’s essential to establish cross-functional teams — data scientists, business analysts, lawyers, privacy advocates and security specialists — to study AI issues and arrive at ethical solutions. It may also require outside advisors or an independent AI ethics board. 

And while technologies such as zero-knowledge proofs, which can validate a system’s outputs without revealing the underlying algorithm, and formal software verification may help confirm the transparency and veracity of algorithms, these techniques remain nascent.
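
To make the “validate without revealing” idea concrete, the sketch below uses a hash commitment, a much simpler primitive than a zero-knowledge proof, to show how a vendor could let auditors confirm that a deployed model matches the one that was audited, without ever publishing the model. The model bytes and audit workflow are hypothetical.

```python
# A hypothetical sketch of "verify without revealing" using a hash
# commitment. This is a far simpler primitive than a zero-knowledge proof,
# but it shows the shape of the idea: the vendor publishes only a digest,
# and an auditor who is later given access can confirm the deployed model
# matches the one that was committed to.
import hashlib
import secrets

def commit(model_bytes: bytes, salt: bytes) -> str:
    """The published digest; the model itself stays private."""
    return hashlib.sha256(salt + model_bytes).hexdigest()

def verify(model_bytes: bytes, salt: bytes, published: str) -> bool:
    """An auditor with the model and salt checks it against the commitment."""
    return commit(model_bytes, salt) == published

model = b"serialized model weights"  # stand-in for a real model artifact
salt = secrets.token_bytes(16)       # random salt resists brute-force guessing
digest = commit(model, salt)         # only this digest is made public

assert verify(model, salt, digest)                 # the audited model checks out
assert not verify(b"swapped model", salt, digest)  # a substitution is caught
```

In the end, it’s up to humans to confirm that AI systems are safe, secure and fair. By designing and building ethical AI upfront, everyone can come out ahead.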


Anand Rao

Global AI Lead; US Innovation Lead, Emerging Technology Group, Boston, PwC US

Scott Likens

PwC’s Global Artificial Intelligence Leader and US Trust Technology Leader, PwC US
