Understanding algorithmic bias and how to build trust in AI

Summary

  • If your AI can’t be trusted, its promise will fall short. Building that trust includes making sure AI models aren’t biased against certain groups of people.
  • AI bias is caused by bias in data sets, in the people who design AI models and in those who interpret the models’ results.
  • Addressing bias is part of a responsible AI approach; it includes establishing governance and controls, diversifying your teams and monitoring continually.

Artificial intelligence (AI) promises to create a better and more equitable world. Left unchecked, however, it could also perpetuate historical inequities. Fortunately, businesses can take measures to mitigate this risk so they can use AI systems — and decision-making software in general — with confidence.

AI promises dazzling value beyond simple automation. Objective, data-driven and informed decision-making has always been the lure of AI. While that promise is within reach, businesses should proactively consider and mitigate potential risks, including confirming that their software doesn’t result in bias against groups of people.

Making AI systems trustworthy has become increasingly urgent. And AI is no longer restricted to the back office; it’s in every area of the business. A quarter of the executives surveyed already report widespread adoption of processes fully enabled by AI. An additional one-third are rolling out more limited use cases. The top three goals for these initiatives include not just the traditional benefits of automation — efficiency and productivity — but also innovation and revenue growth.

Circular bar chart: How far along companies are with AI. Q: To what extent is your company looking to integrate AI technologies into its operations?

AI is spreading ever deeper into business (and the world at large), influencing life-critical decisions such as who gets a job, who gets a loan and what kind of medical treatment a patient receives. That makes the potential risk of biased AI even more significant. The path to managing and mitigating this risk begins with understanding how such bias can occur — and why it can be so difficult to detect.

Why AI becomes biased

The definition of AI bias is straightforward: AI that makes decisions that are systematically unfair to certain groups of people. Several studies have identified the potential for these biases to cause real harm.

A study published by the US Department of Commerce, for example, found that facial recognition AI misidentifies people of color more often than white people. This finding raises concerns that, if used by law enforcement, facial recognition could increase the risk of the police unjustly apprehending people of color. In fact, wrongful arrests due to a mistaken match by facial recognition software have already occurred.

Another study, this one from Georgia Tech, found that self-driving cars guided by AI performed worse at detecting people with dark skin, which could put the lives of dark-skinned pedestrians at risk.

In financial services, several mortgage algorithms have systematically charged Black and Latino borrowers higher interest rates, according to a UC Berkeley study.

Natural language processing (NLP), the branch of AI that helps computers understand and interpret human language, has been found to demonstrate racial, gender and disability bias. Inherent biases, such as negative sentiment attached to certain races, higher-paying professions associated with men and negative labeling of disabilities, then propagate into a wide variety of applications, from language translators to resume filtering.

Researchers from the University of Melbourne, for example, published a report demonstrating how algorithms can amplify human gender biases against women. Researchers created an experimental hiring algorithm that mimicked the gender biases of human recruiters, showing how AI models can encode and propagate at scale any biases already existing in our world.

Yet another study by researchers at Stanford found that automated speech recognition systems demonstrate large racial disparities, with voice assistants misidentifying 35% of words from Black users while only misidentifying 19% of words from white users. This makes it difficult for Black users to leverage applications such as virtual assistants, closed captioning, hands-free computing and speech to text, applications that others take for granted.

Additionally, numerous studies in recent years, including one by the UN, have pointed out that virtual assistants with submissive female voices reinforce gender bias in society.

How do such biases enter an emotionless set of algorithms, which run on hard, cold data?

The short answer: People write the algorithms, people choose the data used by algorithms and people decide how to apply the results of the algorithms. Without diverse teams and rigorous testing, it can be too easy for people to let subtle, unconscious biases enter, which AI then automates and perpetuates. That’s why it’s so critical for the data scientists and business leads who develop and train AI models to test their programs to identify problems and potential bias.

Consider the hypothetical example of an algorithm used to decide which patients should receive expensive, continuing care for a chronic disease. The team creating the algorithm decided to base their model on past patterns of approvals for such care and designed the algorithm to match this historical data set. However, in this illustrative example, Latinx patients — some of whom speak English as a second language and have difficulty navigating the US healthcare system — historically only requested and received this care for far more severe cases than non-Latinx whites. Without awareness of this fact and a determination to compensate for it, the algorithm will continue to assign this care more rarely to Latinx patients, effectively automating discrimination.

In this hypothetical example, even if none of the authors of the algorithm had any bias, they neglected to evaluate the historical data set to determine if there were problems and if so, to correct them.
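To make the mechanism concrete, here is a minimal sketch in Python using entirely synthetic, hypothetical data and the scikit-learn library: a model trained to match historical approval decisions simply reproduces the historical disparity, even though the code itself contains no explicit bias.

    # Minimal sketch: a model trained on biased historical approvals
    # reproduces that bias. All data here is synthetic and hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)        # 0 = non-Latinx white, 1 = Latinx (synthetic)
    severity = rng.uniform(0, 10, n)     # disease-severity score

    # Historical pattern: the underserved group only requested and received
    # care for far more severe cases (approval threshold 7 instead of 4).
    approved = (severity > np.where(group == 1, 7.0, 4.0)).astype(int)

    # Train a model to match the historical approvals, as the team did.
    model = LogisticRegression().fit(np.column_stack([severity, group]), approved)

    # Two patients with identical, moderate severity (5.0): the model assigns
    # the Latinx patient a far lower approval probability, automating the bias.
    print(model.predict_proba([[5.0, 0], [5.0, 1]])[:, 1])

The point of the sketch is that the disparity lives in the training labels, not in the code; no amount of staring at the algorithm alone would reveal it.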


Biased AI threatens bottom lines

Most companies have programs in place to help fight systemic societal injustice: for example, rigorous anti-discrimination policies, programs to recruit diverse talent, training and hotlines to help recognize potential employee bias, employee resource groups to support diverse talent and public pledges such as CEO Action.

Yet all these efforts may be fatally undermined if the AI models used in everyday operations and delivery of work inadvertently discriminate. Biased AI also can lead to poor business decisions. Finally, the reputational hit to a company’s brand from such biased AI could significantly harm sales, recruitment and retention. 

Regulators are ramping up scrutiny: Legislation pending in both Congress and New York City would require companies to examine AI for possible bias.

While only 44% of the executives surveyed in our AI Predictions report said they were aware of increased regulation related to AI ethics and bias, every company needs to prepare for compliance and take proactive steps to mitigate the risks of creating inequities. These steps should begin now, because addressing bias in AI models or decision-making software is quite complex, and not every compliance department or internal audit team is equipped to manage it.

Why addressing AI bias is so challenging

The fight against AI bias is filled with good intentions. Executives understand the need for responsible AI — that which is ethical, robust, secure, well-governed, compliant and explainable. A full 50% called out responsible AI in our AI Predictions 2021 survey as one of their top three priorities. And while 32% said they will focus on addressing fairness in their AI algorithms this year, over two-thirds aren’t yet taking action to reduce AI bias because it can be a thorny and unfamiliar challenge.

Bar chart: Business leaders beginning to focus on mitigating bias and creating responsible AI in 2021. Steps shown: ensure AI is compliant with applicable regulations; ensure AI-driven decisions are interpretable and easily explainable; develop and report on controls related to AI models and processes; improve governance of AI systems and processes; address the issue of fairness.
Q: What steps, if any, will your company take in 2021 to develop and deploy AI systems that are responsible, that is, trustworthy, fair, bias-reduced and stable? From a list of 10 choices. Source: PwC 2021 AI Predictions. Base: 1,032.

Although there is a general definition of AI bias — AI that makes decisions that are systematically unfair to certain groups of people — there’s no universally accepted definition of “systematically unfair.” There are also few standard metrics for measuring fairness, leaving each company to reach its own definition of bias. In the absence of universal standards, and given the diversity of AI usage, each organization should determine what kinds of bias are most likely to skew the algorithms it uses. Companies may also have to assess what would potentially cause the most harm to their employees, customers, communities and business plans.
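As one illustration of how a company might operationalize its own working definition, the hedged sketch below (in Python, with hypothetical decision data) computes two commonly cited fairness measures, the demographic parity difference and the disparate impact ratio. Which measures and thresholds are appropriate is itself a judgment each organization has to make.

    # Illustrative sketch: two common (but not universal) fairness metrics
    # computed from model decisions and a protected attribute.
    import numpy as np

    def demographic_parity_difference(decisions, group):
        """Gap in positive-decision rates between two groups (0 and 1)."""
        return decisions[group == 0].mean() - decisions[group == 1].mean()

    def disparate_impact_ratio(decisions, group):
        """Ratio of positive-decision rates; the informal '80% rule' flags values below 0.8."""
        return decisions[group == 1].mean() / decisions[group == 0].mean()

    # Hypothetical loan decisions (1 = approved) for ten applicants in two groups.
    decisions = np.array([1, 1, 0, 1, 1, 0, 0, 1, 0, 0])
    group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

    print(demographic_parity_difference(decisions, group))   # 0.8 - 0.2 = 0.6
    print(disparate_impact_ratio(decisions, group))          # 0.2 / 0.8 = 0.25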

Another issue is that AI models likely use both new data and historical data, some of it reaching back decades. Yet the world is constantly evolving, and historical data sets seldom reflect today’s realities. Similarly, AI models trained on today’s data may not perform well in the future. The definition of bias is also evolving, so data sets and algorithms that have minimal bias today may be full of bias tomorrow.
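One way to catch this gap between historical and current data is a routine drift check. The Python sketch below computes a simple population stability index (PSI) for a single feature; the data, feature and alert level are hypothetical, and PSI is only one of several drift measures in use.

    # Illustrative sketch: flag when recent data has drifted away from the
    # historical training data. Thresholds and data here are hypothetical.
    import numpy as np

    def population_stability_index(historical, recent, bins=10):
        """Rough measure of how far a feature's recent distribution has drifted."""
        edges = np.histogram_bin_edges(np.concatenate([historical, recent]), bins=bins)
        hist_pct = np.histogram(historical, bins=edges)[0] / len(historical)
        recent_pct = np.histogram(recent, bins=edges)[0] / len(recent)
        hist_pct = np.clip(hist_pct, 1e-6, None)     # avoid division by zero / log(0)
        recent_pct = np.clip(recent_pct, 1e-6, None)
        return float(np.sum((recent_pct - hist_pct) * np.log(recent_pct / hist_pct)))

    rng = np.random.default_rng(1)
    training_income = rng.normal(50_000, 10_000, 5_000)   # decade-old training data
    current_income = rng.normal(58_000, 12_000, 5_000)    # today's applicants

    psi = population_stability_index(training_income, current_income)
    if psi > 0.2:   # a commonly used, but not universal, "significant drift" level
        print(f"PSI = {psi:.2f}: the data has drifted; revisit the model and its bias checks.")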

Non-governmental organizations (NGOs), universities and multilateral organizations around the world are working to better define AI bias and lay out principles and guidelines to help mitigate it. A few leading examples include the World Economic Forum, the IEEE, the G7, the OECD and MIT. Yet as even this very partial list shows, there are a lot of competing ideas. The world has yet to reach consensus and it probably never will. Defining and evaluating bias is simply too dependent on each organization’s algorithms and stakeholders.

Fortunately, even amid so much uncertainty, there are some steps that every organization can take right now. Together, they’ll help reduce the potential risks of biased AI to your business — and to society.

Toward trustworthy AI: Five measures to help reduce AI bias

AI has become sophisticated enough to support and make ever more important decisions. That requires rigorous and sustained action to help make it trustworthy, reducing bias and the possible risks such bias can bring. These measures can help:

Identify your unique vulnerabilities.

Banks, retailers and utilities all face different kinds of risks from potential AI bias: where it could creep into data sets and algorithms and where it could cause major damage. Determine your company’s unique vulnerabilities and define bias for your specific AI systems. Calculate the resulting financial, operational and reputational risks. Prioritize and focus your mitigation efforts where they will matter most.

Control your data.

Your traditional controls probably aren’t sufficiently robust to detect the particular problems that can cause AI bias. You should pay special attention to issues in historical data and in data acquired from third parties. Also beware of “proxies,” or seemingly neutral fields that correlate with protected attributes: even if, for example, you think you’re making your data “color blind,” ZIP codes used in an algorithm may correlate with race and still allow racial bias to creep in. Finally, if well designed, “synthetic data” — created to fill gaps in data sets — can help reduce bias.
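As one illustration of the proxy problem, the following hedged sketch (in Python, assuming a hypothetical applicants.csv with the columns shown) tests whether supposedly neutral inputs can predict a protected attribute; if they can, they are likely acting as proxies for it.

    # Minimal sketch of a proxy check: if supposedly neutral features can
    # predict a protected attribute, they may act as a proxy for it.
    # The file name and column names are hypothetical.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    df = pd.read_csv("applicants.csv")                           # hypothetical data set
    neutral_inputs = pd.get_dummies(df[["zip_code", "income", "tenure"]],
                                    columns=["zip_code"])        # "color blind" features
    protected = df["race"]                                       # excluded from the model itself

    # If these inputs predict race far better than the majority-class baseline,
    # they encode it indirectly and deserve mitigation (dropping, coarsening, reweighting).
    proxy_check = RandomForestClassifier(n_estimators=100, random_state=0)
    accuracy = cross_val_score(proxy_check, neutral_inputs, protected, cv=5).mean()
    print(f"Protected attribute predictable from 'neutral' inputs: {accuracy:.0%} accuracy")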

Govern AI at AI speed.

Increasingly, AI is always on and may draw on data from across the organization. Your governance should keep up: it should be continuous and enterprise-wide. It should include easily understandable frameworks and toolkits as well as common definitions and controls, so that both AI specialists and business users can follow the rules and help spot problems before they proliferate. A systematic approach to continuous management of AI is critical to building ongoing confidence in your AI risk posture.
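In practice, governing at AI speed often comes down to automated, recurring checks. The sketch below is one hypothetical example in Python: a scheduled job recomputes an approval-rate gap on recent decisions and alerts the risk team when it breaches a threshold the governance board has agreed. The column names, file path, threshold and alerting hook are all assumptions, not a prescribed design.

    # Illustrative sketch of a recurring governance check: recompute a
    # fairness measure on recent decisions and alert when it breaches an
    # agreed threshold. Names, path and threshold are hypothetical.
    import pandas as pd

    APPROVAL_GAP_THRESHOLD = 0.10     # maximum acceptable gap, set by governance

    def notify_risk_team(message: str) -> None:
        # Placeholder: connect to your ticketing, chat or incident system.
        print("ALERT:", message)

    def daily_fairness_check(decisions: pd.DataFrame) -> None:
        """Expects columns 'approved' (0/1) and 'group'; meant to run on a schedule."""
        rates = decisions.groupby("group")["approved"].mean()
        gap = rates.max() - rates.min()
        if gap > APPROVAL_GAP_THRESHOLD:
            notify_risk_team(f"Approval-rate gap of {gap:.2f} exceeds the agreed threshold; "
                             "review the model per the escalation playbook.")

    # Example run against one day's decision log (hypothetical file).
    daily_fairness_check(pd.read_csv("decisions_latest.csv"))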

Diversify your team.

Recognizing bias is often a matter of perspective, and people from different racial and gender identities and economic backgrounds will notice different biases. Building diverse teams helps reduce the potential risk of bias falling through the cracks. A diverse team will bring together data scientists and business leaders, as well as professionals with different educational backgrounds and experiences, such as lawyers, accountants, sociologists and ethicists. Each will have their own view of the threat of bias and how to help mitigate it.

Validate independently — and continuously.

Just as you would for any other major risk, add an additional, independent line of defense: either an independent internal team or a trusted third party with a proven methodology such as PwC’s Responsible AI team. This line of defense should continually analyze your data and algorithms for fairness. Technology tools such as Bias Analyzer can help automate this process as well as show the costs and benefits associated with a variety of possible mitigation actions.

Get help mitigating bias in your AI models

Learn about PwC Bias Analyzer


Mitra Best

Technology Impact Leader, PwC US


Anand Rao

Global AI Lead; US Innovation Lead, Emerging Technology Group, Boston, PwC US

