Algorithmic impact assessments: What are they and why do you need them?


Ilana Golbin

Director and Responsible AI Lead, PwC US


Artificial intelligence (AI) has accelerated innovation across industries, in the process reinventing the way we do business. But what happens when an organization’s governance practices don’t evolve in tandem with its AI initiatives?

That’s when facial recognition tools exhibit racial bias, autonomous vehicles go rogue and targeted ads violate civil rights law. While these incidents underscore our increased reliance on AI to make critical decisions, they also highlight the need to manage AI risks and adopt responsible, ethical AI practices.

In response, academics, non-governmental organizations (NGOs) and some policymakers recommend the adoption of algorithmic impact assessments (AIAs). 

Designed to evaluate the end-to-end AI life cycle, AIAs provide significant details on AI systems and their impact.

Impact assessments are nothing new for many companies. But to properly govern these diverse systems, assessments should be dynamic in structure and enable modifications that suit an organization’s specific environment.

There is currently no agreed-upon approach to impact assessments. However, one approach is to treat AIAs as extensions of data privacy impact assessments (DPIAs), which are commonly used to address data privacy concerns and to comply with the EU’s General Data Protection Regulation (GDPR). Supported by enhanced governance systems, these impact assessments evaluate potential benefits, risks and remediation processes.

When personal data is being processed, for instance, a privacy impact assessment can be triggered under GDPR, and “risky” processing of personally identifiable information may require a DPIA. In some settings, however, AI can make non-personal data identifiable. Take a user’s social media “likes.” While not considered personal data under GDPR, “likes” can be used to infer a user’s gender, sexuality, age, race and political affiliations. Even so, a DPIA may not be required for this use case, which points to the need for an AIA to bridge the gap.

Four goals of algorithmic impact assessments

Algorithmic impact assessments, which go even further than DPIAs, are designed to achieve four main goals.

  1. Capture an AI system’s risk. Establishing “risk gating criteria” enables organizations to properly classify the level of scrutiny needed for a specific AI application. For example, a simple workflow automation project is likely to create fewer potential risks than an application that uses facial recognition data. Risk criteria may be used to assign governance tasks proportional to a particular risk and may even require additional scrutiny by more senior approvers prior to deployment (see the sketch after this list). 
  2. Cover full development life-cycle requirements. An AIA should encompass strategy and planning, ecosystem analysis, implications for model development, issues related to training data, deployment and, finally, ongoing operation, monitoring and governance. By addressing a full range of governance and impact areas, organizations can incorporate ethics into existing compliance exercises with minimal additional burden. 
  3. Assess impact and increase accountability through a multi-stakeholder analysis. A successful impact assessment engages a broad range of internal stakeholders and may also include external representatives, such as ethics or data review boards. These engagements, which recognize the complex influence an AI system can have, should generate discussions on both business and societal impacts. These assessments also aim to address benefits and risks, the likelihood of both benefits and risks materializing, and the effectiveness of controls designed to help reduce risks. 
  4. Facilitate go/no-go decisions. This goal should address whether a model should move to production, determine if it’s ready to be transitioned to business-as-usual operations and decide whether it should continue as-is or be retrained, redesigned or retired. Here an organization may determine system auditability, additional legal requirements and the effectiveness of controls, as well as confirm that it has achieved an appropriate balance of benefits and mitigated risks.
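
To make the first and fourth goals concrete, here is a minimal sketch of how risk gating criteria and proportional governance tasks might be encoded. The risk factors, tiers and task lists below are illustrative assumptions, not an established framework; your own criteria would be set by your governance and compliance teams.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # e.g., simple workflow automation
    MEDIUM = "medium"  # e.g., forecasting with indirect customer impact
    HIGH = "high"      # e.g., facial recognition, automated credit decisions

@dataclass
class AIUseCase:
    # Illustrative risk factors; real gating criteria would come from
    # your organization's governance policy.
    name: str
    uses_personal_data: bool
    makes_automated_decisions: bool
    affects_vulnerable_groups: bool
    uses_biometric_data: bool

def classify_risk(use_case: AIUseCase) -> RiskTier:
    """Apply simple gating criteria to assign a risk tier."""
    if use_case.uses_biometric_data or use_case.affects_vulnerable_groups:
        return RiskTier.HIGH
    if use_case.uses_personal_data and use_case.makes_automated_decisions:
        return RiskTier.HIGH
    if use_case.uses_personal_data or use_case.makes_automated_decisions:
        return RiskTier.MEDIUM
    return RiskTier.LOW

# Governance tasks scale with the assigned tier; higher tiers add
# scrutiny, up to senior-approver sign-off before deployment.
GOVERNANCE_TASKS = {
    RiskTier.LOW: ["document use case", "standard model review"],
    RiskTier.MEDIUM: ["document use case", "standard model review",
                      "bias and performance testing"],
    RiskTier.HIGH: ["document use case", "standard model review",
                    "bias and performance testing",
                    "multi-stakeholder impact review",
                    "senior-approver sign-off before deployment"],
}

if __name__ == "__main__":
    use_case = AIUseCase(
        name="facial recognition for building access",
        uses_personal_data=True,
        makes_automated_decisions=True,
        affects_vulnerable_groups=False,
        uses_biometric_data=True,
    )
    tier = classify_risk(use_case)
    print(f"{use_case.name}: {tier.value} risk")
    for task in GOVERNANCE_TASKS[tier]:
        print(f"  - {task}")
```

A production version would likely pull these criteria from a governance policy store and log each classification for auditability, but even this simple structure makes the gating logic explicit and reviewable.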

Capitalizing on existing assessments

Because AIAs can be modeled on existing frameworks in data protection, privacy and human rights policies, they may represent an augmented assessment rather than an entirely new process. Impact assessments are likely not new to your organization, so you can use existing assessments as a foundation and build on them. As part of the process, you should ask relevant questions, including: What are the societal and reputational implications of not evaluating these cases? How can we conduct a thorough assessment of an AI system without overburdening the organization?

What’s important to remember is that an AIA can provide essential details on AI systems and their impact, while also helping you manage AI risk and adopt responsible, ethical AI practices.

In our next blog, we’ll look more closely at the AI life cycle and how AIAs come into play.


