
Top Policy Trends 2020: Artificial Intelligence

Shifts in 2020

Regulators are starting to translate principles into policy frameworks to manage the unintended consequences of artificial intelligence. Nations and businesses are making huge investments in building AI talent and infrastructure, causing policymakers to realize they need to move quickly to address negative impacts that have already surfaced, as well as those still to be discovered. China, for instance, is spending heavily to build up a $150 billion domestic AI industry. At last count, more than 30 countries have adopted or are developing national AI strategies.

AI regulation is a balancing act: how to encourage innovation while also protecting citizens and society from negative effects like algorithmic bias and intrusions into privacy. In 2020, city, state and national authorities will explore rules to make machine learning accountable, explainable, ethical, fair and safe.

55% of respondents are “very actively” looking to shape policies surrounding regulation of emerging technology.

Source: PwC Election 2020 Poll, November 2019

The seven influencers

National AI policies

Governments are vying with each other to develop national AI strategies to attract and foster business investment and innovation, and to educate and train a skilled workforce now and into the future. AI policy making requires a number of trade-offs that will ultimately be driven by societal values and what each nation wants.


US city and state governments

Cities and states are focusing on application-specific regulation, instead of sweeping policies about AI as a whole. Take San Francisco—it was the first US city to ban the use of facial recognition technology by municipal departments as part of a broader anti-surveillance ordinance. Other cities in California, Massachusetts and Oregon have since taken similar actions and more are expected to follow. The move has inspired federal regulators to get in on the act as well.


US Congress
Members of the 116th Congress introduced four pieces of legislation related to facial recognition in 2019. Congresswomen Yvette Clarke (D-NY), Ayanna Pressley (D-MA) and Rashida Tlaib (D-MI) recently introduced a bill that would protect public housing residents from biometric barriers, citing the decreased accuracy of facial recognition when used to identify people of color and women. Recent hearings in the House of Representatives highlighted the consensus between Republicans and Democrats around regulating certain technology that could be used to unfairly discriminate against some communities.


Bank of England

Powered by big data, banks are increasingly using machine learning (a form of AI) for anti-money-laundering monitoring, fraud detection and predicting mortgage defaults. AI could offer financial services firms faster, leaner operations, reduced costs and improved outcomes for customers. The Bank of England (BoE) is developing a framework that would help answer some of the explainability questions present in machine learning applications—breaking open the technology's “black box.” This framework could be the first step on the road to regulation.

BoE teamed up with the UK’s Financial Conduct Authority (FCA) to survey UK financial institutions and see how they are really using the technology. FCA Executive Director of Strategy and Competition, Christopher Woolard, assured a London audience that financial services firms are not in a “crisis of algorithmic control,” but, regardless, firms need to be “cognisant of the need to act responsibly, and from an informed position.”

BoE guidance suggests prioritizing data governance, remembering that machine learning still requires human oversight with the right incentives, and recognizing that expanded use of AI brings increased execution risk. The framework around explainability aims to provide clarity and transparency. For example, if a machine learning algorithm is used to deny a consumer a mortgage, the bank needs to be able to explain how that decision was reached.
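To make the explainability requirement concrete, here is a minimal sketch of one way a lender might attach per-feature contributions to a decision. The model is a hypothetical logistic regression; the feature names, weights and approval threshold are illustrative assumptions, not any bank's actual system or the BoE framework itself.

```python
import math

# Hypothetical weights, as if learned from historical lending data.
WEIGHTS = {"debt_to_income": -4.0, "credit_score_norm": 3.0, "loan_to_value": -2.0}
BIAS = 1.0
APPROVAL_THRESHOLD = 0.5  # illustrative cutoff on approval probability

def score(applicant):
    """Logistic-regression probability that the loan is approved."""
    z = BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def explain(applicant):
    """Rank features by how strongly each pushed the decision down.

    For a linear model, each feature's contribution to the score is
    simply weight * value, which gives a human-readable reason code.
    """
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sorted(contributions.items(), key=lambda kv: kv[1])

applicant = {"debt_to_income": 0.55, "credit_score_norm": 0.4, "loan_to_value": 0.9}
p = score(applicant)
decision = "approve" if p >= APPROVAL_THRESHOLD else "deny"
for feature, contribution in explain(applicant):
    print(f"{feature}: {contribution:+.2f}")
print(f"probability={p:.2f} -> {decision}")
```

For this applicant the sketch denies the loan and reports that a high debt-to-income ratio was the largest negative contributor—the kind of reason-code output a regulator could ask a bank to produce. Real systems with nonlinear models need more sophisticated attribution methods, but the reporting obligation is the same.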


Federal Deposit Insurance Corporation

With a mission of maintaining stability and public confidence in the nation's financial system, the Federal Deposit Insurance Corporation (FDIC) is in the process of developing guidance for financial institutions on artificial intelligence and machine learning.

FDIC Chairwoman Jelena McWilliams said in August that she would prefer interagency cooperation in creating regulation around the technology, but that the FDIC would move forward regardless. “If our regulatory framework is unable to evolve with technological advances, the United States may cease to be a place where ideas become concepts and those concepts become the products and services that improve people's lives,” said McWilliams in an October speech. “The challenge for the regulators is to create an environment in which fintechs and banks can collaborate.”


American Civil Liberties Union

The American Civil Liberties Union (ACLU) was an early proponent of reining in technology like facial recognition, and as early as 2016 the group worked with cities to help them maximize public influence over decisions around the technology in an effort called Community Control Over Police Surveillance (CCOPS). The ACLU has since followed up with a study of Amazon's Rekognition software that showed it misidentified people of color. In October, the ACLU sued the FBI, DOJ and DEA to obtain access to documents that would show how the US government is using facial recognition.


Institute of Electrical and Electronics Engineers

In March 2019, the Institute of Electrical and Electronics Engineers (IEEE) released guidelines for creating and using AI systems responsibly. The guidelines address personal data rights and legal frameworks for accountability, and establish policies for continued education and awareness.


31% of respondents looking to shape emerging technology policy plan to publish a position paper.

Source: PwC Election 2020 Poll, November 2019

How to prepare for the shift

Companies are acutely aware of the risk of AI policy that could stifle innovation and their ability to grow in the digital economy. For example, regulation that clamps down heavily on consumer data could mean that businesses are unable to properly train their algorithms.

It’s in a company’s interest to tackle risks related to data, governance, outputs, reporting, and machine learning and AI models ahead of regulation. Do those who build and operate the AI system in the company take steps towards explainability, and do they focus on those affected by it? Is the company addressing larger issues around data and tech ethics through collaboration with customers, industry peers, regulators and tech companies?

Business leaders need to bring together people from across the organization to oversee accountability and governance of the technology. Oversight should come from a diverse team that includes people with business, IT and specialized AI skills. Instituting a governance group that represents all parts of the organization helps prevent duplicate or incompatible efforts.

Since AI’s risks vary depending on the technology and its application, rigid rules won’t work. It’s better to create a playbook to guide users through multiple possibilities while ensuring enterprise-wide consistency. The playbook should include principles for governing data, as well as for opportunities the company isn’t yet sure how to use.

Contact us

Anand Rao

Global AI Lead; US Innovation Lead, Emerging Technology Group, PwC US
