National AI policies
Governments are vying with each other to develop national AI strategies to attract and foster business investment and innovation, and to educate, train and build a skilled workforce now and into the future. AI policy-making requires a number of trade-offs that will ultimately be driven by societal values and by what each nation wants.
US city and state governments
Cities and states are focusing on application-specific regulation, instead of sweeping policies about AI as a whole. Take San Francisco—it was the first US city to ban the use of facial recognition technology by municipal departments as part of a broader anti-surveillance ordinance. Other cities in California, Massachusetts and Oregon have since taken similar actions and more are expected to follow. The move has inspired federal regulators to get in on the act as well.
Members of the 116th Congress introduced four pieces of legislation related to facial recognition in 2019. Congresswomen Yvette Clarke (D-NY), Ayanna Pressley (D-MA) and Rashida Tlaib (D-MI) recently introduced a bill that would protect public housing residents from biometric barriers, citing the decreased accuracy of facial recognition when used to identify people of color and women. Recent hearings in the House of Representatives highlighted the consensus between Republicans and Democrats around regulating certain technology that could be used to unfairly discriminate against some communities.
Bank of England
Powered by big data, banks are increasingly using machine learning (a form of AI) for anti-money-laundering monitoring, fraud detection and predicting mortgage defaults. AI could offer financial services firms faster, leaner operations, reduced costs and improved outcomes for customers. The Bank of England (BoE) is developing a framework that would help answer some of the explainability questions raised by machine learning applications—breaking open the technology's “black box.” This framework could be the first step on the road to regulation.
BoE teamed up with the UK’s Financial Conduct Authority (FCA) to survey UK financial institutions and see how they are really using the technology. FCA Executive Director of Strategy and Competition, Christopher Woolard, assured a London audience that financial services firms are not in a “crisis of algorithmic control,” but, regardless, firms need to be “cognisant of the need to act responsibly, and from an informed position.”
BoE guidance suggests placing priority on the governance of data, remembering that machine learning requires human intervention with the right incentives, and that increased execution risks come with the expanded use of AI. The framework around explainability aims to give clarity and transparency. For example, if a machine learning algorithm is used to deny a consumer a mortgage, banks need to be able to explain how that decision was reached.
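To make the explainability requirement concrete, here is a minimal sketch of how a lender might surface the factors behind an automated denial. It assumes a deliberately simple linear scoring model with hypothetical feature names and weights; real credit models, and whatever the BoE framework ultimately requires, are far more involved.

```python
# Hypothetical linear scoring model: feature name -> weight.
# Positive weights push toward approval, negative toward denial.
FEATURES = {
    "income_to_loan_ratio": 0.35,
    "credit_history_years": 0.05,
    "missed_payments": -0.60,
}

def explain_decision(applicant: dict, threshold: float = 0.5) -> dict:
    """Score an applicant and report which inputs drove the outcome."""
    # Each feature's contribution is simply weight * value.
    contributions = {name: FEATURES[name] * applicant[name] for name in FEATURES}
    score = sum(contributions.values())
    approved = score >= threshold
    # Rank factors from most adverse to most favorable.
    ranked = sorted(contributions.items(), key=lambda kv: kv[1])
    return {
        "approved": approved,
        "score": round(score, 3),
        "top_adverse_factors": [name for name, c in ranked if c < 0],
    }

applicant = {"income_to_loan_ratio": 2.0, "credit_history_years": 3, "missed_payments": 2}
print(explain_decision(applicant))
# -> denied, with "missed_payments" listed as the adverse factor
```

Because contributions are additive in a linear model, the "reason codes" fall out directly; for black-box models, post-hoc attribution methods serve the same purpose, which is exactly the gap the BoE framework targets.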
Federal Deposit Insurance Corporation
With a mission of maintaining stability and public confidence in the nation's financial system, the Federal Deposit Insurance Corporation (FDIC) is in the process of developing guidance for financial institutions on artificial intelligence and machine learning.
FDIC Chairwoman Jelena McWilliams said in August that she would prefer interagency cooperation in creating regulation around the technology, but that the FDIC would move forward regardless. “If our regulatory framework is unable to evolve with technological advances, the United States may cease to be a place where ideas become concepts and those concepts become the products and services that improve people's lives,” said McWilliams in an October speech. “The challenge for the regulators is to create an environment in which fintechs and banks can collaborate.”
American Civil Liberties Union
The American Civil Liberties Union (ACLU) was an early proponent of reining in technology like facial recognition, and as early as 2016 the group worked with cities to help them maximize public influence over decisions around the technology in an effort called Community Control Over Police Surveillance (CCOPS). The ACLU has since followed up with a study of Amazon’s Rekognition software that showed it misidentified people of color. In October, the ACLU sued the FBI, the DOJ and the DEA to obtain access to documents that would show how the US government is using facial recognition.
Institute of Electrical and Electronics Engineers
In March 2019, the Institute of Electrical and Electronics Engineers (IEEE) released guidelines for creating and using AI systems responsibly. The guidelines address personal data rights and legal frameworks for accountability, and establish policies for continued education and awareness.