New structures for responsible AI
As organizations face pressure to design, build, and deploy AI systems that both deserve and inspire trust, many will establish teams and processes to detect bias in data and models and to monitor the ways malicious actors could "trick" algorithms. AI governance boards may also be appropriate for many enterprises.
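One concrete check such a team might automate is a demographic parity audit: comparing a model's positive-prediction rates across groups. The sketch below is illustrative only; the function name, data, and the 0.1 tolerance threshold are assumptions, not part of any standard.

```python
# Minimal sketch of a fairness check a responsible-AI team might run:
# compare positive-prediction rates across groups (demographic parity).
# All names, data, and the threshold are hypothetical.

def demographic_parity_difference(predictions, groups):
    """Return the max gap in positive-prediction rate between groups.

    predictions: list of 0/1 model outputs
    groups: list of group labels, same length as predictions
    """
    counts = {}
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    rates = [pos / total for total, pos in counts.values()]
    return max(rates) - min(rates)

# Hypothetical audit: group A gets positives at 0.75, group B at 0.25.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, groups)
print(f"demographic parity gap: {gap:.2f}")  # prints 0.50
if gap > 0.1:  # assumed tolerance threshold for this illustration
    print("flag model for review")
```

In practice a team would run such checks continuously in a monitoring pipeline, alongside tests for robustness against adversarial inputs.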
Public-private partnerships and public-citizen partnerships
One of the best ways to use AI responsibly is through collaboration between public- and private-sector institutions, especially on AI's societal impact. Likewise, as more governments explore using AI to deliver services efficiently, they are engaging citizens in the process. In the UK, for example, the RSA (Royal Society for the encouragement of Arts, Manufactures and Commerce) is convening a series of citizen juries on the use of AI and ethics in criminal justice and democratic debate.
Self-regulatory organizations to facilitate responsible innovation
Because regulators may struggle to keep pace, and self-regulation has its limits, self-regulatory organizations (SROs) may take the lead on responsible AI. An SRO would bring users of AI together around shared principles, then oversee and enforce compliance, levy fines as needed, and refer violations to regulators. The model has worked in other industries, and it may well do the same for AI and other emerging technologies.