There has been much talk recently about the importance of environmental, social and governance (ESG) initiatives — and rightfully so. A growing number of businesses now recognize the imperative to prioritize people and the planet ahead of profits.
Companies are also harnessing the power of AI while recognizing its potential harms, using that awareness as a motivator to institute responsible AI development, procurement and usage practices. These two trends, ESG and responsible AI (RAI), share common purposes: both are aligned with values designed to mitigate risks and realize potential.
Both initiatives require companies to ask some key questions.
While ESG and RAI share a common thread, they may be promoted by different groups within an organization. For example, responsible AI may be driven by technical leadership, whereas ESG initiatives may originate on the corporate social responsibility (CSR) side of the business. Their commonalities and shared purpose matter, however, because making effective progress on either effort requires that the two initiatives be aligned.
AI systems may pose a significant threat to sustainability goals due to the heavy computing power needed to train large neural networks. By understanding this impact and instituting practices that prefer smaller models — which often are easier to understand and interpret — companies can potentially reap benefits for their ESG sustainability measures.
Understanding where to align AI development in order to support environmental initiatives could advance both RAI and ESG goals.
Furthermore, using AI to identify sustainability improvements in various areas of the business — including managing data center cooling and improving the operations of supply chains to help reduce waste — responds to many employees' requests to use AI for societal good.
A common concern around AI focuses on people. Are they treated fairly, or are existing societal inequities being replicated or even amplified? How is the company using individuals' data? How is customers' privacy protected? Is a potential technology purchase designed to be human-centered? What impact have new technologies had on stakeholders, including customers, employees and society?
With 36% of executives we surveyed saying algorithmic bias is a primary risk area, there is an indication that many businesses have already acknowledged the societal risks posed by AI and are looking to mitigate them by designing technology aligned with key values such as fairness, explainability, privacy and beneficence. In this way, companies can use AI systems to make progress toward social goals while also mitigating harms that could impact the societal elements of ESG. Implementing RAI helps incorporate the ethical principles set forth by organizations and governments.
Instituting a holistic approach to governance involves processes, policies and standards. It also means engaging development team members who promote governance that is tech-enabled rather than simply tech-first. Effective governance of AI manages for impact by considering shifting regulatory requirements and emerging organizational approaches. Responsible technology practices require effective and agile governance, both within an organization and across the regulatory and public policy landscape.
The questions that ESG’s priorities raise for business are often similar to the questions that arise in the RAI space. How can success or failure be measured? How can a company enable effective change management that supports innovation aligned to the values of the organization? Who is responsible for owning progress in the RAI or ESG space? Finally, how can a business monitor and audit results in these areas?
When companies align their ESG and RAI — or, more broadly, responsible tech — initiatives, they can achieve synergies of shared resources and capabilities; more efficient prioritization of actions that build on a common core set of training and change management processes and reporting capabilities; and greater alignment across objectives.
Leaders of ESG initiatives may include those working in the sustainability office, operations officers, human capital leaders, supply chain executives and others. Leaders of responsible tech typically come from more technical portions of a business, such as the chief analytics officer, chief data officer or — due to emerging regulations around AI and data — a privacy and compliance officer.
Establishing an open, trusted channel of communication between these groups can help uncover related initiatives that amplify both: prioritizing AI to advance sustainability efforts, consolidating reporting efforts, and allowing ESG, RAI and even AI return on investment (ROI) to be presented in a common forum. Identifying areas of collaboration and mutual benefit can provide more value to the organization and more visibility into the work done by the ESG and RAI teams. This increased focus spotlights the benefit of incorporating both ESG and RAI into "business as usual," and strengthens the momentum both have across strategic and operational initiatives.
Effective governance addresses risks and harms without stifling innovation. This balance can be difficult to achieve, which is why engaging with the teams that will be governed — specifically, those creating technology for internal or external consumption so they understand the benefits of governance and can right-size it to address the need — goes a long way toward achieving adoption. For instance, helping development teams understand the carbon footprint of the models they are building may push these teams to consider architectures that are simpler, more environmentally friendly and potentially easier to explain.
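To make the carbon-footprint conversation concrete for development teams, a back-of-the-envelope estimate is often enough to compare architectures. The sketch below is a minimal, hypothetical calculation; the function name, the default per-GPU power draw, the data-center power usage effectiveness (PUE) and the grid carbon intensity are illustrative assumptions, not figures from the article, and real values vary widely by hardware, facility and region.

```python
# Rough, illustrative training-footprint estimate.
# All default figures below are hypothetical placeholders; substitute
# measured values for your hardware, data center and electricity grid.
def training_co2_kg(gpu_hours, gpu_power_kw=0.3, pue=1.5,
                    grid_kg_per_kwh=0.4):
    """Estimate the CO2 emissions (kg) of a training run.

    energy (kWh) = GPU-hours x per-GPU power (kW) x data-center PUE
    emissions    = energy x grid carbon intensity (kg CO2 per kWh)
    """
    energy_kwh = gpu_hours * gpu_power_kw * pue
    return energy_kwh * grid_kg_per_kwh

# Comparing a large model against a smaller, simpler alternative:
large_model = training_co2_kg(10_000)  # 10,000 GPU-hours -> 1,800 kg CO2
small_model = training_co2_kg(500)     # 500 GPU-hours   -> 90 kg CO2
```

Even this crude arithmetic makes the trade-off visible: a model that needs an order of magnitude fewer GPU-hours carries an order of magnitude smaller footprint, which is the kind of signal that can nudge teams toward simpler, more explainable architectures.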
Technology often enters a company through procurement, but procurement teams may not be in a position to evaluate complex software, especially AI models and algorithms. Requiring these teams to conform to ESG guidelines, as well as to identify ethically developed technology, is likely to be a challenge. Investing in ethical technology requires an appreciation of how that technology was developed, with what data, and how it will be managed and maintained. These questions are especially important when considering systems that use AI to evaluate mass quantities of data to generate predictions or recommendations.
As all of us become more knowledgeable about the societal and environmental impacts of the technology we create, sell, use and maintain, we can benefit from aligning our management of these technologies and their impacts, including responsible AI, with the management of related spaces, including ESG.