The AI performance gap widened during the COVID-19 crisis. How do you measure up?
Although most companies are only experimenting with AI, leaders who embed it more fully gain a wide range of benefits.
We surveyed 1,018 global executives to better understand the application and impact of AI adoption through the COVID-19 crisis. Respondents were split fairly evenly between those saying that COVID-19 had a negative impact on their business and those who reported a positive impact, with larger companies (those with revenues exceeding US$10bn) more likely to have experienced benefits. Nearly four in ten of these larger companies had invested more in AI development before the pandemic began and were moving from testing to operational use of AI. They also reported a return on AI investment during the pandemic and were significantly more likely to increase their use of AI, to explore new use cases for AI, and to train more employees to use AI.
We found this was also true for smaller companies that had heavily invested in AI prior to the pandemic. What’s more, an in-depth examination of AI dynamics in India showed that early adopters benefitted from better decision-making using AI, leading to enhanced employee and customer health and safety during the pandemic. That research also showcased other benefits, such as productivity improvement and design innovation through the application of AI-enabled tools. (For more, see AI: An opportunity amidst a crisis.)
The overall picture is of a virtuous cycle for those that invested heavily in AI pre-COVID-19, one that tends to widen the intelligence gap. Organisations with more mature AI adoption increased AI usage during the pandemic by 57%—more than twice the increase of early-stage implementers—and they plan to increase investment and adoption going forward. A downward cycle, by contrast, afflicts companies that didn’t invest, are performing poorly and are struggling to find funding for AI. A good place to start in reversing this dynamic is to gain a better understanding of the impact of AI efforts. Leading companies create targeted measures of ROI on AI, are better able to fully articulate use cases and align them with these ROI metrics, and thus achieve greater buy-in from senior leadership.
The ability to operationalise AI effectively—what we call AI maturity—will be key to both maintaining progress among leaders and closing the gap for laggards. Our survey allowed us to group companies into three levels of AI maturity: those with fully embedded AI (25% of respondents), companies at the experimental stage of AI implementation (55%) and companies still exploring AI without having implemented anything (20%).
Embedding leadership. Those that had fully embedded AI typically had done so across their business processes and with widespread adoption. Many of these companies had ten or more AI applications in deployment, ranging from customer-focused applications (such as chatbots and conversational systems, demand forecasting and customer targeting) to back-office applications, including contract analysis, invoice processing and risk management. Others had deployed five or more AI applications. Not surprisingly, larger companies were more likely (nearly 34%) to have fully embedded AI. Reinforcing our findings on benefits, companies with fully embedded AI outperformed their counterparts on returns during the pandemic, and are also investing more in AI as they look ahead to further improvements in the post-pandemic world.
Gaining scale to capture returns. Fully embedding AI across the enterprise and across all functional areas is a significant challenge. As companies move from building standalone models (as an AI foundation), to capturing value by using AI to better foresee changing business conditions (through prediction-as-a-service tools), to exploiting the full power of AI by automating and tracking operations in model factories and beyond, they will need to invest in a range of capabilities, including:
domain experts from business units to articulate use cases
data engineers and data scientists who understand how information flows and can build machine-learning models
systems analysts and software developers who can build software systems, along with machine-learning engineers who can optimise models for added value
ModelOps, DataOps and DevOps specialists who can maintain embedded AI models
governance and ethics support initiatives to enable effective stewardship over these systems.
Bringing together talent, processes and models, as well as the agility to adjust AI systems as needed, is key to locking in scale. As our research in India has shown, those skills will allow companies to target the most promising business use cases, ease the transition from pilots to broad implementation, and deliver AI’s promised strategic benefits of growth and resilience. That same work also suggests that successful companies can strengthen their competitive advantage by more effectively personalising customer experiences, putting in place tools for dynamic pricing, employing automated intelligence systems that safeguard against fraud, and embracing virtual assistants to leverage employee knowledge and skills.
As companies gain momentum in deploying AI models and systems at scale, we have seen another divide appear: differing capabilities for identifying, mitigating and managing AI risks. These risks cross areas such as bias in hiring models, customer privacy, transparency in AI use (requiring both accountability and the explainability of processes and results), and security of data and systems. In our survey, only 12% of companies (and 29% of those with deeply rooted AI approaches) had managed to fully embed AI risk-management and controls and automate them sufficiently to achieve scale. Another 37% of respondents reported strategies and policies in place to tackle AI risks.
When we asked about the specifics of risk-management strategy, we found that algorithmic bias in modelling (often involving race or gender) is a central focus of nearly 36% of all respondents and close to 60% of companies that have fully embedded AI. Reliability and robustness of models, security, and data privacy are among other AI risks more prominently addressed by companies that have successfully scaled their AI efforts.
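To make the bias concern above concrete, the sketch below computes one common fairness check, the demographic parity difference: the gap in positive-outcome rates between applicant groups in a hiring model. This is an illustrative example only; the function, group labels and data are hypothetical and are not drawn from the survey.

```python
# Illustrative sketch (hypothetical data): checking a hiring model's
# decisions for demographic parity, one simple algorithmic-bias metric.

def demographic_parity_difference(outcomes, groups):
    """Gap between the highest and lowest positive-outcome rate across groups.

    outcomes: list of 0/1 model decisions (1 = favourable, e.g. offer interview)
    groups:   list of group labels, aligned with outcomes
    """
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(group_outcomes) / len(group_outcomes)
    ordered = sorted(rates.values())
    return ordered[-1] - ordered[0]

# Hypothetical decisions for ten applicants in two groups
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(outcomes, groups)
print(f"Demographic parity difference: {gap:.2f}")
```

A gap near zero suggests the model favours groups at similar rates; a large gap is a signal to investigate further. In practice, companies scaling AI typically track several such metrics (along with robustness and privacy checks) rather than relying on any single number.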
Managing the full range of risk across the AI horizon will require better tools, beginning with a responsible AI framework for assessing needed steps, and the ability to conduct proper AI risk assessment. With those elements as a foundation, companies will find it easier to embed leading practices and governance as they build, deploy and monitor AI software and use it for decisions. Starting this journey sooner rather than later will enable leaders to gain the trust of customers and better navigate coming regulatory changes. Doing so will also extend the competitive advantages these leaders are enjoying from AI.