How do you make artificial intelligence (AI) part of everyday operations, so it can help make those operations more automated and data-driven? For many companies, the answer requires rethinking parts of their organization. True AI leaders have an operating model, technology stack and ways of working that deploy, empower and scale AI — while also governing it effectively and allocating scarce resources to where they’ll do the most good.
Based on our survey of over 1,000 executives working with AI in organizations throughout the US, as well as the experiences and analyses of PwC’s AI and analytics specialists, here are three ways to help create an organization that can truly operationalize AI.
One factor that sets AI leaders apart is their AI operating model. Rather than segregating AI specialists and tools, they align AI operations with those for automation, data and analytics. If they already have centralized administration for these other technologies, then they also create an AI center of excellence or appoint an AI leader. If an existing data analytics or automation group is mature enough, they can “add on” AI to it. If their businesses are largely federated, they may choose to delegate AI strategy and governance to each line of business. The critical factor is choosing a model that will work well with the automation, analytics and IT teams that AI will both support and depend on.
Whether your AI operating model is centralized or delegated to individual business units, it should have three tiers. At the top comes a steering committee, with senior executives participating. This committee provides executive sponsorship, sets AI and analytics priorities, approves funding, provides strategic guidance and monitors progress toward objectives.
Next comes an operations committee. This committee contains representatives from the business, AI specialists, and technology and change management specialists. It helps to prioritize specific AI use cases, assign resources and provide guidance to the third tier: the execution teams. These teams meet regularly to develop and deploy AI in line with the operations committee’s instructions.
Critical to each of these three tiers is governance that can keep up with AI’s complex technology and ever-evolving data sets and algorithms. The three tiers then become three lines of defense: the steering committee at the level of overall priorities, the operations committee at the level of tactical quality assurance, and the execution teams during day-to-day delivery. At every line, these teams watch for compliance and business risks, as well as AI-specific risks such as algorithmic bias.
With clearly defined governance, roles and responsibilities, this three-tiered model can help turn AI into an engine, powering your organization to meet its top goals.
The top tech-related challenge for AI is not raw computational power. It’s a lack of labeled data that you can use to train your AI models. In many companies, legacy software can be an obstacle, rather than part of the solution — if it integrates poorly with your sources of data and your existing or potential AI.
To close that gap, you will likely need to upgrade your tech, but there’s no widely accepted tool stack for AI. There’s not even a standard interface for integrating technology tools for AI. Still, there are good options out there. To choose the right one — whether you build or buy, go to the cloud or keep it all on premises — focus on these two imperatives: integration and data. Some tech platforms, for example, ingest data from internal and third-party sources that range from application programming interfaces (APIs) to PDFs, standardize that data and help verify its accuracy and regulatory compliance. That can’t entirely eliminate the need for your subject matter experts to label data for AI, but it can help.
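To make the standardization step concrete, here is a minimal sketch of what mapping differently shaped source records onto one common schema can look like. The field names, sources and validation rules are all hypothetical, not any particular platform’s API:

```python
from dataclasses import dataclass

@dataclass
class Record:
    """One canonical record, regardless of where the data came from."""
    source: str
    customer_id: str
    amount: float

# Hypothetical field mappings; each real source would need its own.
FIELD_MAP = {
    "api": {"id": "customer_id", "amt": "amount"},
    "pdf_extract": {"CustomerID": "customer_id", "Amount": "amount"},
}

def standardize(raw: dict, source: str) -> Record:
    """Rename source-specific fields to the canonical schema and
    apply basic type and sanity checks."""
    mapped = {canon: raw[orig] for orig, canon in FIELD_MAP[source].items()}
    amount = float(mapped["amount"])  # type validation
    if amount < 0:
        raise ValueError(f"negative amount from {source}: {amount}")
    return Record(source=source, customer_id=str(mapped["customer_id"]), amount=amount)
```

The point of the sketch is the pattern, not the code: every source gets its own mapping, but everything downstream (labeling, training, compliance checks) sees a single schema.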
With technology tools that help you overcome your data challenges, you can operationalize AI much faster and far more cost-effectively. The end result can be an AI model factory: hundreds or thousands of AI models, with automated “pipelines” to continuously integrate, deploy and learn from new data. With data intake and model retraining largely automated, you’ll have steadily improving data powering more subtle and accurate algorithms.
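One stage of such a pipeline can be sketched as follows. This is an illustrative pattern, not a specific platform’s API: retrain a model once enough new labeled data has accumulated, and promote the retrained version only if it beats the deployed one on evaluation:

```python
RETRAIN_THRESHOLD = 1_000  # illustrative: new labeled examples before retraining

def maybe_retrain(model_id, new_examples, current_score, train, evaluate, deploy):
    """One automated stage of a hypothetical model factory pipeline.

    `train`, `evaluate` and `deploy` are stand-ins for whatever your
    platform provides; the logic is the transferable part.
    Returns the score of whichever model is now deployed.
    """
    if len(new_examples) < RETRAIN_THRESHOLD:
        return current_score          # not enough new data yet
    candidate = train(model_id, new_examples)
    score = evaluate(candidate)
    if score > current_score:         # promote only on measured improvement
        deploy(model_id, candidate)
        return score
    return current_score              # keep the existing model
```

Run across hundreds of models on a schedule, a loop like this is what turns “automated pipelines” from a slogan into a factory.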
Consider too how potential AI technology tools align with your existing technology choices and sources of data. If, for example, a vendor is already helping you process your data, and your key employees are already trained on its systems, it may be best to stick with them — so long as they support both the AI tools you need and the flexible architecture to change up those tools as needed.
Like most technologies, AI has a life cycle. It usually begins with a proof of concept, moves on to a pilot that proves real-world viability, then scales up to achieve real and (ideally) continually increasing value. Unlike most technologies, AI usually requires new roles for this life cycle.
When a proposed AI solution is at the proof-of-concept or the pilot-and-prove stages, it requires continuing collaboration among engineers, data scientists and the business. Centralized administration and governance, along with cross-functional task forces, will help foster this collaboration — but it won’t be enough. Engineers and data scientists trained in AI usually can explain their needs to each other. But they and subject matter experts in the business or back office may struggle to communicate and unite around a solution.
That’s why many AI leaders have AI architects in their AI operating and execution groups. Like solutions architects in traditional app development, AI architects are responsible for creating a technical solution that meets the business’s needs. They help assess what data an AI model will need, which experiments can best train and develop that model, what retraining and monitoring will be required, and how to integrate the model into an application or business process.
AI architects may be traditional solution architects with solid data science expertise, or data scientists with a background in software engineering. They are also often the ones best placed to identify the highest-value use cases for AI, helping to best allocate scarce resources.
Once a potential use case has successfully passed through the proof-of-concept and pilot-and-prove stages, it’s time to make it an actual and valuable AI solution: an asset that scales, is fault-tolerant, runs on your chosen platform, and can meet deployment needs. For this step, two additional roles come into play.
The first role is a machine learning engineer, with both data science and software engineering skills. The second is a model operations specialist, who manages post-deployment model performance. Together, these two roles supervise and integrate data, AI models, and supporting software throughout the AI life cycle. These specialists also help drive the scientific, experimental mindset that AI needs: one in which hypotheses are continually challenged and models are continually improved.
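What “managing post-deployment performance” means in practice can be sketched in a few lines. This is a hedged illustration, not a prescribed method; the tolerance and the accuracy-based trigger are assumptions, and real monitoring would also track inputs for drift:

```python
def needs_retraining(baseline_accuracy: float,
                     recent_correct: int,
                     recent_total: int,
                     tolerance: float = 0.05) -> bool:
    """Flag a deployed model for retraining when its live accuracy
    falls more than `tolerance` below the accuracy measured at
    deployment time. Threshold of 0.05 is an illustrative default.
    """
    if recent_total == 0:
        return False  # no traffic yet; nothing to judge
    live_accuracy = recent_correct / recent_total
    return live_accuracy < baseline_accuracy - tolerance
```

A model operations specialist wires checks like this into alerting, so that the experimental loop — challenge the hypothesis, retrain, redeploy — runs continuously rather than whenever someone remembers to look.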