1. AI will impact employers before it impacts employment
AI likely won’t devastate the job market in the long run—and it certainly won’t do so in 2018. But organizations face a challenge: AI works best when it brings together data and teams from different disciplines. It also requires structures and skills for human-machine collaboration.
But most organizations keep data locked away and teams in silos. Few have started giving employees the basic AI skills they’ll need. The average enterprise isn’t ready for what AI is about to demand of it.
2. AI will come down to earth—and get to work
It may not attract media headlines, but AI is ready right now to automate increasingly complex processes, identify trends to create business value, and provide forward-looking intelligence. This AI is often “entering through the backdoor” as everyday applications incorporate it.
The result is less busywork for humans and better strategic decisions: employees working better than before. But since traditional ROI measures may not capture this value, organizations will want to consider new ones to better understand what AI can do for them.
3. AI will answer the big question about data
Many investments in data technology and integration have failed to answer the big question: Where’s the ROI? But AI is now delivering business cases for data initiatives, and new tools are making these initiatives more affordable than before.
Organizations no longer need to begin with a wholesale “clean up the data” effort—nor should they. They should start with a business problem and quantify the benefits of AI first. Once data is used to solve one specific problem, further data-driven AI solutions become easier, and a virtuous cycle can begin. The catch? Some organizations are still struggling with data fundamentals.
4. Functional specialists, not techies, will decide the AI talent race
There’s a bidding war right now for computer scientists, but top tech talent alone is not enough for AI success. Organizations need domain experts who can work alongside both AI systems and AI specialists. These experts won’t have to be programmers, but they will have to understand the basics of data science and data visualization, and something of how AI “thinks.”
As AI leaves the computer lab and enters everyday work processes, these domain experts will be even more important than computer scientists. Many functional specialists will need to upskill appropriately.
5. Cyberattacks will be more powerful because of AI—but so will cyberdefense
Intelligent malware and ransomware that learns as it spreads, machine intelligence coordinating global cyberattacks, advanced data analytics to customize attacks—unfortunately, it’s all on its way.
Organizations can’t bring a knife to a gunfight. They’ll have to fight AI with AI. Since even AI-wary organizations will have no choice but to deploy AI cyberdefense, cybersecurity will be many enterprises’ first foray into AI.
6. Opening AI’s black box will become a priority
AI spinning out of control isn’t a danger for 2018. It’s not smart enough right now. But AI that acts inexplicably—and therefore makes leaders and consumers wary of using it—is a real risk.
Pressure will grow to open up “black boxes” and make AI explainable. But that involves trade-offs in cost and performance. Enterprises need frameworks to assess business, performance, regulatory, and reputational concerns as they decide the right level of AI explainability.
7. Nations will spar over AI
AI is a gigantic opportunity, and many governments are working to make sure that their countries get a big piece of the pie. Canada, Japan, the UK, Germany, and the UAE all have national AI plans. Tax reform and deregulation in the US may give AI a boost there as well.
China stands apart in how it’s prioritizing AI for its economic future. Its efforts are already bearing fruit and may lead to a “Sputnik moment”: the US could start to fear the loss of its technological superiority.
8. Pressure for responsible AI won’t be on tech companies alone
Invasion of privacy, algorithmic bias, environmental damage, threats to brands and the bottom line—the fears around AI are numerous. Fortunately, a global consensus is emerging around principles for responsible AI. These principles can safeguard organizations—and position them to reap economic benefits.
Self-regulatory organizations will likely play a growing role in filling the gaps in responsible AI usage that regulators—often hard-pressed to keep up with the latest technologies—leave behind.