By now, we’ve all experienced disinformation in political and social spheres firsthand. We’ve witnessed the consequences one in-depth 2018 report warned of: “distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials and candidates for office.”
Misinformation is false information spread with no ill intent. Disinformation, by contrast, is generally understood as content that is fabricated, manipulated, distributed by imposters or presented in a false context, with intent to harm.
Now disinformation is moving into the corporate sector. Organized crime groups and other sophisticated actors are borrowing the same techniques, and the same creation and distribution mechanisms, used in politically motivated disinformation campaigns.
In one notable instance, a forged US Department of Defense memo stated that a semiconductor giant’s planned acquisition of another tech company had prompted national security concerns, causing the stocks of both companies to fall. In other incidents, widely publicized, unfounded attacks caused a businessman to lose a bidding war; a false news story reported that a bottled water company’s products had been contaminated; and a foreign state’s TV network falsely linked 5G to adverse health effects in America, buying that state’s companies more time to develop their own 5G networks to compete with US businesses.
Perhaps most frightening: As defenses against disinformation improve, disinformants simply innovate, steadily coming up with new strategies for evading detection.
It used to be difficult to mount a disinformation campaign. Criminal actors and nation-states once offered disinformation-as-a-service (DaaS) only in underground forums. Now commercial DaaS purveyors have emerged in many countries, and they routinely advertise to the private sector. They place articles in media sources ranging from dubious websites to more reputable news outlets. They create and maintain authentic-looking social media accounts in bulk. And they use established as well as newly created accounts to propagate content without triggering content moderation controls.
Disinformation campaigns are also asymmetric: they are inexpensive to create and distribute at scale, yet costly and slow to counter. DaaS providers now charge clients anywhere from hundreds of dollars to hundreds of thousands of dollars for highly customizable packages. Here’s a sample of costs: $15-$45 to create a 1,000-character article; $65 to contact a media source directly to spread material; $100 for 10 comments posted on a given article or news story; $350-$550 per month for social media marketing; and $1,500 for search engine optimization services that promote social media posts and articles over a 10- to 15-day period. Taken together, a modest campaign combining all of these services could cost little more than $2,000.
On the surface, disinformation may resemble other threats, such as fraud, public relations crises or cyberattacks, because they share some common features. But it differs in several significant respects.
Amplification via superspreaders. Users of social media platforms can easily spread disinformation, causing it to take on new, unpredictable and even more harmful dimensions.
Falsehoods spread farther, faster and deeper than true information, according to a 2018 MIT study. Its longitudinal analysis of news stories shared on Twitter from 2006 to 2017 found that false news reports are 70% more likely to be retweeted than true ones, and that they reach their first 1,500 people six times faster. (The effect is more pronounced for political news than for other categories.) The study also found that bots spread true and false information at the same rates, concluding that it is people, not bots, who amplify false information.
That’s why it’s so difficult to stop disinformation in its tracks. Attackers’ planned actions easily blend in with normal, spontaneous activity, as audiences unknowingly collaborate in the distribution of damaging information.
Technologies that make it more difficult to tell what’s real and true. Deepfake technology, considered the nuclear option of disinformation techniques given its sophistication and remarkable effectiveness, is now easier than ever to produce and distribute online. Deepfakes, usually presented as video and sometimes as audio, are synthetic media built with the help of deep learning. The technology is designed to trick viewers or listeners into believing a fake event or an untrue message, and it’s sophisticated enough that few people can distinguish deepfakes from genuine recordings. There were 7,964 deepfake videos online at the beginning of 2019; nine months later, there were 14,678, nearly double.
Lack of regulation. Disinformation is manufactured in the form of algorithmically spread conspiracy theories and politically motivated fringe rhetoric. Barring major federal regulation restricting the creation and distribution of deepfakes, newer and perhaps even more effective tools and techniques can be expected to keep emerging.
Commercial services are inevitably emerging to meet the needs of companies and journalists who must track and combat misinformation and disinformation. One entrant, Repustar, is a fact-check distribution platform that gives consumers tools for making sense of claims on social media and gives content professionals tools for keeping what they publish factually accurate.
Certain types of organizations are especially likely targets for disinformation campaigns. “Celebrity” CEOs with a significant social media presence can be targeted through hacks of their accounts. Other potential targets include a company that’s vocal about its stance on controversial issues, whether political, social, environmental or otherwise; a business making a public transaction or deal, such as launching an IPO, conducting a merger or acquisition, rebranding or reorganizing; a new company experiencing a surge in demand for a particular product or service; and competitors to the national champions of a nation-state, especially in the technology space.
Disinformation comes in many forms. One way to classify them is by motive: What’s the endgame of those responsible for a given attack? What are the perpetrators trying to achieve?
False information, launched with malicious intent, has led to financial losses, loss of customer trust and brand damage for companies around the world. It actively threatens all businesses.
Companies facing these risks should be asking themselves how well prepared they are to detect and respond to an attack.
As of now, governments have no mechanisms for helping companies fight disinformation aside from traditional legal and regulatory channels, which don’t work quickly enough to counteract falsehoods spreading rapidly through social media and other online channels. Consequently, companies need to be prepared to take action themselves.
You’re already taking action to ward off fraud. But because of the distinct nature of and expected growth in disinformation attacks, it’s time to review your approach and rethink how you protect yourself.
Taking the lead on different aspects of this work: the chief risk officer, chief information security officer, chief data officer and chief privacy officer; the chief communications officer, chief marketing officer and brand leader; the investor relations, public relations and social media directors; and crisis management executives.
In disinformation attacks, your reputation as a good steward of information matters. Can you be trusted with sensitive information about your customers, employees and others? Do you have a data trust strategy? How would your data governance, discovery, protection and minimization practices hold up to scrutiny?
Disinformation attacks are most successful when directed at companies that haven’t engendered trust. A brand known as trustworthy can serve as a bulwark against disinformation.
Our PwC colleagues Petar Petrov and Anjali Fehon contributed to the research for this article.