By now, we’ve all experienced disinformation in political and social spheres firsthand. We’ve witnessed the consequences of “distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials and candidates for office,” as an in-depth 2018 report warned.
Misinformation is false information spread with no ill intent. Disinformation is generally understood as content that is fabricated, manipulated and distributed by imposters or content that is presented in a false context with intent to harm.
Now, disinformation is moving into the corporate sector. Organized criminals and other sophisticated actors are borrowing the techniques, and the same creation and distribution mechanisms, used in politically motivated disinformation campaigns.
In one notable instance of disinformation, a forged US Department of Defense memo stated that a semiconductor giant’s planned acquisition of another tech company had prompted national security concerns, causing the stocks of both companies to fall. Other incidents include widely publicized, unfounded attacks on a businessman that cost him a bidding war; a false news story reporting that a bottled water company’s products had been contaminated; and a foreign state’s TV network falsely linking 5G to adverse health effects in America, buying the adversary’s companies time to develop a 5G network to compete with US businesses.
Perhaps most frightening: As defenses against disinformation improve, disinformants simply innovate, steadily coming up with new strategies for evading detection.
It used to be difficult and expensive to mount a disinformation campaign, with disinformation-as-a-service (DaaS) confined to underground forums run by criminals and unscrupulous governments. No longer. Commercial DaaS purveyors have emerged in many countries, and they routinely advertise to the private sector. They publish articles in media sources ranging from dubious websites to reputable news outlets. They even create and maintain authentic-looking social media accounts in bulk. And they use established as well as new accounts to propagate content without triggering content moderation controls.
Disinformation campaigns are asymmetric in nature: they are inexpensive to create and distribute at scale, yet costly and difficult for targets to counter. DaaS providers now charge clients anywhere from hundreds of dollars to hundreds of thousands of dollars for highly customizable packages. A sample of costs: $15-$45 to create a 1,000-character article; $65 to contact a media source directly to spread material; $100 for 10 comments posted on a given article or news story; $350-$550 per month for social media marketing; and $1,500 for search engine optimization services promoting social media posts and articles over a 10- to 15-day period.
On the surface, disinformation may resemble other types of fraud, public relations crises or cyber attacks, as all share some common features. But it differs in several significant respects.
Amplification via superspreaders. Users of social media platforms can easily spread disinformation, causing it to take on new, unpredictable and even more harmful dimensions.
Falsehoods spread farther, faster and deeper than true information, according to a 2018 MIT study. Its longitudinal analysis of news stories shared on Twitter from 2006 to 2017 found that false news reports are 70% more likely to be retweeted than true news stories, and they reach their first 1,500 people six times faster. (The effect is more pronounced with political news than with other categories.) The study also found that bots spread true and false information at the same rates, concluding that individuals are the ones amplifying the false information.
That’s why it’s so difficult to stop disinformation in its tracks. Attackers’ planned actions easily blend in with normal, spontaneous activity, as audiences unknowingly collaborate in the distribution of damaging information.
Technology that makes it more difficult to tell what’s real and true. Deepfakes, usually presented in video format and sometimes in audio, are synthetic media built with the help of deep learning. Considered the nuclear option of disinformation techniques given its sophistication and remarkable effectiveness, deepfake technology is now easier than ever to produce and distribute online: there were 7,964 deepfake videos online at the beginning of 2019, and 14,678 nine months later. The technology is designed to trick viewers or listeners into believing a fake event or an untrue message, and it’s sophisticated enough that few viewers can distinguish deepfakes from genuine images.
Lack of regulation. Disinformation today takes the form of algorithmically amplified conspiracy theories and politically motivated fringe rhetoric. Barring major federal regulation restricting the creation and distribution of deepfakes, newer, perhaps even more effective tools and techniques can be expected to keep emerging.
Commercial services are beginning to respond to the needs of companies and journalists to track and combat misinformation and disinformation. One emerging entrant, Repustar, is a fact-check distribution platform that offers consumers tools to make sense of claims on social media, and offers content professionals tools to verify the factual accuracy of what they publish.
Certain types of organizations are more susceptible targets for disinformation campaigns. “Celebrity” CEOs with significant social media presence can be targeted through hacks of social media accounts. Among other potential targets: A company that’s vocal about its stance on controversial issues (political, social, environmental or otherwise); a business making a public transaction or deal such as launching an IPO, conducting a merger or acquisition, rebranding or reorganizing; a new company experiencing a surge in demand for a particular product or service.
Disinformation comes in multiple types and forms. One way to classify them is by motive. What’s the endgame of those responsible for any given disinformation attack? What are the perpetrators trying to achieve?
Using software powered by artificial intelligence (AI), fraudsters spoofed the voice of an energy firm’s CEO, demanding the fraudulent transfer of close to a quarter of a million dollars to a supplier in another country.
Misidentifying themselves, thieves used an online ad to recruit work-at-home employees under false pretenses. In the repackaging and reshipping scheme, workers who thought they’d been hired by a trucking company unknowingly mailed parcels overseas. The workers were never paid, and the owners of stolen credit cards were charged for goods they never received.
A story on a (faked) news site falsely reported that a major social media company had received a $31 billion takeover bid. The hoax, created via a fake domain registered to a proxy service in a foreign country, drove a spike in the company’s shares.
A semiconductor giant’s planned merger with another tech company was interrupted due to false news alleging a governmental review of the transaction due to (nonexistent) national security threats. The scam incorporated a fake Department of Defense memo. Stocks of both companies fell temporarily.
A foreign adversary disseminated unscientific claims alleging 5G’s dire health threats through a TV segment falsely linking 5G to brain cancer, infertility, autism, heart tumors and Alzheimer’s disease. The resulting public concern may have slowed the implementation of 5G in the US, giving the adversary’s companies more time to develop their own 5G network.
Users of an anonymous imageboard website, seeking to hurt a “liberal” business, spread a rumor that a coffee giant was giving free drinks to undocumented immigrants. Plotters posted a fake promotional meme with a misleading hashtag on social media outlets. The false controversy was designed to damage the target company’s brand.
False information, launched with malicious intent, has led to financial losses, loss of customer trust and brand damage for companies around the world. It actively threatens all businesses.
Companies facing these risks should be asking themselves:
What might put my organization particularly at risk?
Which of our processes and programs are most likely to be hurt by disinformation?
What can we do to make our business less vulnerable to the advanced technologies (deepfake video and audio, coordinated social media manipulation) as well as the less complex techniques frequently used to carry out disinformation campaigns?
How would we respond if we were the victim of a disinformation attack?
As of now, no mechanisms exist for governments to help companies fight disinformation aside from traditional legal and regulatory channels, which don’t work quickly enough to counteract disinformation rapidly spread through social media and other online communication routes. Consequently, companies need to be prepared to take action themselves.
You’re already taking action to ward off fraud. But because of the distinct nature of and expected growth in disinformation attacks, it’s time to review your approach and rethink how you can protect yourself.
Identify the disinformation actors, their methods and associated risks representing the greatest threat to your company.
Quantify risks. Are you facing disinformation campaigns focused on financial gain, competition, general disruption, political messaging or something else?
If you take a stance on a controversial issue — perhaps related to politics, religion or social trends — make sure that you measure the risks and rewards associated with taking that position. What’s the potential fallout — customer defections, revenue loss, reputational impact — of taking a stand versus not doing so?
Develop a deeper understanding of how media manipulation tactics can be used to create distrust, destabilize organizations and inflict harm on people and communities.
Engage in third-party monitoring and sentiment analysis. What are people saying about your company, your brand, your employees and your products and services? What kind of conversation about your organization is occurring in the marketplace, and what kind of impact is it having?
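As a purely illustrative sketch of the kind of lightweight triage such monitoring involves, the snippet below scores brand mentions against a tiny keyword lexicon and flags negative ones for review. The lexicon, function names and sample posts are all invented for illustration; real-world monitoring would rely on a commercial social listening platform or a trained sentiment model rather than keyword matching.

```python
# Minimal sentiment triage for brand mentions (illustrative only).
# The tiny lexicons below are stand-ins for a real sentiment model.
NEGATIVE = {"scam", "contaminated", "fraud", "boycott", "fake"}
POSITIVE = {"love", "great", "recommend", "trust"}

def score_mention(text: str) -> int:
    """Return a crude sentiment score: +1 per positive word, -1 per negative word."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

def flag_for_review(mentions: list[str], threshold: int = -1) -> list[str]:
    """Surface mentions whose score is at or below the threshold."""
    return [m for m in mentions if score_mention(m) <= threshold]

posts = [
    "I love this brand and recommend it",
    "Their product is contaminated, total scam!",
    "Neutral comment about shipping times",
]
print(flag_for_review(posts))
```

Even a crude filter like this makes the point: the goal of continuous monitoring is to surface negative narratives early, before they are amplified.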
Identify and follow the influencers who are most likely to spread disinformation. Who are they, who are their backers and where are they based geographically?
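To make the idea of tracking likely spreaders concrete, here is a minimal sketch (the account names and posts are hypothetical) that ranks which accounts most frequently amplify a given narrative, a first step toward identifying potential superspreaders:

```python
from collections import Counter

# Each record: (account, text_of_share). All data is invented for illustration.
shares = [
    ("acct_a", "Company X products are unsafe"),
    ("acct_b", "Company X products are unsafe"),
    ("acct_a", "Boycott Company X now"),
    ("acct_c", "Unrelated post"),
    ("acct_a", "Company X is hiding the truth"),
]

def top_amplifiers(shares, keyword, n=3):
    """Count how often each account shares posts mentioning the keyword."""
    counts = Counter(acct for acct, text in shares if keyword.lower() in text.lower())
    return counts.most_common(n)

print(top_amplifiers(shares, "Company X"))
```

In practice the input would come from a social listening feed, and the ranking would feed further questions: who backs these accounts, and where are they based?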
Maintain “information golden sources”: authoritative, verified references about your company against which stakeholders can check claims. Influencers matter here, too. Leading influencers, some of whom have millions of followers, can generate greater engagement and higher response rates than ads. They could be your organization’s greatest allies. Conversely, they also have the potential to become disinformation superspreaders. Choose carefully when you opt to promote or reward any given influencer.
Build a community of advocates on social media and establish an ongoing positive narrative around your company. You can more effectively combat disinformation by establishing a voice in the marketplace now, before such an attack occurs, than waiting until it happens and you must come to your own defense.
Strive to hold a continuous, authentic conversation with customers in interactions and channels — digital or otherwise. Consistency and frequency matter. Understand how your customers connect with your brand. The idea is this: When negative information surfaces, you want your customers to turn to you first to verify the facts.
Be ready to take the mic. It’s impossible to plan for every potential disinformation scheme. A false narrative containing a nugget of truth or a sheen of believability can bring reputational risk to a firm. If you’re not ready to redirect the conversation, you may find yourself on the defensive or playing catch-up.
Connect continuously with your business partners, not just your customers. Disinformation attacks can affect whole ecosystems and industries, not just a single target company. When one pharma company faced a social media attack, for example, Health-ISAC shared details of the incident with other member organizations to help them prepare for similar attacks.
Beware of becoming an inadvertent or accidental part of a supply chain of misinformation. Establish good governance around the facts and sources that your PR/comms teams use, what they retweet on social media and what they publish as thought leadership. All these matter because they embody your brand. Be especially vigilant if you are in an industry that intersects with public interest such as transportation, energy, food supply, healthcare, waste management or construction, to name a few.
Develop a playbook, test it and be ready to put it into action when disinformation arises. Practice for a disinformation attack like you would for other types of attacks, through simulations and exercises.
Perform a stakeholder analysis to understand the ecosystem of those you may need to communicate with in the event of a disinformation attack. Identify who will be accountable for each stakeholder group and how the messages will be approved and delivered.
Craft the types of narratives you need for different attacks. Prepare narratives focused on topics of particular relevance to your industry (e.g., workplace safety, product safety), issues that your organization advocates or issues rooted in your community or geographical location.
Establish a system to measure the effectiveness of your response and identify lessons learned to better prepare for the next incident.
In disinformation attacks, your reputation as a good steward of information matters. Can you be trusted with sensitive information about your customers, employees and others? Do you have a data trust strategy? How would your data governance, discovery, protection and minimization practices hold up to scrutiny?
Disinformation attacks are most successful when directed at companies that haven’t engendered trust. The right type of brand, one known as trustworthy, can serve as a bulwark against disinformation.
Our PwC colleagues Petar Petrov and Anjali Fehon contributed to the research for this article.
Global Cybersecurity & Privacy Leader, US Cyber, Risk and Regulatory Leader, PwC US