Sir Tim Berners-Lee, the creator of the World Wide Web, has described the web as a “global conversation”. In any conversation, of course, both truth and lies can be shared. Unlike face-to-face conversation, communication in the online world is instantaneous, reaches across continents and cultures, and often arrives without the interpretative intuition and context present in the physical world. There is little doubt that the global risk of misinformation and intentional disinformation is growing.
Digital wildfires are a major part of this risk. A digital wildfire is the sudden emergence of fast-moving social, political, cultural and sometimes violent activity that spreads through the use of information and communications technology (ICT). Digital wildfires — be they accidental or deliberate — and mass reactions to them are becoming more frequent and more consequential for governments, the private sector and NGOs.
More than two years ago, Niall Ferguson correctly anticipated a ‘World on Wi-Fire’ in which hyperconnectivity could amplify humankind’s most primeval emotions on a large scale. The proliferation of technology, devices, content and ‘apps’ means that news and information can be captured and communicated as quickly as it is created. According to Cisco, the number of devices connected to IP networks will be nearly three times the global population by 2017, a trend underpinning the Internet of Things (IoT).
This interwoven technological fabric serves as an ideal setting for accelerating the incidence of digital wildfires. More importantly, individuals’ reactions — largely based on emotional responses — can quickly spiral into contagion. Memes, photos and videos were ‘going viral’ as early as 2009. However, government and business leaders underestimated the phenomenon.
The wake-up call did eventually come. The rapid cascade of changes in the Arab world in late 2010 and early 2011 resulted in global recognition that wildfires had the potential to create political and social challenges to governance of nation states. Over the past year, signposts that the ramifications are not limited to the geopolitical realm have emerged as the global private sector and markets also have experienced repercussions from digital wildfires.
The private sector relies on functioning markets and public confidence in the system of those markets. In an age of volatility, digital wildfires can rapidly erode customer confidence in existing businesses and business models, and public confidence in the underlying financial and economic systems.
In its Global Risks 2014 report, the World Economic Forum (WEF) calls for new thinking to prevent a large-scale loss of trust in the Internet — which it says could cause “Digital Disintegration”.
In this article, we examine a number of solutions for digital fire prevention, fire containment and fire-proofing. How can a blend of rules, behaviours, education, market services and technology itself be used to deter, douse or develop resilience to digital wildfires?
As the WEF pointed out in its earlier Global Risks 2013 report, “around the world, governments are grappling with the question of how existing laws which limit freedom of speech, for reasons such as incitement of violence or panic, might also be applied to online activities”.
It’s a fine line. Imposed standards run the risk of inhibiting civil liberties in some countries, particularly if advocates of Internet freedom agendas gain greater traction.
In 2012, the International Telecommunication Union (ITU), a United Nations body, convened to discuss regulating the Internet. A number of countries indicated a desire to be able to censor the Internet, raising uncomfortable questions about curbs on freedom of speech in many other parts of the world. It could also erode people’s trust in their own governments.
What we may see developing instead are industry-wide codes of conduct or regulation. For example, in June 2013, potential vulnerabilities in medical technology prompted the US Food and Drug Administration (FDA) to update its 2005 draft guidance on cybersecurity in medical devices.
The rise of ‘native advertising’ and ‘sponsored content’ — advertising that looks like original content — in the online world has also drawn attention from US regulators. Surveys of online publishers in the US indicate that more than 70% use this form of advertising, with an additional 17% considering using these approaches in 2014. The US Federal Trade Commission (FTC) — responsible for protecting consumers and guarding against deceptive advertising practices — is investigating the effects of advertisements that look increasingly like original content and has stated it will enforce the rules against misleading advertising.
In November 2013, the FTC demonstrated its growing involvement in the increasing connectivity of the IoT. By hosting a workshop to explore consumer security and privacy concerns, it began to look at how to regulate these areas.
While not yet directly related to speech regulation, these actions lay the groundwork for government involvement in the freewheeling data flow in the global ICT fabric, whether personal data, commercial content — or even speech. Other countries may watch the FTC’s deliberations and launch their own enquiries.
Over time, we see the potential for regulatory regimes on advertising to emerge as a way to protect consumers from deliberate manipulation where the information about a product or service is clearly false. NGOs that advocate consumer protection, such as Public Citizen, will increasingly be stakeholders in the regulatory debate and may bring new capabilities to detect deception.
Similar regulation could emerge in other industries, focused on the emerging digital vulnerabilities in transport, financial services, aerospace and defence.
While prospects for global governance standards for the Internet remain uncertain, we expect that norms of behaviour — a digital ethos — might well emerge first. A digital ethos is more palatable to many public and private sector organisations. This ethos could seek intervention in the most heinous and deliberate attempts to light digital wildfires.
Many societies have moral and legal precedents for regulating against deliberate acts that endanger human life, property and public safety. For example, in many countries, misuse of emergency numbers is a crime. Social media companies and the public sector could work together to establish protected hashtags or handles. They could require users pushing information to, say, @911, @112, @999 or #emergencyresponseneeded to verify that the information is true to the best of their knowledge, at the risk of prosecution for transmitting false information.
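To make the idea concrete, the check a platform might run on posts directed at protected emergency channels can be sketched in a few lines. This is purely illustrative: the handle list, the post representation and the `needs_verification` function are our own invention, not any platform’s actual API.

```python
# Sketch: flag posts that target protected emergency handles or hashtags
# when the author has not confirmed the report. All names are hypothetical.
PROTECTED = {"@911", "@112", "@999", "#emergencyresponseneeded"}


def needs_verification(post_text: str, author_verified: bool) -> bool:
    """Return True if the post addresses a protected emergency channel
    but the author has not yet attested that the information is true."""
    tokens = set(post_text.lower().split())
    targets_emergency = bool(tokens & {p.lower() for p in PROTECTED})
    return targets_emergency and not author_verified


# A platform could hold such posts for confirmation rather than block them.
print(needs_verification("Flooding on Main St #emergencyresponseneeded", False))  # True
print(needs_verification("Great concert last night!", False))                     # False
```

A real system would need fuzzy matching and abuse safeguards, but the principle is the same: attach a verification step, not a ban, to the most safety-critical channels.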
Regulators could hold social media service providers and app owners accountable for wilful, premeditated and egregious plots to endanger human life. However, the difficulty of policing areas such as copyright infringement suggests that these companies would focus on taking down offensive material reported by the public, rather than on wholesale monitoring and prevention of such cases.
In the absence of global regulation or a well-established digital ethos, the general public as well as private organisations need to take steps to lessen the impact of digital wildfires when they break out.
Increasingly, malicious cyber-operations contribute to digital wildfires. In the WEF’s Global Risks 2014 report, cyber-attacks moved into the top five most likely global risks. Cyber-attacks can damage the brand, reputation, business operations and even market share of the organisations they target. The perpetrators’ tricks include hijacking a company’s social media accounts or compromising a trusted news source, as the New York Times experienced when its website was disrupted by hackers in August 2013.
There are a number of actions companies can take to build resilience. Chief among them is maintaining verified, active social media channels of their own.
It’s a worthwhile investment. In times of crisis, these channels become safe havens of verified brand information to which consumers can turn. Companies can make something positive out of ‘negative’ events as they build rapport with customers and demonstrate responsiveness to customers’ concerns. US energy company Consolidated Edison (Con Ed) saw the benefits of this approach during Hurricane Sandy in 2012. When alarmist and incorrect tweets spread claiming that Con Ed was shutting down power to the entire island of Manhattan, the company mitigated the damage by denying the false information through its social media feeds, including its Twitter account.
People form opinions instantly. We turn to instant news and our peers’ reactions in social media to make daily decisions in our business and personal lives. The result is a mass consensus that often lacks any basis in evidence.
Most of us aren’t even aware that we are ‘clustering’ with like-minded people, forming our views inside what Eli Pariser identified as ‘filter bubbles’. Simply put, we tend to seek out and put more weight on information that is consistent with what we already believe, and filter out that which isn’t. This innately human trait is known as confirmation bias.
What’s more, our filters are reinforced by the tools we use to ‘discover’ information. An ever-increasing number of devices connected to the Internet are designed to find what most people have already found, or what markets want a person to find. They fail to deliver novel information — and many of us aren’t even aware that this is happening.
So instead of democratising knowledge, increased ICT use is amplifying confirmation bias on a mass scale. If a wildfire confirms something we tend to believe, we are more likely to spread it.
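The amplification mechanism described above can be illustrated with a toy sketch: a feed that ranks stories by how closely they match a user’s existing leaning, as a naive engagement-maximising algorithm might. The stories, scores and `rank_feed` function are invented for illustration; no real platform’s ranking logic is implied.

```python
# Toy "filter bubble": rank stories so those closest to the user's
# prior leaning come first. Leanings are hypothetical scores in [-1, 1].
def rank_feed(stories, user_leaning):
    """stories: list of (headline, leaning) pairs.
    Returns headlines sorted by agreement with the user's view."""
    return [h for h, lean in sorted(stories, key=lambda s: abs(s[1] - user_leaning))]


stories = [
    ("Policy X is working", 0.8),
    ("Policy X shows mixed results", 0.0),
    ("Policy X is failing", -0.8),
]

# A user leaning +0.9 sees confirming coverage first; dissent is buried last.
print(rank_feed(stories, 0.9))
```

Even this crude sort reproduces the effect: each user’s feed confirms what they already believe, so a wildfire that fits the bubble spreads faster inside it.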
How can technologists, innovators, educators, companies and governments help us counter this bias?
Some false reports have been economically and geopolitically consequential. A recent tweet about a military action by one nation against another in the Middle East caused temporary jitters in the oil markets; it was quickly discovered to describe a historical incident. Each mainstream incident of false information reported as ‘real news’ undercuts the public’s faith in social media as a news source, increasing scepticism and tendencies towards caution in the public sphere.
In the financial world, digital wildfires can instantly shape expectations and beliefs, with larger economic consequences. After the 2013 Thanksgiving holiday weekend in the US, the New York Times and Wall Street Journal reported as fact that sales were down from the previous year. Barry Ritholtz of Bloomberg View quickly responded by criticising the ‘innumeracy’ of the media: the survey methods used over the last decade have proved to have no predictive value. He used his personal social media presence and Bloomberg’s to contain the contagion of the erroneous information.
People wishing to expose vulnerabilities in the system could go one stage further by intentionally feeding the system with false information, then revealing this as a hoax.
Media outlets and the world’s most prominent social media providers can seize these opportunities to educate the public. Social media companies, while not restricting content or trending topics, could provide analysis of trends and content, giving customers the option to check the veracity of what they are seeing.
Ratings have long held sway in entertainment, universities and financial investments. The power of credit ratings gained global significance during the most recent global economic crisis. Respected rating agencies could help contain the spread of the wildfires.
We see ratings emerging on a micro-scale in the virtual world. eBay, Yelp, TripAdvisor and other rated services platforms are increasingly powerful forces in the markets for some goods and services. Sites like the news and information aggregator Reddit give users ‘karma’ as other users vote posted content and commentary up the ratings. The karma system lets users identify established contributors, making it harder for accounts representing commercial interests, political movements or malicious actors to masquerade as long-term, verified users.
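The karma mechanism can be sketched in a few lines of Python. The class, field names and threshold below are invented for illustration; they are not Reddit’s actual implementation.

```python
# Minimal sketch of a karma-style reputation tally (names hypothetical).
from collections import defaultdict


class KarmaBoard:
    def __init__(self):
        self.karma = defaultdict(int)  # author -> accumulated score

    def vote(self, author, up=True):
        """Record one community vote on an author's content."""
        self.karma[author] += 1 if up else -1

    def is_established(self, author, threshold=5):
        """A newcomer masquerading as a long-term user fails this check."""
        return self.karma[author] >= threshold


board = KarmaBoard()
for _ in range(6):
    board.vote("long_term_user", up=True)
board.vote("new_shill", up=True)

print(board.is_established("long_term_user"))  # True
print(board.is_established("new_shill"))       # False
```

The design point is that reputation accrues slowly through many independent votes, which is exactly what makes it expensive for a commercial or malicious actor to fake.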
In some cases, human moderators apply a further level of verification on request. On Twitter, a substantial follower count may serve as a proxy for verification.
These systems are not perfect. Moment-to-moment human involvement sacrifices too much efficiency to scale, and requiring users to be verified before trusting their content could destroy anonymity and undermine the efficacy of crowdsourcing.
For many, the anonymity of the Internet is a necessity: dissidents, whistle-blowers and anyone who wants to express a belief outside increasingly restrictive employer social media policies. Separately, the community of verified users would necessarily be smaller, reducing the breadth of access that social networks currently enjoy.
Rating schemes for media and social media content providers will emerge naturally over time. Governments and NGOs can be instrumental in sponsoring these rating schemes while recognising that they too will become subject to ratings.
It’s unlikely that society will be able to totally prevent the spread of digital wildfires. After prevention, the next best outcome is extinguishing the fires. And if that doesn’t work, sound fire-proofing is a must — being prepared to act quickly and adapt when the flames burst out.
This is what resilient organisations do. They know that one day they are likely to be affected by a disruption such as a digital wildfire. They have their fire engines ready. They have planned how they will contain the fire so that it causes the least damage, and might even create some opportunity.
This raises the question: is there an upside to all this?
In nature, fire can also be a catalyst for new life. In society, might the process of attacking digital wildfires just prompt the mass re-birth of critical thinking?
Isn’t it a good thing that the private sector turns to its social media channels as a way to establish deeper, transparent, reciprocal and trusted relationships with stakeholders?
‘Critical infrastructure breakdown’ is the first technological risk to make the top five list of most impactful global risks in the WEF’s Global Risks 2014 report. In that light, isn’t it good practice for companies to build resilience by stress-testing and scenario planning regularly for wildfires? We think so.
Niall Ferguson, “World on Wi-Fire,” The Daily Beast, October 3, 2011. http://www.thedailybeast.com/newsweek/2011/10/02/world-on-wi-fire-technology-feeds-mind-boggling-volatility.html
Cisco, “The Zettabyte Era: Trends and Analysis,” May 29, 2013. http://www.cisco.com/en/US/solutions/collateral/ns341/ns525/ns537/ns705/ns827/VNI_Hyperconnectivity_WP.html
David Burg, “The Internet of Things Raises New Security Questions,” PwC Emerging Technology blog, September 24, 2013. http://usblogs.pwc.com/emerging-technology/the-internet-of-things-raises-new-security-questions/
New York Times, “As Online Ads Look More Like News Articles, F.T.C. Warns Against Deception,” December 4, 2013. http://www.nytimes.com/2013/12/05/business/ftc-says-sponsored-online-ads-can-be-misleading.html?pagewanted=2&_r=0
New York Times, “Times Site Is Disrupted in Attack by Hackers,” August 27, 2013. http://www.nytimes.com/2013/08/28/business/media/hacking-attack-is-suspected-on-times-web-site.html?pagewanted=all&_r=0
Doug Gross, “Man faces fallout for spreading false Sandy reports on Twitter,” CNN, October 31, 2012. http://edition.cnn.com/2012/10/31/tech/social-media/sandy-twitter-hoax/
Bianca Bosker, “Behind @ConEdison: The 27-year-old preventing panic, one tweet at a time,” The Huffington Post, November 3, 2012. http://www.huffingtonpost.com/2012/11/03/conedison-twitter_n_2069744.html
International Business Times, “Cher Dead? #nowthatchersdead Sparks Cher Death Hoax on Twitter After Margaret Thatcher Dies,” April 8, 2013. http://www.ibtimes.com/cher-dead-nowthatchersdead-sparks-cher-death-hoax-twitter-after-margaret-thatcher-dies-1178345
Barry Ritholtz, “No, Sales Didn’t Fall 2.7 Percent Thanksgiving Weekend,” Bloomberg View, December 2, 2013. http://www.bloomberg.com/news/2013-12-02/no-sales-didn-t-fall-2-7-percent-thanksgiving-weekend.html