Containing and adapting to digital wildfires

Authors: John Regas and Elizabeth Cartier

Hyperconnectivity has given unprecedented speed and reach to the dissemination of ideas, information, and opinions. On one side, this phenomenon continues to open new avenues for creativity and free speech; on the other, it allows lies, threats and even violent hysteria to spread like wildfire. How can these blazes be contained without dousing the flame of free speech?

Sir Tim Berners-Lee, the creator of the World Wide Web, has described the web as a “global conversation”. In any conversation of course, both truth and lies can be shared. Unlike face-to-face conversations, communication in the online world is instantaneous, reaches across continents and cultures and is often received without the interpretative intuition and context present in the physical world. There’s little doubt that the global risk of misinformation and intentional disinformation is growing.

Digital wildfires are a major part of this risk. A digital wildfire is the sudden emergence of fast-moving social, political, cultural and sometimes violent activity that spreads through the use of information and communications technology (ICT). Digital wildfires — be they accidental or deliberate — and mass reactions to them are becoming more frequent and more consequential for governments, the private sector and NGOs.

More than two years ago, Niall Ferguson correctly anticipated a ‘World on Wi-Fire’ in which hyperconnectivity could amplify humankind’s most primeval emotions on a large scale.[1] The proliferation of technology, devices, content and ‘apps’ means that news and information can be captured and communicated as quickly as it is created. According to Cisco, the number of devices connected to IP networks will be nearly three times as high as the global population by 2017[2] — a trend known as the Internet of Things (IoT).[3]

This interwoven technological fabric serves as an ideal setting for accelerating the incidence of digital wildfires. More importantly, individuals’ reactions — largely based on emotional responses — can quickly spiral into contagion. Memes, photos and videos were ‘going viral’ as early as 2009. However, government and business leaders underestimated the phenomenon.

The wake-up call did eventually come. The rapid cascade of changes in the Arab world in late 2010 and early 2011 resulted in global recognition that wildfires had the potential to create political and social challenges to governance of nation states. Over the past year, signposts that the ramifications are not limited to the geopolitical realm have emerged as the global private sector and markets also have experienced repercussions from digital wildfires.

The private sector relies on functioning markets and public confidence in the system of those markets. In an age of volatility, digital wildfires can rapidly erode customer confidence in existing businesses and business models, and public confidence in the underlying financial and economic systems.

In its Global Risks 2014 report, the World Economic Forum (WEF) calls for new thinking to prevent a large-scale loss of trust in the Internet — which it says could cause “Digital Disintegration”.

In this article, we examine a number of solutions for digital fire prevention, fire containment and fire-proofing. How can a blend of rules, behaviours, education, market services and technology itself be used to deter, douse or develop resilience to digital wildfires?

Fire prevention: How to stop wildfires igniting

Walking the line on digital ethos

As the WEF pointed out in its earlier Global Risks 2013 report, “around the world, governments are grappling with the question of how existing laws which limit freedom of speech, for reasons such as incitement of violence or panic, might also be applied to online activities”.

It’s a fine line. Imposed standards risk inhibiting civil liberties in some countries, a tension that will only sharpen if those advocating Internet freedom agendas gain greater traction.

In 2012, the International Telecommunication Union (ITU), a United Nations body, convened to discuss regulating the Internet. A number of countries indicated a desire to be able to censor the Internet. For many other parts of the world, this raises uncomfortable questions about curbs on freedom of speech. It could also erode people’s trust in their own governments.

What we may see developing instead is industry-wide codes of conduct or regulation. As an example, in June 2013, potential vulnerabilities in medical technology prompted the US Food and Drug Administration (FDA) to update its 2005 technology draft guidance on cybersecurity in medical devices.

The rise of ‘native advertising’ and ‘sponsored content’ — advertising that looks like original content — in the online world has also drawn attention from US regulators. Surveys of online publishers in the US indicate that more than 70% use this form of advertising, with an additional 17% considering using these approaches in 2014. The US Federal Trade Commission (FTC) — responsible for protecting consumers and guarding against deceptive advertising practices — is investigating the effects of advertisements that look increasingly like original content and has stated it will enforce the rules against misleading advertising.[4]

In November 2013, the FTC demonstrated its growing interest in the increasing connectivity of the IoT. By hosting a workshop to explore consumer security and privacy concerns, it began to look at how to regulate these areas.

While not yet directly related to speech regulation, these actions lay the groundwork for government involvement in the freewheeling data flow in the global ICT fabric, whether personal data, commercial content — or even speech. Other countries may watch the FTC’s deliberations and launch their own enquiries.

Over time, we see the potential for regulatory regimes on advertising to emerge as a way to protect consumers from deliberate manipulation if the information about a product or service is clearly false. NGOs, such as Public Citizen, which advocate consumer protection will increasingly be stakeholders in the regulatory debate, and may bring new capabilities to detect deception.

Similar regulation could emerge in other industries, focusing on the emerging digital vulnerabilities in transport, financial services, aerospace and defence.

While prospects for global governance standards for the Internet remain uncertain, we expect that norms of behaviour — a digital ethos — might well emerge first. A digital ethos is more palatable to many public and private sector organisations. This ethos could seek intervention in the most heinous and deliberate attempts to light digital wildfires.

Many societies have moral and legal precedents for regulating against deliberate acts that endanger human life, property and public safety. For example, in many countries, misuse of emergency numbers is a crime. Social media companies and the public sector could work together to establish protected hashtags or handles. They could encourage users pushing information to, say, @911, @112, @999 or #emergencyresponseneeded to verify that the information is true to the best of their knowledge, at risk of prosecution for transmitting false information.

Regulators could hold social media service providers and app owners accountable for wilful, premeditated and egregious plots to endanger human life. However, the difficulties of controlling areas such as copyright infringement suggest that these companies’ focus would be to take down offensive material reported by the public, rather than to engage in wholesale monitoring and prevention.

Fire-proofing: How to build resilience against digital wildfires

In the absence of global regulation or a well-established digital ethos, the general public as well as private organisations need to take steps to lessen the impact of digital wildfires when they break out.

Build cyber-fortresses

Increasingly, malicious cyber-operations contribute to digital wildfires. In the WEF’s Global Risks 2014 report, cyber-attacks moved into the top five most likely global risks. Cyber-attacks can damage the brand, reputation, business operations and even market share of the organisations that are targeted. The tactics used by the perpetrators include hijacking a company’s social media sites or compromising a trusted news source, as the New York Times experienced when its website was disrupted by hackers[5].

There are a number of actions companies can take to build resilience:

  • Beef up cybersecurity to prevent unwanted guests getting into their social media sites. Cybersecurity will have an increasingly high-profile role in containing the spread of digital wildfires. Although technology itself has a major role here, the people, organisation, culture and processes of a company all contribute to — or detract from — its cybersecurity effectiveness.

  • Develop a robust online presence to create trust. One of the most effective moves companies can make is creating a robust, trustworthy, and cyber-secure digital presence of their own. Digital presence includes all web content and continuous engagement on all forms of social networking applications — microblogging, blogging, professional networking and even pop culture fora. Each of these channels needs to be trusted, accountable, branded and well-monitored. Communication needs to be candid and must instantly address consumer fears.

It’s a worthwhile investment. In times of crisis, these sites become safe havens of verified brand information to which consumers can turn. Companies can make something positive out of ‘negative’ events as they build rapport with customers and prove responsiveness and attention to customers’ concerns. US energy company Consolidated Edison (Con Ed) saw the benefits of this approach in 2012 during Hurricane Sandy. When alarmist and incorrect tweets claimed that Con Ed was shutting down power to the entire island of Manhattan[6], Con Ed mitigated the damage by denying the false information via its social media feeds, including its Twitter account[7].

  • Build from trust into customer engagement and competitive advantage. This trusted ‘voice’ also opens the door to engaging users, customers, and citizens in a much closer dialogue than before. Through their robust digital presence, companies can co-design products, services and even regulation on a scale not previously seen. This new kind of relationship means the customer will be more resistant to, or objective about, damaging information spread about the company. A robust online presence starts to create competitive advantage.

  • Have contingency plans tested and at the ready. If the worst does happen, companies will want to be able to enact their prepared contingency plans speedily. Exercising contingency plans through the use of scenarios can improve responses when real-world crises emerge. Contingency plans can include the use of alternative social media feeds and immediate campaigns through traditional media to convey appropriate responses as quickly as possible and stem the damage as far as possible.

Get better at getting down to facts

People form opinions instantly. We turn to instant news and our peers’ reactions on social media to make daily decisions in our business and personal lives. The result is a mass consensus opinion that often lacks any basis in evidence.

Most of us aren’t even aware that we are ‘clustering’ with like-minded people, forming our views inside what Ethan Zuckerman has described as ‘filter bubbles’.[8] Simply put, we tend to seek out and put more weight on information that is consistent with what we already believe, and filter out that which isn’t. This innately human trait is known as confirmation bias.

What’s more, our filters are reinforced by the tools we use to ‘discover’ information. The search and recommendation engines on an ever-increasing number of connected devices are designed to surface what most people have already found, or what markets want a person to find. They rarely deliver novel information, and many of us aren’t even aware that this is happening.

So instead of democratising knowledge, increased ICT use is amplifying confirmation bias on a mass scale. If a wildfire confirms something we tend to believe, we are more likely to spread it.
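To make this mechanism concrete, here is a deliberately simplified sketch of how a recommender that ranks content by its overlap with a user’s past interests ends up burying novel information. All names, tags and stories are hypothetical illustrations, not any real platform’s algorithm.

```python
# Toy illustration: ranking a news feed by similarity to past behaviour
# amplifies confirmation bias. Entirely hypothetical data and logic.

def similarity(story_tags, user_history):
    """Fraction of a story's tags the user has engaged with before."""
    if not story_tags:
        return 0.0
    return len(story_tags & user_history) / len(story_tags)

def rank_feed(stories, user_history):
    """Order stories so the most belief-confirming ones come first."""
    return sorted(stories,
                  key=lambda s: similarity(s["tags"], user_history),
                  reverse=True)

# A user who has only ever engaged with two topics...
user_history = {"austerity-bad", "team-a"}

stories = [
    {"headline": "Team A wins again", "tags": {"team-a", "sport"}},
    {"headline": "Austerity policy questioned", "tags": {"austerity-bad"}},
    {"headline": "A novel view on austerity", "tags": {"austerity-good"}},
]

for story in rank_feed(stories, user_history):
    print(story["headline"])
```

The unfamiliar perspective always sinks to the bottom of the feed, which is exactly the self-reinforcing loop described above: the tool never challenges what the user already believes.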

How can technologists, innovators, educators, companies and governments help us counter this bias?

  • Fact-checking in the digital age. To meet the challenge of digital wildfires, there could be a rise in fact-checking services and NGOs being formed to track the spread of factually incorrect information. Companies and industry associations could create or hire non-profit firms known for their independence. These firms would ‘audit’ social media covering the companies — and correct misinformation and disinformation by highlighting errors in mainstream media and launching campaigns to counter false perceptions by customers and stakeholders.

    Another solution might emerge as innovators turn the power of algorithms and bots onto digital wildfires. Bots are pieces of code that — like robots in the industrial world — perform specific tasks in the digital world. Innovators can design bots and algorithms to identify misinformation and quickly contain the contagion of instant opinion. Harnessing and directing bots to perform fire-fighting has great potential.

  • Educate to minimise confirmation bias. Education and literacy in ICT will need to evolve. As early as primary school, students should learn the importance of original source research and develop critical analytic skills and the ability to understand data in its full context. Educators, think tanks and government-owned media outlets need to help the general public understand how search algorithms and social media actually work. In a more self-aware society, individuals can be selective in whether they want to find what others are finding, or discover new and untested sources of information.

    Training programmes do exist in a number of countries, particularly in the world of intelligence. Intelligence organisations are classic targets of deliberate attempts to mislead and deceive, so over the last three decades they have honed ‘analytic tradecraft’ training for their professionals to identify, and try to compensate for, confirmation bias entering into their assessments. These can be adapted for mainstream education.

    Public health education campaigns can also provide a source of inspiration for digital wildfire-fighters. Inaccurate and misleading information, ideas and beliefs spread with contagion-like qualities that parallel the behaviours known to spread disease and other risks to human health. Recognising this, governments, companies and NGOs can draw upon best practices from successful global public health campaigns.

  • Use media to underscore that there can be smoke without fire. The more that media covers false information events, the greater public scepticism will grow. In April 2013, fans of the celebrity singer Cher mistakenly believed she had died, after reading a confusing hashtag referring to the death of former UK Prime Minister Margaret Thatcher[9]. In this case, the repercussions weren’t serious — but the fact that both individuals and media news sources widely accepted the news as authentic highlights the lack of verification methods currently available.

Other events have been more economically and geopolitically consequential. A recent tweet about a past military action by one nation against another in the Middle East caused temporary jitters in the oil markets before it was discovered to refer to a historical incident. Each mainstream incident of false information reported as ‘real news’ undercuts the public’s faith in social media news sources, increasing scepticism and tendencies towards caution in the public sphere.

In the financial world, digital wildfires can instantly form expectations and beliefs with larger economic consequences. Following the 2013 Thanksgiving holiday weekend in the US, the New York Times and Wall Street Journal reported as fact that sales were down over the previous year. Barry Ritholtz of Bloomberg View rapidly responded by criticising the ‘innumeracy’ of the media, noting that the estimation methods used over the last decade have no predictive value.[10] He used his personal social media presence and Bloomberg’s to contain the contagion of erroneous information.

People wishing to expose vulnerabilities in the system could go one stage further by intentionally feeding the system with false information, then revealing this as a hoax.

Media outlets and the world’s most prominent social media providers can seize these opportunities to educate the public. Social media companies — while not restricting content or trending — could provide analysis of trends and content to give customers the option to check the veracity of what they are seeing.
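As a concrete, if simplified, illustration of the fire-fighting bots described earlier, the sketch below flags posts that match claims a company or newsroom has already debunked. The patterns and example posts are hypothetical, loosely modelled on the Hurricane Sandy rumours mentioned above; a real system would need far more sophisticated natural-language matching and human review.

```python
import re

# Hypothetical register of already-debunked claims, maintained by a
# communications team. Patterns here are illustrative only.
DEBUNKED_PATTERNS = [
    re.compile(r"con\s?ed(ison)?\b.*shut(ting)?\s+down\b.*manhattan", re.I),
    re.compile(r"nyse\b.*flood", re.I),
]

def flag_post(text):
    """Return True if the post matches a known debunked claim."""
    return any(pattern.search(text) for pattern in DEBUNKED_PATTERNS)

posts = [
    "BREAKING: ConEd shutting down ALL power in Manhattan!",
    "Power restored to parts of Brooklyn, crews still working.",
]

# Flagged posts would be queued for a corrective reply, not auto-deleted.
flagged = [p for p in posts if flag_post(p)]
print(flagged)
```

Even this crude approach shows the design choice at stake: the bot contains contagion by surfacing candidate falsehoods quickly, while leaving the judgement of what is actually false to people.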

Contain wildfires further: Ratings for everything

Ratings have long held sway in entertainment, universities and financial investments. The power of credit ratings gained worldwide significance during the most recent global economic crisis. Respected rating agencies could help contain the spread of digital wildfires.

We see ratings emerging on a micro-scale in the virtual world. eBay, Yelp, TripAdvisor and other rating-driven platforms are increasingly powerful forces in the markets for some goods and services. Sites like the news and information aggregator Reddit give users ‘karma’ as other users vote their posts and comments up or down. The karma system allows users to identify established members of the site, making it harder for commercial interests, political movements or malicious actors to masquerade as long-term, verified users.
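Reddit’s actual karma mechanics are more involved, but the basic idea of using accumulated votes as a crude trust signal can be sketched as follows. The class name and threshold are illustrative assumptions, not Reddit’s real model.

```python
from collections import defaultdict

class KarmaBoard:
    """Minimal sketch: karma as net up/down votes across a user's posts."""

    def __init__(self):
        self.karma = defaultdict(int)

    def vote(self, author, delta):
        """Record an upvote (+1) or downvote (-1) on one of author's posts."""
        self.karma[author] += delta

    def is_established(self, author, threshold=100):
        """Crude trust signal: karma accumulates slowly over many posts,
        so a fresh sock-puppet account cannot quickly fake it."""
        return self.karma[author] >= threshold

board = KarmaBoard()
for _ in range(120):                 # a long-term user's accumulated votes
    board.vote("longtime_user", +1)
board.vote("new_account", +1)        # a brand-new account

print(board.is_established("longtime_user"))  # True
print(board.is_established("new_account"))    # False
```

The point of the design is the cost of time: the rating is cheap to read but expensive to earn, which is what makes it useful against masquerading.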

In some cases, human moderators apply a further level of verification on request. A substantial number of followers on Twitter may also serve as a proxy for verification.

These systems are not perfect: requiring moment-to-moment human involvement sacrifices too much speed and efficiency to scale. Additionally, requiring users to be verified in order to trust their content could destroy anonymity or the efficacy of crowdsourcing.

For many, the anonymity of the Internet is a necessity. This includes dissidents, whistle-blowers and anyone who wants to express a belief outside of increasingly controlling employee media policies. Separately, the community of verified users would be necessarily smaller, reducing the breadth of access that social networks currently enjoy.

Rating schemes for media and social media content providers will emerge naturally over time. Governments and NGOs can be instrumental in sponsoring these rating schemes while recognising that they too will become subject to ratings.

Agile adaptation

It’s unlikely that society will be able to totally prevent the spread of digital wildfires. After prevention, the next best outcome is extinguishing the fires. And if that doesn’t work, sound fire-proofing is a must — being prepared to act quickly and adapt when the flames burst out.

This is what resilient organisations do. They know that one day they are likely to be affected by a disruption such as a digital wildfire. They have their fire engines ready. They have planned how they will contain the fire so that it causes the least damage, and might even create some opportunity.

Which raises the question: is there an upside to all this?

In nature, fire can also be a catalyst for new life. In society, might the process of attacking digital wildfires just prompt the mass re-birth of critical thinking?

Isn’t it a good thing that the private sector turns to its social media channels as a way to establish deeper, transparent, reciprocal and trusted relationships with stakeholders?

‘Critical infrastructure breakdown’ is the first technological risk to make the top five list of most impactful global risks in the WEF’s Global Risks 2014 report. In that light, isn’t it good practice for companies to build resilience by stress-testing and scenario planning regularly for wildfires? We think so.



[3] David Burg, September 24, 2013, The Internet of Things Raises New Security Questions, http://usblogs.pwc.com/emerging-technology/the-internet-of-things-raises-new-security-questions/

[4] New York Times, “As Online Ads Look More Like News Articles, F.T.C. Warns Against Deception,” December 4, 2013. http://www.nytimes.com/2013/12/05/business/ftc-says-sponsored-online-ads-can-be-misleading.html?pagewanted=2&_r=0

[5] New York Times, “Times Site is Disrupted in Attack by Hackers,” August 27, 2013, http://www.nytimes.com/2013/08/28/business/media/hacking-attack-is-suspected-on-times-web-site.html?pagewanted=all&_r=0

[6] CNN, Doug Gross, "Man faces fallout for spreading false Sandy reports on Twitter," October 31, 2012, http://edition.cnn.com/2012/10/31/tech/social-media/sandy-twitter-hoax/

[7] The Huffington Post, Bianca Bosker, "Behind @ConEdison: The 27 year old preventing panic, one tweet at a time," November 3, 2012, http://www.huffingtonpost.com/2012/11/03/conedison-twitter_n_2069744.html

[8] TEDGLOBAL 2010, Ethan Zuckerman, July 15, 2010, http://blog.ted.com/2010/07/15/listening_to_gl/

[10] Bloomberg View, Barry Ritholtz, “No, Sales Didn't Fall 2.7 Percent Thanksgiving Weekend,” December 2, 2013, http://www.bloomberg.com/news/2013-12-02/no-sales-didn-t-fall-2-7-percent-thanksgiving-weekend.html