The quest for truth

Content moderation in action

Truth versus fiction

It used to be easy to tell the difference, but not anymore. Social platforms now give billions of people the ability to express their opinions and share them with family and friends, who, in turn, can distribute them to a much larger audience.

Content platforms work to rebuild trust

These platform companies, which must deal with a growing deluge of user-generated content, have to make Solomon-like decisions about the veracity of this information. If they publish content that is clearly untrue, consumer backlash is likely to be swift, threatening revenue and reputation.

On the other hand, if platforms refuse to publish certain content, they may be accused of censorship, bias or having a political agenda. Adding to the problem are algorithms, which can produce filter bubbles that reinforce users’ existing beliefs, rather than showing them a variety of viewpoints.

This is often a lose-lose situation for digital platforms, and a number of executives in the tech industry warn that things could get worse as consumer skepticism and mistrust escalate. So the pressure is mounting on US companies to increase content moderation, especially given the combined effects of ongoing COVID-19 disinformation campaigns, continuing social unrest and the recent US presidential election.

The election and the ensuing turmoil carried risk management implications for technology, media and telecom (TMT) companies. Given today’s volatile environment, it’s critical to find ways to rebuild trust in content platforms.

Digital platforms move toward governance

Content moderation is not new: Publishers have been monitoring comments on their sites for decades. But today’s content is far more abundant, diverse and divisive than ever before. The challenge is defining policies on when and how to delete or label objectionable content, without trampling users’ expectations of being free to engage in any dialogue they choose. 

This also places platforms under pressure to handle reader complaints equitably, remove bias in algorithms and promote transparency in processes and decisions. Some companies are finding ways to rise to this admittedly daunting challenge. 

For example, oversight bodies can bring meaningful transparency to content moderation—a step that consumers and governments are demanding. In fact, more than eight in ten Americans think a content oversight board is a “good or very good idea,” according to research from the Knight Foundation and Gallup. 

After Trump supporters stormed the US Capitol on January 6, a variety of social media companies either suspended or closed Trump's accounts over concerns about ongoing potential for violence. Further, Amazon removed Parler, a popular right-wing social media site, from its hosting services, and Apple and Google removed Parler from their app stores.

Governance efforts grow globally

Meanwhile, the European Union’s revised Audiovisual Media Services Directive (AVMSD) governs the coordination of national legislation on all audiovisual media, including TV broadcasts and on-demand services.

Ireland’s government recently approved the General Scheme of the Online Safety and Media Regulation Bill, which provides for handling complaints from users, as well as the detailed drafting of the bill by the Office of the Attorney General. The legislation, which includes safety codes that outline how online video-sharing services will deal with harmful content, will enable the Online Safety Commissioner to regulate online content and apply sanctions for non-compliance.

Questions to ask when building a governance framework

Content moderation requires an effective governance and control framework. Determine whether your company has such a framework by asking the following questions:

  • What are our principles, and how do they relate to our content? Do these principles reflect our various stakeholders, including employees, shareholders and society?
  • How do we effectively enable governance, covering everything from principles to controls?
  • Do we have a governance framework that defines our organizational structure and lines of responsibility for online safety?
  • Is our framework for governance and control a good fit in a regulated environment?
  • Have we established responsibilities for risk management, audits and compliance assessments?
  • How will we review our third-party service providers’ governance processes?
  • Do we have an independent oversight and appeals mechanism for handling contested content?
  • Do we offer users advance notice, a fair hearing and an arbitrated resolution if things go wrong?
  • Do we have a plan for updating or changing our governance policies and procedures as new regulations are drafted?

Legislators call for new rules and industry standards

Some US legislators have called for industry standards to reduce the spread of misinformation, disinformation and synthetic content such as deep fakes. And the US Congress is considering making changes to Section 230 legislation, which shields internet companies from liability for third-party content. One provision says these platforms are not liable if they make good-faith efforts to moderate objectionable content. In October 2020, the Federal Communications Commission announced that more clarity on Section 230 would be forthcoming.

Meanwhile, lawmakers continue to hold hearings in an effort to better understand the situation before making changes. They are divided about whether Section 230 allows tech companies to avoid doing enough to moderate content or requires them to do too much moderation. In response, the platforms have pointed out that it would be almost impossible for them to operate if they could be sued for either posting or deleting too much content. Social media coverage of the recent assault on the US Capitol highlights the precariousness of this balancing act.

Despite these challenges, digital platforms have a chance to bring together consumers, industry and government agencies to develop regulations that would benefit all stakeholders. By taking such a proactive approach, companies could create positive change, something that’s desperately needed in today’s unsettled environment.

The financial sector is a prime example of an industry in which different groups have worked together to initiate needed changes. Major card issuers and banks formed the Payment Card Industry Security Standards Council, which created the Data Security Standard (PCI DSS), a uniform global standard for protecting cardholder data. These protocols allow consumers to make secure, seamless electronic payments anywhere in the world. And EMVCo, founded by Europay, MasterCard and Visa, maintains the EMV specifications that enable global acceptance and interoperability of payment transactions.

AI can help—when it’s managed well

As the volume of user-generated content continues to skyrocket past the scope of human content moderators, many companies are turning to AI-based technologies for help. For example, algorithms that use machine learning can predict which content is most likely to engage a specific user and then serve that person content in line with those preferences. But this can lead to so-called filter, or content, bubbles, which can exacerbate polarization and divisiveness.
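
As an illustration of how this can happen, the sketch below ranks candidate items purely by how closely their topics match a user’s past engagement. It is a simplified, hypothetical example, with made-up data and function names rather than any platform’s actual recommendation system, but it shows why optimizing only for predicted engagement keeps surfacing more of the same.

```python
# Illustrative sketch only: rank items by affinity to a user's past engagement.
# All data and names here are hypothetical.
from collections import Counter

def topic_affinity(history):
    """Estimate how often the user engaged with each topic."""
    counts = Counter(item["topic"] for item in history)
    total = sum(counts.values()) or 1
    return {topic: n / total for topic, n in counts.items()}

def rank_feed(candidates, history):
    """Order candidates by affinity to past engagement.

    Optimizing purely for this score keeps resurfacing familiar topics,
    which is how a so-called filter bubble can form.
    """
    affinity = topic_affinity(history)
    return sorted(candidates, key=lambda c: affinity.get(c["topic"], 0.0), reverse=True)

history = [{"topic": "politics"}, {"topic": "politics"}, {"topic": "sports"}]
candidates = [
    {"id": 1, "topic": "politics"},
    {"id": 2, "topic": "science"},
    {"id": 3, "topic": "sports"},
]
print([c["id"] for c in rank_feed(candidates, history)])  # -> [1, 3, 2]
```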

In contrast, other applications of AI can support transparency and build trust, especially when humans are in the loop. But algorithms have to be carefully designed, built, implemented and managed, and eventually updated or retired, to maintain performance and accuracy.

AI models also have to be continually monitored and regularly updated, as they can degrade after a few months—particularly in sectors that are constantly evolving, like the news. As a result, building trust in content moderation systems requires building trust in the AI that supports them.
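
As a rough illustration of that kind of monitoring, the sketch below compares a moderation model’s accuracy on a recent labeled sample against its accuracy at deployment and flags it for review when the drop exceeds a tolerance. The metric, the 0.05 tolerance and the sample data are hypothetical assumptions; production monitoring would typically track several metrics along with data-drift signals.

```python
# Illustrative sketch only: flag a moderation model for review when its
# accuracy on recent labeled data falls below the deployment baseline.
# The threshold and sample data are hypothetical assumptions.

def accuracy(predictions, labels):
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def has_degraded(recent_preds, recent_labels, baseline_accuracy, tolerance=0.05):
    """Return True if recent accuracy has dropped more than the tolerance."""
    return (baseline_accuracy - accuracy(recent_preds, recent_labels)) > tolerance

# A model that scored 0.92 at deployment, re-evaluated on this month's sample
baseline = 0.92
preds  = ["remove", "keep", "keep", "remove", "keep", "keep", "keep", "remove", "keep", "keep"]
labels = ["remove", "keep", "remove", "keep", "keep", "remove", "keep", "remove", "keep", "keep"]
if has_degraded(preds, labels, baseline):
    print("Accuracy has degraded -- schedule retraining and human review.")
```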

Tools like PwC’s AI Risk Confidence Framework, which provides guidance and controls across the entire AI lifecycle, support active, always-on monitoring of AI. In addition, PwC’s Bias Analyzer provides transparency into whether current AI models are producing biased outcomes. This tool can help a company uphold AI ethics and bias standards and monitor whether those policies are consistently met.

Building confidence in AI

As the adoption of AI for multiple uses—including content moderation—continues to accelerate, companies should ask some key questions to help them identify problem areas and achieve confidence in their AI systems. These include:

  • Do you have an inventory of all AI and modeling occurring in your company? (A minimal inventory record is sketched after this list.)
  • Where is it being used? With what stakeholders?
  • Are you creating models internally or using third-party vendor models?
  • Which team has governance and oversight over internal AI or third-party vendors that provide AI?
  • What data is being used or created, where is it stored and how is it protected?
  • Where is AI making transformations to your data? How is that process managed?
  • Who owns the strategy for modeling and AI used in your organization?
  • What controls are in place for AI access, usage and modification?
  • What is the process for testing, production and deployment of AI?
  • What is the plan regarding:
    • AI investment?
    • Digitization and the automation journey?
    • Performing internal audits on algorithms and AI in business processes to address content moderation challenges?
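
As a starting point for the inventory question above, the sketch below shows what a minimal record in an AI model inventory might capture. The field names and sample values are assumptions for illustration, not a prescribed standard.

```python
# Illustrative sketch only: one record in an AI model inventory.
# Field names and values are hypothetical, not a prescribed standard.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    owner_team: str                      # who has governance and oversight
    source: str                          # "internal" or the third-party vendor
    use_case: str                        # where it is used and with which stakeholders
    training_data: str                   # what data is used and where it is stored
    access_controls: list = field(default_factory=list)
    last_reviewed: str = ""              # date of the most recent audit or bias review

inventory = [
    ModelRecord(
        name="comment-toxicity-classifier",
        owner_team="Trust & Safety",
        source="internal",
        use_case="flagging user comments for human review",
        training_data="labeled comment archive (EU data center)",
        access_controls=["role-based access", "change approval"],
        last_reviewed="2021-03-01",
    ),
]
print(f"{len(inventory)} model(s) tracked; first owner: {inventory[0].owner_team}")
```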

How to deal with content moderation challenges

Ultimately, effective content moderation and process transparency are essential to help avoid overregulation and consumer backlash, as well as potential reputational damage and revenue loss. Though some platforms are addressing content moderation challenges, others are working out where and how to start. Follow these steps to get on the path to effective content moderation:

  • Review (or develop, if necessary) your company’s values, purpose and principles. The Responsible AI Toolkit asks: What are your ethics? Are they embedded in your business processes and policies?
  • Use your values as the foundation for your content moderation policies, guidelines and governance framework.
  • Keep up to date with relevant changing regulations and how they could impact your content moderation policies. PwC’s Policy and Regulatory Intelligence solution allows you to monitor existing and emerging regulations and work across the three lines of defense (sales, product and engineering; compliance; and internal audit) to adapt your operations to the regulatory environment.
  • Build cross-lines-of-defense teams and charge them with developing—and owning—your content moderation platform and policies.
  • Consider forming an industry consortium to help develop policies, rules and potential regulations.
  • Have a clearly articulated AI risk strategy. If you haven’t yet started, craft a plan to begin. PwC’s AI Risk Confidence Framework can help.
  • Vet and validate your policies and rules with third parties.
  • Ensure that any algorithms used in content moderation are unbiased and easily explainable. (A first-pass disparity check is sketched after this list.)
  • Set up governance, such as an oversight board, for evaluating and enforcing your policies and rules. This body could also evaluate and adjudicate any user complaints.
  • Designate accountability throughout your organization. Content governance requires the participation of multiple departments.
  • Create transparency with all stakeholders by making your policies, rules and algorithms available and understandable.
  • Update your policies and rules on a regular basis, and whenever events, consumer expectations or government actions make updates necessary.
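
For the step on unbiased, explainable algorithms, the sketch below shows one first-pass check: comparing content-removal rates across user groups and flagging large disparities for investigation. The group labels, sample decisions and 1.25 disparity threshold are hypothetical assumptions; meaningful fairness and explainability work requires far more than a single ratio.

```python
# Illustrative sketch only: compare content-removal rates across user groups
# as a first-pass fairness signal. Groups, decisions and the 1.25 threshold
# are hypothetical assumptions.

def removal_rate(decisions):
    return sum(1 for d in decisions if d == "remove") / len(decisions)

def disparity(decisions_by_group):
    rates = {group: removal_rate(d) for group, d in decisions_by_group.items()}
    ratio = max(rates.values()) / max(min(rates.values()), 1e-9)
    return ratio, rates

decisions_by_group = {
    "group_a": ["keep", "remove", "keep", "keep", "remove"],
    "group_b": ["remove", "remove", "keep", "remove", "remove"],
}
ratio, rates = disparity(decisions_by_group)
print(rates, f"disparity ratio = {ratio:.2f}")
if ratio > 1.25:
    print("Removal rates diverge across groups -- investigate the model and its training data.")
```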

Being a moderator of facts presents challenges; digital platforms need to make a good-faith effort to keep their sites free from misinformation, disinformation, hate speech, fake news and fabricated content. But they don’t have to go it alone. Now is the time for companies to work together, using all the tools available to build—and maintain—trust.

Contact us

Emmanuelle Rivet

Vice Chair, US TMT & Global Technology Leader, PwC US

Rahul Kapoor

Principal, Consulting Solutions, PwC US
