The quest for truth:

Content moderation in action

Truth versus fiction.

It used to be easy to tell the difference, but not anymore. Social platforms now give billions the ability to express their opinions and share them with family and friends, who, in turn, can distribute them to a much larger audience.

Content platforms work to rebuild trust

These platform companies, which must deal with a growing deluge of user-generated content, have to make Solomon-like decisions about the veracity of this information. If they publish content that is clearly untrue, consumer backlash is likely to be swift, threatening revenue and reputation.

On the other hand, if platforms refuse to publish certain content, they may be accused of censorship, bias or having a political agenda. Adding to the problem are algorithms, which can produce filter bubbles that reinforce users’ existing beliefs, rather than showing them a variety of viewpoints.

This is often a lose-lose situation for digital platforms, and a number of executives in the tech industry warn that things could get worse as consumer skepticism and mistrust escalate. So the pressure is mounting on US companies, as their need for content moderation continues to increase due to ongoing COVID-19 disinformation, continuing social unrest and the upcoming presidential election.

The election is also likely to have risk management implications at technology, media and telecom (TMT) companies: Recently, 48% of finance leaders at TMT companies said they would increase investment in risk management if President Trump is re-elected, while 36% would do so if Joe Biden is elected.

Given today’s volatile environment, it’s critical to find ways to rebuild trust in content platforms.

Digital platforms move toward governance

Content moderation is not new: Publishers have been monitoring comments on their sites for decades. But today’s content is far more abundant, diverse and divisive than ever before. The challenge is defining policies on when and how to delete or label objectionable content, without trampling users’ expectations of being free to engage in any dialogue they choose.

This also places platforms under pressure to handle reader complaints equitably, remove bias in algorithms, and promote transparency into processes and decisions. Some companies are finding ways to rise to the challenge.

Facebook, for example, has created an Oversight Board of independent global experts to make decisions on its most challenging content moderation cases — including appeals from users. The board is also charged with determining whether Facebook is adhering to its own policies and rules.

Such oversight bodies can bring meaningful transparency to content moderation — a step that consumers and governments are demanding. In fact, more than eight in 10 Americans think a content oversight board is a “good or very good idea,” according to research from the Knight Foundation and Gallup.

In addition to its Oversight Board, Facebook has announced policies to deal with election-related misinformation and voter suppression. Nick Clegg, Facebook’s head of global affairs, told the Financial Times that the company proactively developed plans to handle various election outcomes.

TikTok is also taking steps to improve transparency and reduce misinformation. Its product and policy teams are studying accounts and video information that might be linked to misinformation. TikTok’s transparency center illustrates how the company’s algorithms and data practices work. And Twitter expanded its Civic Integrity Policy in October.

For its part, Salesforce created an Office of Ethical and Humane Use of Technology, which covers “product, law, policy and ethics to develop and implement a strategic framework for the ethical and humane use of technology” across the company. This Office’s Advisory Council brings together individuals with diverse perspectives — including Salesforce executives and employees, academics, industry experts and society leaders — to consider the impact of their technology on society. Among other things, its goal is to reduce misinformation and threats to democracy.

Governance efforts grow globally

Meanwhile, the European Union’s revised Audiovisual Media Services Directive (AVMSD) governs the coordination of national legislation on all audiovisual media, including TV broadcasts and on-demand services.

Ireland recently introduced the Online Safety and Media Regulation bill to handle complaints from users. The proposed legislation, which includes safety codes that outline how online video-sharing services will deal with harmful content, will enable the Online Safety Commissioner to regulate online content and apply sanctions for non-compliance.


Content moderation requires an effective governance and control framework. Determine whether your company has such a framework by asking the following questions:

  • What are our principles, and how do they relate to our content? Do these principles reflect our various stakeholders, including employees, shareholders and society?
  • How do we effectively enable governance, covering everything from principles to controls?
  • Do we have a governance framework that defines our organizational structure and lines of responsibility for online safety?
  • Is our framework for governance and control a good fit in a regulated environment?
  • Have we established responsibilities for risk management, audits and compliance assessments?
  • How will we review our third-party service providers’ governance processes?
  • Do we have an independent oversight and appeals mechanism for handling contested content?
  • Do we offer users advance notice, a fair hearing and an arbitrated resolution if things go wrong?
  • Do we have a plan for updating or changing our governance policies and procedures as new regulations are drafted?

Legislators call for new rules and industry standards

Some US legislators have called for industry standards to reduce the spread of misinformation, disinformation and synthetic content such as deep fakes. And the US Congress is considering making changes to Section 230 legislation, which shields internet companies from liability for third-party content. One provision says these platforms are not liable if they make good-faith efforts to moderate objectionable content. In October 2020, the Federal Communications Commission announced that more clarity on Section 230 would be forthcoming.

Meanwhile, lawmakers continue to hold hearings in an effort to better understand the situation before making changes. They are divided about whether Section 230 allows tech companies to avoid doing enough to moderate content or requires them to do too much moderation. In response, the platforms have pointed out that it would be almost impossible for them to operate if they could be sued for either posting or deleting too much content.

Despite these challenges, digital platforms have a chance to bring together consumers, industry and government agencies to develop regulations that would benefit all stakeholders. By taking such a proactive approach, companies could create positive change.

The financial industry is a prime example of groups working together to initiate needed changes. Major card issuers and banks formed the Payment Card Industry (PCI), which created the Data Security Standard (PCI DSS), a uniform global standard. These protocols allow consumers to make secure, seamless electronic payments anywhere in the world. And EMVCo — developed jointly by Europay, MasterCard and Visa — helps the payments industry ensure global acceptance and interoperability.

AI can help — when it’s managed well

As the volume of user-generated content continues to skyrocket past the scope of human content moderators, many companies are turning to AI-based technologies for help. For example, algorithms that use machine learning can decipher what content is most likely to engage a specific user and then serve that person content in line with those preferences. But this can lead to so-called filter, or content, bubbles, which can exacerbate polarization and divisiveness.
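The narrowing effect described above can be seen in a few lines of code. The sketch below is purely illustrative (the topics, items and function names are hypothetical, not any platform's actual system): a ranker that scores content solely by similarity to a user's past engagement keeps surfacing more of the same.

```python
from collections import Counter

def rank_by_engagement(items, user_history):
    """Score each candidate item by how often its topic appears in the
    user's past engagement. Pure engagement-ranking favors topics the
    user already consumes -- the filter-bubble effect."""
    topic_counts = Counter(entry["topic"] for entry in user_history)
    # Counter returns 0 for unseen topics, so novel content sinks to the bottom.
    return sorted(items, key=lambda it: topic_counts[it["topic"]], reverse=True)

# A user who engaged mostly with politics, a little with science, never with arts:
history = [{"topic": "politics"}] * 8 + [{"topic": "science"}] * 2
feed = [{"id": 1, "topic": "science"},
        {"id": 2, "topic": "politics"},
        {"id": 3, "topic": "arts"}]

ranked = rank_by_engagement(feed, history)
# Politics ranks first; the never-seen "arts" item ranks last.
```

A diversity-aware ranker would add a term rewarding under-represented topics; optimizing engagement alone is what produces the bubble.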

In contrast, other applications of AI can support transparency and build trust — especially when humans are in the loop. But algorithms have to be designed, built, implemented, managed and either updated or retired to ensure top performance and accuracy.

AI models also have to be continually monitored and regularly updated, as they can degrade after a few months — particularly in situations such as news, which is constantly changing. As a result, building trust in content moderation systems requires building trust in the AI that supports them.
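A minimal sketch of such always-on monitoring, under stated assumptions (the window size, accuracy floor and class names here are illustrative, not a prescribed configuration): track accuracy on a rolling window of recently labeled predictions and flag the model when it drops below a baseline.

```python
from collections import deque

class DriftMonitor:
    """Rolling-window accuracy check: flag a model for review or retraining
    when its recent accuracy falls below an acceptable floor."""

    def __init__(self, window=500, floor=0.85):
        self.results = deque(maxlen=window)  # 1 = correct, 0 = wrong
        self.floor = floor

    def record(self, prediction, label):
        self.results.append(1 if prediction == label else 0)

    def degraded(self):
        if len(self.results) < self.results.maxlen:
            return False  # not enough recent data to judge yet
        return sum(self.results) / len(self.results) < self.floor

# Simulate a moderation model whose recent accuracy has slipped to 80%:
monitor = DriftMonitor(window=100, floor=0.9)
for i in range(100):
    monitor.record("ok", "ok" if i < 80 else "spam")
# 0.80 < 0.90, so the monitor flags degradation.
```

In practice the trigger would feed an alerting or retraining pipeline; the point is that degradation is detected continuously rather than discovered after a public moderation failure.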

Tools like PwC’s AI Risk Confidence Framework — which provides guidance and controls over the AI lifecycle from end to end — support active, always-on monitoring of AI. In addition, PwC’s Bias Analyzer provides transparency into how current AI models are performing in terms of biased outcomes. This tool can help a company uphold AI ethics and bias standards and monitor whether its policies continue to be met.


As the adoption of AI for multiple uses — including content moderation — continues to accelerate, companies should ask some key questions to help them identify problem areas and achieve confidence in their AI systems. These include:

1. Do you have an inventory of all AI and modeling occurring in your company?
2. Where is it being used? With what stakeholders?
3. Are you creating models internally or using third-party vendor models?
4. Which team has governance and oversight over internal AI or third-party vendors that provide AI?
5. What data is being used or created, where is it stored and how is it protected?
6. Where is AI making transformations to your data? How is that process managed?
7. Who owns the strategy for modeling and AI used in your organization?
8. What controls are in place for AI access, usage and modification?
9. What is the process for testing, production and deployment of AI?
10. What is the plan regarding:
    a. AI investment
    b. Digitization and the automation journey
    c. Performing internal audits on algorithms and AI in business processes to address content moderation challenges?
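One practical way to start answering the inventory and oversight questions above is a lightweight model registry. The sketch below is an assumption-laden illustration (the field names, teams and vendor are hypothetical, not a prescribed schema): record enough metadata per model to answer "what runs where, on whose data, owned by whom, audited when?"

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """One entry in an AI inventory -- hypothetical fields chosen to map
    onto the inventory, ownership and audit questions above."""
    name: str
    owner_team: str
    source: str                          # "internal" or a vendor name
    data_used: list = field(default_factory=list)
    stakeholders: list = field(default_factory=list)
    last_audit: str = "never"

inventory = [
    ModelRecord("feed-ranker", "recsys", "internal",
                data_used=["engagement logs"], stakeholders=["users"]),
    ModelRecord("toxicity-filter", "trust-safety", "Acme ML (vendor)",
                data_used=["post text"], stakeholders=["users", "moderators"]),
]

# A query the registry makes trivial: which vendor-supplied models
# have never been audited?
unaudited_vendor = [m.name for m in inventory
                    if m.source != "internal" and m.last_audit == "never"]
```

Even a simple structure like this turns the questionnaire from a one-off exercise into something queryable, so gaps (unaudited vendor models, models with no named owner) surface automatically.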

 

How to deal with content moderation challenges

Ultimately, effective content moderation and process transparency are essential to help avoid overregulation and consumer backlash, as well as potential reputational damage and revenue loss. Though some platforms are addressing content moderation challenges, others are working out where and how to start. Follow these steps to get on the path to effective content moderation:

  • Review (or develop, if necessary) your company’s values, purpose and principles. PwC’s Responsible AI Toolkit asks: What are your ethics? Are they embedded in your business processes and policies?
  • Use your values as the foundation for your content moderation policies, guidelines and governance framework.
  • Keep up to date with relevant changing regulations and how they could impact your content moderation policies. PwC’s Policy and Regulatory Intelligence solution allows you to monitor existing and emerging regulations and work across the three lines of defense (sales, product and engineering; compliance; and internal audit) to adapt your operations to the regulatory environment.
  • Build cross-lines-of-defense teams and charge them with developing — and owning — your content moderation platform and policies.
  • Consider forming an industry consortium to help develop policies, rules and potential regulations.
  • Have a clearly articulated AI risk strategy. If you don’t, craft a plan to develop one. PwC’s AI Risk Confidence Framework can help.
  • Vet and validate your policies and rules with third parties.
  • Ensure that any algorithms used in content moderation are unbiased and easily explainable.
  • Set up governance, such as an oversight board, for evaluating and enforcing your policies and rules. This body could also evaluate and adjudicate any user complaints.
  • Designate accountability throughout your organization. Content governance requires the participation of multiple departments.
  • Create transparency with all stakeholders by making your policies, rules, and algorithms available and understandable.
  • Update your policies and rules on a regular basis and/or when events, consumers or governments take actions that generate the need for updates.

Being a moderator of facts presents challenges for digital platforms, which need to make a good-faith effort to keep their sites free from misinformation, disinformation, hate speech, fake news and fabricated content. But they don’t have to go it alone. Now is the time for companies to work together, using all the tools available to build — and maintain — trust.

Contact us

Mark McCaffrey

US Technology, Media and Telecommunications Leader, PwC US

Emmanuelle Rivet

Partner, US Technology Sector, PwC US

Jennifer Lendler

Risk Assurance Managing Director – Emerging Technology & Innovation, PwC US

Rahul Kapoor

Director, PwC US
