Trust under pressure: 4 risks reshaping industry safeguards

July 17, 2025

In PwC’s 2025 Trust and Safety Survey, nearly one in three respondents say they have low or very low trust in online platforms, more than double the rate of distrust reported for industries outside the traditional technology, media and telecom sector, where only one in eight feel the same. Yet as organizations across nearly all industries dive head-first into AI and digital transformation, they’re running into new risks that could undermine the trust they’ve built with consumers. Right now, many don’t have the guardrails or experience to handle these evolving threats, and the ripple effects are being felt across entire companies and industries.

As they work to make technology safer, organizations across industries face four key risk areas they need to address head-on: 

  • AI safety: The application of AI across products and operations creates new avenues for organizational risk, including algorithmic bias, data privacy and misuse, misinformation and deepfakes, and cybersecurity. While many of these risks are not new, the growing autonomy of AI systems amplifies their complexity, scale and impact, posing fresh safety challenges for organizations.
  • Online safety: As organizations push for customer engagement through digitization of experiences, they face new threats of online harm, including fraud, misinformation and harmful content (e.g., hate speech, graphic violence, nudity). Stakes are especially high for youth-focused products and features, where risks like cyberbullying, online exploitation and data misuse are drawing increased scrutiny from parents, regulators and lawmakers.
  • Data privacy and protection: When building AI and online experiences, organizations often must handle huge amounts of sensitive, high-risk data — and do it responsibly. Users increasingly demand transparency, control and accountability — making strong data privacy practices essential for building trust, minimizing risk and protecting brand reputation in a data-driven world.
  • Regulation and transparency: AI-driven transformations are putting organizations into the middle of a growing patchwork of global regulations focused on customer protection and user safety — creating serious compliance challenges.

For example, the EU’s Digital Services Act now requires online marketplaces to fight the spread of illegal goods by verifying sellers and making it easy to identify who’s behind each sale.
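As an illustration only, the short Python sketch below shows how a marketplace might gate new listings on a basic seller-verification record. The SellerRecord fields and the can_publish_listing check are hypothetical simplifications, not the DSA’s actual requirements or any platform’s implementation.

    from dataclasses import dataclass

    @dataclass
    class SellerRecord:
        """Hypothetical trader details a marketplace might collect before allowing listings."""
        name: str
        address: str
        email: str
        registration_number: str  # e.g., a trade-register entry, where applicable
        identity_verified: bool   # set True once identity documents have been checked

    def can_publish_listing(seller: SellerRecord) -> bool:
        """Allow a listing only when basic traceability details are on file and verified."""
        required = [seller.name, seller.address, seller.email, seller.registration_number]
        return all(field.strip() for field in required) and seller.identity_verified

    # Example: an unverified seller is blocked from publishing.
    seller = SellerRecord(
        name="Example Trader Ltd",
        address="1 Sample Street, Dublin",
        email="contact@example.com",
        registration_number="IE1234567",
        identity_verified=False,
    )
    print(can_publish_listing(seller))  # False until identity checks are completed

In practice, verification would involve document checks and periodic re-verification rather than a single flag, but the gating pattern is the same.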

Industry spotlights: Where trust and safety risks proliferate

As organizations pursue digital transformation and AI innovation, they encounter a wide array of trust and safety risks. These risks differ across industries, driven by the technologies in use, how users interact with them, and the types of sensitive data they process. The chart below highlights where core T&S risks are most prevalent, helping organizations anticipate exposure and prioritize safeguards. 

  • Retail: Retailers are increasingly adopting immersive, AI-powered shopping experiences, such as augmented reality (AR) apparel try-ons and influencer-driven, user-generated content (UGC). While these advancements enhance customer engagement, they also introduce challenges related to content moderation, AI bias and data governance.
  • Media and entertainment: Online platforms and virtual worlds face scrutiny over user-generated content, especially when it involves minors. Our survey found this sentiment to be particularly clear in gaming, where 15% of US and 19% of UK respondents said they have very low trust in the industry — the lowest levels of trust observed across all sectors (PwC’s 2025 Trust and Safety Survey). In response, regulation is increasing for media and entertainment, targeting areas including online gambling and metaverse safety.
  • Financial services: Consumers view finance as having the best cybersecurity measures in place to protect users (PwC’s 2025 Trust and Safety Survey). But as tech-driven enhancements accelerate, there’s still room for improvement. AI is transforming credit scoring, loan approvals and even the delivery of investment advice, but risks of algorithmic bias (e.g., profiling) and lack of transparency persist. With heavy regulatory oversight already in place, data privacy and Responsible AI design are paramount.
  • Health and wellness: Health systems use AI for care delivery and administration, but bias in decision-making software has real-world consequences. In fact, researchers discovered bias in widely used software that helps decide who gets into high-risk health care programs — directly affecting the care patients receive. Data privacy breaches can disrupt treatment and erode trust.
  • Telecommunications: Telecom firms are both using AI and securing against it. As 5G and the Internet of Things (IoT) scale, these firms use AI to improve network performance and serve as a critical enabler of end users’ engagement with AI platforms. At the same time, they face compound threats from data breaches, impersonation fraud and deepfake scams, even as they deploy AI-enhanced security tools.

T&S considerations by industry and risk type

  • Retail
    ◦ AI safety: Personalization tools risk bias and misuse of consumer data
    ◦ Online safety: UGC presents exposure to misinformation, harassment and brand risk
    ◦ Data privacy and protection: Collection of biometric/facial data via AR requires strong safeguards
    ◦ Regulation and transparency: Growing pressure for transparency in influencer content and AI use
  • Media and entertainment
    ◦ AI safety: AI in content moderation and gaming experiences needs oversight
    ◦ Online safety: Child safety risks in gaming environments; exposure to explicit content
    ◦ Data privacy and protection: UGC platforms handling sensitive user data require improved governance
    ◦ Regulation and transparency: Regulators targeting youth exposure to gambling, violence and harmful content
  • Telecommunications
    ◦ AI safety: AI can be used in fraud detection, but it also introduces new vulnerabilities
    ◦ Online safety: Robocalls, deepfakes and impersonation scams on the rise
    ◦ Data privacy and protection: IoT and 5G expansion increase attack surface and data exposure
    ◦ Regulation and transparency: FCC actions against AI robocalls and voice-based fraud reflect rising regulatory scrutiny
  • Financial services
    ◦ AI safety: Algorithmic bias in credit scoring and loan approvals
    ◦ Online safety: Phishing and fraud targeting consumers via digital channels
    ◦ Data privacy and protection: High expectations for cybersecurity and compliance with privacy regulations
    ◦ Regulation and transparency: Tightening global rules (GDPR, US data laws) demand transparency and auditability
  • Healthcare
    ◦ AI safety: AI in diagnosis and triage can unintentionally encode bias
    ◦ Online safety: Low exposure today, but emerging risks in patient-facing digital experiences
    ◦ Data privacy and protection: Ransomware and data breaches have direct patient impact
    ◦ Regulation and transparency: Regulators scrutinizing AI in healthcare delivery and requiring privacy-by-design practices

What’s next? How to jump-start preparations

For industries embracing digital and AI transformation, building trust and safety isn’t just a safeguard — it’s a competitive edge. As technology evolves rapidly, regulations tighten and consumers grow more safety-conscious, strong trust and safety practices have become essential. They can unlock real value — faster product launches, less downtime and fewer regulatory fines (see Demonstrating value in trust and safety: Assessing return on investments).

To build robust T&S capabilities, organizations can start with a set of practical steps spanning risk assessment, product design, policy enforcement and ongoing monitoring: 

  • Recognize high-priority threats. Focus on risks with the greatest organizational exposure, where proactive risk management can deliver the most impact.
  • Conduct cross-functional risk assessments. Bring teams together to evaluate potential misuse cases, edge scenarios and user impacts, and define mitigation strategies.
  • Set risk tolerance levels. Identify acceptable and unacceptable levels of risk across the organization.
  • Select pilot use cases. Identify initial use cases to introduce robust T&S capabilities.
  • Measure impact and build alignment. Define T&S success metrics, track performance indicators and get stakeholder buy-in before deployment.
  • Design with safety in mind from the start. Integrate T&S principles like AI safety and data privacy into early product design decisions to avoid costly retrofits down the line.
  • Incorporate risk assessments into the product life cycle. Embed T&S risk assessments into roadmap planning, design sprints and prelaunch checklists to help identify and address risks early.
  • Define a clear policy framework. Develop policies that outline prohibited and acceptable use for digital and AI tools and experiences, with guidance for navigating gray areas.
  • Develop enforcement guidelines. Create structured protocols to detect, triage and respond to policy violations.
  • Deploy monitoring and detection systems. Implement tools to detect harmful behavior in real time, enforce policies at scale and track performance (a simplified sketch of this kind of pipeline follows this list).
  • Review and refine regularly. Conduct periodic reviews of enforcement data and outcomes to improve policies, safety protocols and operational effectiveness.
  • Monitor evolving regulations. Track regulatory changes across jurisdictions to anticipate compliance risk and potential business impacts.
  • Establish proactive scanning capability. Create a system or dedicated team to monitor regulatory trends, technological shifts and emerging harms to enable early risk detection.
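To make the policy, monitoring and enforcement steps above more concrete, here is a minimal Python sketch of how they might fit together. It is not a PwC framework or a production system: the policy categories, the classify_content stub and the enforcement actions are hypothetical placeholders a team would replace with its own policies and detection models.

    import logging
    from datetime import datetime, timezone

    # Hypothetical policy framework: detected category -> enforcement action.
    POLICY_ACTIONS = {
        "hate_speech": "remove_and_warn",
        "graphic_violence": "remove",
        "fraud": "remove_and_report",
        "borderline": "send_to_human_review",  # gray areas go to reviewers, not automation
    }

    def classify_content(text: str) -> str:
        """Placeholder detector; a real system would call a trained model or vendor API."""
        if "wire me money" in text.lower():
            return "fraud"
        return "allowed"

    def enforce(content_id: str, text: str) -> str:
        """Apply the policy framework to one piece of content and log the decision."""
        label = classify_content(text)
        action = POLICY_ACTIONS.get(label, "allow")
        logging.info(
            "content=%s label=%s action=%s at=%s",
            content_id, label, action, datetime.now(timezone.utc).isoformat(),
        )
        return action

    logging.basicConfig(level=logging.INFO)
    print(enforce("post-123", "Wire me money to claim your prize"))  # remove_and_report

The point of the sketch is the shape of the loop: a documented policy maps detected categories to actions, every decision is logged, and those logs feed the periodic reviews described above.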

By taking proactive steps now, businesses can build lasting consumer trust, stay ahead of emerging risks and regulatory shifts, and accelerate responsible innovation. In a digital economy where trust drives growth, leaders won’t wait for regulation — they’ll help set the standard. Start now. Set the pace.
