The Next Move

Youth online safety: turning risk into trust

  • May 05, 2026

The issue

Children’s online safety and privacy have rapidly emerged as a global policy priority. Governments, advocacy groups, and the public are coalescing around the need to safeguard minors from digital harms ranging from exploitation and cyberbullying to manipulative design and unsafe AI interactions. The Trump administration’s AI policy framework is just one recent example of this growing momentum. Broad consensus around the need to strengthen children’s online safety is driving regulatory changes across jurisdictions, reshaping expectations for how companies collect, use, and secure youth data.

Yet many emerging requirements are proving difficult to operationalize, and in some cases may unintentionally increase risk. For example, strict age verification and parental consent mechanisms can require the collection of additional sensitive data (e.g., biometrics), introduce privacy trade-offs, or fail in practice due to unverifiable consent or exclusion of vulnerable youth populations. Similarly, prescriptive rules may lag behind evolving technologies such as AI-driven content and advertising, leaving gaps in addressing real-world harms.

To navigate this complexity, companies should move beyond a compliance-first mindset and establish future-proof standards for youth data protection. This requires adopting a broader trust and safety approach, grounded in understanding the harms that drive policymaking, and embedding proactive, risk-based safeguards into products, data practices, and governance.

The regulators' take

Regulators globally are converging around a core principle: companies should proactively identify and mitigate risks to youth privacy and safety, rather than reacting after harm occurs. This shift is evident in both the breadth of regulatory activity and the increasing specificity of expectations across jurisdictions.

A fragmented but converging regulatory landscape. Despite jurisdictional differences, regulators are consistently targeting similar risk areas:

  • Excessive data collection and retention
  • Behavioral profiling and targeted advertising
  • Manipulative design
  • Inadequate age assurance mechanisms
  • Unsafe AI and chatbot interactions
  • AI-generated deepfakes of minors

While approaches differ, several common themes are emerging across major markets. Below is a partial list of key policy initiatives.

Key regulations, core requirements, and implications for companies, by jurisdiction:

Australia

  • Privacy and Other Legislation Amendment Act 2024: Requires the Office of the Australian Information Commissioner (OAIC) to develop a Children’s Online Privacy Code by December 10, 2026. Implication for companies: covered online services likely to be accessed by children should monitor for developments related to the Children’s Online Privacy Code.
  • Online Safety Amendment (Social Media Minimum Age) Act 2024: Requires “age-restricted social media platforms” to take steps to prevent users aged 15 years and under from having accounts. Implication for companies: requires development of age verification methods that don’t involve collecting government IDs.

EU

  • Digital Services Act (DSA): Strengthens accountability for online platforms, requiring them to take greater responsibility for harmful content that affects minors, including specific risk assessments for systemic threats such as algorithmic amplification of harmful material. Implication for companies: platforms should operationalize continuous risk assessments and strengthen content governance, especially for minors.
  • AI Act (application of rules on high-risk systems temporarily delayed): Imposes heightened requirements around technical design documentation, record-keeping, transparency, human oversight, and observability on high-risk AI systems (e.g., biometric identification and age verification systems). Implication for companies: AI systems used for youth safety (e.g., age assurance) must meet stringent risk management, governance, and auditability standards.

UK

  • Online Safety Act (OSA): Requires online platforms to proactively mitigate risks to children rather than responding after harm occurs, e.g., through age-appropriate design, rigorous content filtering, and active moderation of harmful interactions. Implication for companies: shifts the burden to companies to prevent harm before it occurs, requiring continuous monitoring and intervention.
  • Age-Appropriate Design Code (AADC): Requires strict default privacy settings for children, limits data collection unless required for service operation, and provides guidance on user age verification. Implication for companies: forces redesign of user experiences to align with privacy-by-default and data minimization principles.

US (federal)

  • National Policy Framework for Artificial Intelligence: Calls on Congress to require AI services and platforms to protect children with robust tools, safety features, age-assurance requirements, and limits on data collection for model training and targeted advertising. Implication for companies: AI services should prepare for a sector-specific regulatory approach by forming trade groups to develop industry standards.
  • TAKE IT DOWN Act: Requires online platforms to remove intimate deepfakes of minors and nonconsenting adults within 48 hours of a valid request, effective as of May 19, 2026. Implication for companies: platforms should develop rapid intake, validation, escalation, and response workflows to comply with requests.
  • COPPA 2.0 (proposed): Extends protections to minors under 17, bans behavioral advertising directed at children, and expands personal information (PI) definitions to include biometric and geolocation data. Implication for companies: signals tighter restrictions on monetization strategies and expanded compliance scope.

US (states)

  • California Age-Appropriate Design Code (temporarily halted by legal challenges): Implements strict data protection measures, including data protection impact assessment (DPIA) requirements, high default privacy settings, and minimized data collection. Implication for companies: highlights regulatory direction despite legal uncertainty; companies should prepare for similar requirements.
  • Oregon HB 2008: Prohibits data controllers from processing personal data for targeted advertising or selling personal data when the controller has or disregards actual knowledge that a consumer is under 16 years old. Implication for companies: requires stricter controls on data monetization and profiling.
  • Vermont Age-Appropriate Design Code Act: Imposes design and data protections for children, including restrictions on profiling, mandated privacy-by-default settings, and limits on data collection and use; the law now faces constitutional challenges from the tech industry over potential First Amendment violations. Implication for companies: reinforces the trend toward design-based regulation, though litigation risk remains.
  • Nebraska Age-Appropriate Online Design Code Act: Requires covered online services to “exercise reasonable care” in safeguarding user data and in designing and operating their platforms to prevent harms such as compulsive use, severe emotional distress, identity theft, and significant psychological harm. Implication for companies: expands risk scope beyond privacy to psychological and behavioral harms.
  • Texas App Store Accountability Act (temporarily halted by legal challenges): Requires mobile app stores and developers to verify users’ ages, obtain parental consent for each instance of an app download, purchase, or in-app transaction by a minor, and display age ratings and content descriptors for each app. One of the first state laws to regulate app stores directly, it currently faces a First Amendment challenge. Implication for companies: introduces platform-level accountability and operational complexity for app ecosystems.
  • South Carolina Age-Appropriate Design Code Act (HB 3431): Imposes broad privacy and safety obligations on online services likely to be accessed by minors, including strict data minimization requirements, mandatory safety tools, and a new duty-of-care standard to prevent harm to children. Implication for companies: covered online services are required to reduce data collection, exercise a duty of care to prevent harm to minors, provide opt-out tools for addictive design features, and submit to annual third-party audits, with treble damages and personal liability for violations.

Enforcement is costly and accelerating. Regulatory scrutiny is no longer theoretical. Enforcement actions in the United States, European Union, and United Kingdom, for example, have cited failures such as:

  • Collecting data from children under 13 without verifiable parental consent
  • Mislabeling adult content as child appropriate
  • Unauthorized processing of children’s data
  • Excessive data collection, storage, and retention
  • Insufficient notice around data collection practices
  • Failure to implement default privacy settings that protect child users
  • Use of dark patterns that encourage unintended purchases by children

Penalties for these failures include:

  • Multimillion-dollar (in some cases nine-figure) fines
  • Mandatory deletion of improperly collected data
  • Imposition of holistic privacy programs and regular, independent audits
  • Restrictions on product features (e.g., disabling communications for minors)

Age-verification and gating methods fall short. Implementing effective age-assurance and gating mechanisms remains a significant operational and legal challenge. While regulators are intensifying enforcement and raising penalties, they have yet to provide clear, standardized guidance on what constitutes valid or verifiable compliance. Parental consent, widely treated as a core safeguard, illustrates the gap: there is no consensus on how platforms can reliably verify a parent’s identity or confirm that the parent meaningfully understands what they’re consenting to. Broader age-assurance methods face the same ambiguity.

Without uniform standards, companies are left to operationalize compliance with limited guidance, increasing legal exposure and undermining confidence in how responsibility will ultimately be assessed.

Litigation is also reshaping the risk landscape. Beyond regulators, private rights of action are introducing a new and less predictable layer of exposure, reshaping how companies may be held responsible for youth online harms. As civil litigation expands, questions of liability are increasingly driven by courts and juries, creating uncertainty around accountability for platform design and expected duty of care while compounding the risk of financial penalties.

Industry self-regulation efforts gain momentum. In response to rising concerns over online child exploitation and mounting regulatory pressure, various industry groups have developed leading practices and open-source tools to help companies safeguard children. Efforts to date include:

  • Development of a global framework for ethical youth data processing
  • Funding for research, policy development, and technical standards that promote leading practices for privacy, abuse prevention, and transparency reporting
  • Advocacy for privacy-by-design, age-appropriate design principles, and transparency standards that emphasize the rights and developmental needs of minors
  • Participation in cross-platform safety initiatives that facilitate information-sharing between companies to enhance safety
  • Development of AI-powered content moderation that can be deployed to platforms via API to detect and block inappropriate audio content

Whatever safeguards you choose to adopt, they should be flexible, mitigate potential harms to minors, and build trust with users, parents and guardians, and regulators. The scope of safeguards you’ll need will depend on where you operate, the features and services you provide, the likelihood that children are using those services, and the potential for harm.

PwC’s Youth Online Privacy and Safety Framework provides structure and flexibility to help your organization balance protection, innovation, and accountability as expectations evolve.

Your next move

The path forward is less about tracking every regulatory change and more about building a resilient, principles-driven operating model that can adapt as expectations evolve. Consider taking these steps:

  1. Shift from a compliance-centric to a harms-based risk model. Anchor your approach in the specific harms regulators are trying to prevent, such as exploitation, manipulation, and unsafe AI interactions. Conduct targeted risk assessments across products, features, and data flows to identify where youth users may be exposed.
  2. Evaluate downstream impacts of emerging duty-of-care and design accountability expectations. Assess whether recent regulatory and litigation developments affect your approach to younger-user experiences and your identity and access management strategy, and whether current account structures are designed to support both defensibility and scalability over time.
  3. Establish a “privacy and safety-by-default” architecture. Embed controls directly into product design and data practices, standardizing them across platforms for consistency and scalability (a configuration sketch follows this list):
    • Reduce data collection and prohibit unnecessary processing
    • Disable profiling and targeted advertising for youth users
    • Default accounts to private, non-identifiable settings
    • Build guardrails against manipulative design
  4. Implement strong governance for youth data. Develop a clear, enterprise-wide understanding of your youth data footprint to enable both compliance and rapid response to regulatory change. Inventory and classify youth data across systems. Implement tagging frameworks to segregate youth data (a tagging and retention sketch follows this list). Enforce strict data retention and deletion policies. Restrict third-party sharing to essential use cases.
  5. Bolster oversight, accountability, and transparency. Elevate youth safety to a board-level priority by establishing clear ownership and governance structures, implementing centralized logging and monitoring of data access, tracking safety metrics (e.g., incidents, complaints, exposure risks), and providing transparent disclosures and user controls. Regular reporting to leadership and regulators will be critical for demonstrating accountability.
  6. Operationalize AI and content safety controls. As AI-driven experiences expand, implement the following (an output-screening sketch follows this list):
    • Independent safety audits and red teaming of AI systems
    • Continuous monitoring for harmful outputs or model drift
    • Clear documentation of model behavior and data usage
    • Safeguards against emotionally manipulative or unsafe interactions
  7. Develop an agile regulatory strategy. Map global regulatory requirements to a unified control framework (a simple control-mapping sketch follows this list). Engage with regulators and industry groups to anticipate changes. Invest in privacy-enhancing technologies (e.g., anonymization, encryption). Build flexibility into your systems to adapt to shifting expectations.
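
The sketches below illustrate, in Python, what a few of these steps can look like in practice. All class names, field names, categories, retention periods, and thresholds are illustrative assumptions, not prescribed implementations or any particular vendor’s API. First, a minimal sketch of the privacy and safety-by-default architecture in step 3: youth defaults are applied in one place at account creation, rather than scattered across individual features.

```python
from dataclasses import dataclass

@dataclass
class AccountSettings:
    """Hypothetical per-account settings object; field names are illustrative."""
    profile_visibility: str = "public"
    personalized_ads: bool = True
    behavioral_profiling: bool = True
    direct_messages_from: str = "anyone"
    precise_location_sharing: bool = True
    autoplay_recommendations: bool = True

def apply_youth_defaults(settings: AccountSettings) -> AccountSettings:
    """Apply privacy- and safety-by-default settings for a user identified as a minor."""
    settings.profile_visibility = "private"               # default accounts to private
    settings.personalized_ads = False                     # disable targeted advertising
    settings.behavioral_profiling = False                 # disable profiling
    settings.direct_messages_from = "approved_contacts"   # limit unsolicited contact
    settings.precise_location_sharing = False             # minimize sensitive data collection
    settings.autoplay_recommendations = False             # reduce compulsive-use design patterns
    return settings

# Example: applied at account creation or when age assurance flags a minor
youth_settings = apply_youth_defaults(AccountSettings())
```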
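
Next, the youth data governance in step 4 depends on being able to tell youth data apart from everything else. This sketch tags records with a category, flags youth records that are past a hypothetical retention window, and defaults third-party sharing to off; actual retention limits should come from counsel and your privacy office, not from this sketch.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative retention limits by data category (days); placeholders only.
RETENTION_DAYS = {
    "youth_profile": 365,
    "youth_usage_telemetry": 30,
    "youth_support_tickets": 180,
}

@dataclass
class DataRecord:
    record_id: str
    category: str                         # tag that segregates youth data from general data
    is_youth_data: bool
    collected_at: datetime                # expected to be timezone-aware
    third_party_shareable: bool = False   # default: no third-party sharing

def is_past_retention(record: DataRecord, now: datetime) -> bool:
    """Flag records that have exceeded their retention window and should be deleted."""
    limit = RETENTION_DAYS.get(record.category)
    if limit is None:
        return False  # unknown category: escalate for classification instead of guessing
    return now - record.collected_at > timedelta(days=limit)

def records_to_delete(records: list[DataRecord]) -> list[str]:
    """Return the IDs of youth records due for deletion under the retention policy."""
    now = datetime.now(timezone.utc)
    return [r.record_id for r in records if r.is_youth_data and is_past_retention(r, now)]
```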
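
For the AI and content safety controls in step 6, one common pattern is to screen model output before it reaches a user flagged as a minor and to log anything blocked for safety metrics and drift monitoring. The keyword check below is a deliberately naive stand-in for a real moderation classifier, whether built in-house or provided by a vendor, and the category taxonomy is an assumption.

```python
import logging

logger = logging.getLogger("youth_ai_safety")

# Illustrative policy categories a moderation step might screen for.
BLOCKED_CATEGORIES = {"sexual_content", "self_harm", "grooming", "emotional_manipulation"}

def classify(text: str) -> set[str]:
    """Naive keyword stand-in for a real moderation classifier."""
    lowered = text.lower()
    hits = set()
    if "hurt yourself" in lowered:
        hits.add("self_harm")
    if "keep this a secret from your parents" in lowered:
        hits.add("grooming")
    return hits

def respond_to_minor(model_output: str, conversation_id: str) -> str:
    """Screen a model response before it is shown to a user flagged as a minor."""
    flagged = classify(model_output) & BLOCKED_CATEGORIES
    if flagged:
        # Log for safety metrics, drift monitoring, and incident review.
        logger.warning("blocked output in %s: %s", conversation_id, sorted(flagged))
        return "I can't help with that. If you need support, here are some resources."
    return model_output
```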
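
Finally, the unified control framework in step 7 can start as a simple mapping from internal controls to the regulatory requirements they satisfy, plus a gap check for requirements with no implementing control. The control IDs and requirement tags below are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    """A single internal control that can satisfy requirements across several regimes."""
    control_id: str
    description: str
    requirements: set[str] = field(default_factory=set)  # "regulation: requirement" tags

# Illustrative mapping; the regulation names are real, the requirement tags are assumptions.
CONTROLS = [
    Control("CTRL-01", "Default youth accounts to private, non-identifiable settings",
            {"UK AADC: privacy-by-default", "CA AADC: high default privacy settings"}),
    Control("CTRL-02", "Disable profiling and targeted ads for users under 16",
            {"Oregon HB 2008: no targeted ads under 16", "COPPA 2.0: behavioral ad ban"}),
    Control("CTRL-03", "48-hour takedown workflow for intimate imagery reports",
            {"TAKE IT DOWN Act: removal within 48 hours of a valid request"}),
]

def requirements_without_controls(all_requirements: set[str]) -> set[str]:
    """Gap check: which mapped requirements have no implementing control?"""
    covered = set().union(*(c.requirements for c in CONTROLS))
    return all_requirements - covered
```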