By Manuj Lal, Brett Croker, and Jake Meek
Children’s online safety and privacy have rapidly emerged as a global policy priority. Governments, advocacy groups, and the public are coalescing around the need to safeguard minors from a wide range of digital harms, from exploitation and cyberbullying to manipulative design and unsafe AI interactions. The Trump administration’s AI policy framework is just one recent example of this growing momentum. Broad consensus around the need to strengthen children’s online safety is driving regulatory changes across jurisdictions, reshaping expectations for how companies collect, use, and secure youth data.
Yet many emerging requirements are proving difficult to operationalize, and in some cases may unintentionally increase risk. For example, strict age verification and parental consent mechanisms can require the collection of additional sensitive data (e.g., biometrics), introduce privacy trade-offs, or fail in practice due to unverifiable consent or exclusion of vulnerable youth populations. Similarly, prescriptive rules may lag behind evolving technologies such as AI-driven content and advertising, leaving gaps in addressing real-world harms.
To navigate this complexity, companies should move beyond a compliance-first mindset and establish future-proof standards for youth data protection. This requires adopting a broader trust and safety approach, grounded in understanding the harms that drive policymaking, and embedding proactive, risk-based safeguards into products, data practices, and governance.
Regulators globally are converging around a core principle: companies should proactively identify and mitigate risks to youth privacy and safety, rather than reacting after harm occurs. This shift is evident in both the breadth of regulatory activity and the increasing specificity of expectations across jurisdictions.
A fragmented but converging regulatory landscape. Despite jurisdictional differences, regulators are consistently targeting similar risk areas, and several common themes are emerging across major markets. Below is a partial list of key policy initiatives.
| Jurisdiction | Key regulations | Core requirements | Implications for companies |
| --- | --- | --- | --- |
| Australia | Privacy and Other Legislation Amendment Act 2024 | Requires the Office of the Australian Information Commissioner (OAIC) to develop a Children’s Online Privacy Code by December 10, 2026 | Covered online services likely to be accessed by children should monitor for developments related to the Children’s Online Privacy Code |
| Australia | Online Safety Amendment (Social Media Minimum Age) Act 2024 | Requires “age-restricted social media platforms” to take steps to prevent users aged 15 years and under from having accounts | Requires development of age verification methods that don’t involve collecting government IDs |
| EU | Digital Services Act (DSA) | Strengthens accountability for online platforms, requiring them to take greater responsibility for harmful content that affects minors. Requires specific risk assessments for systemic threats, such as algorithmic amplification of harmful material. | Platforms should operationalize continuous risk assessments and strengthen content governance, especially for minors |
| EU | AI Act (application of rules on high-risk systems temporarily delayed) | Imposes heightened requirements around technical design documentation, record-keeping, transparency, human oversight, and observability on high-risk AI systems (e.g., biometric identification and age verification systems) | AI systems used for youth safety (e.g., age assurance) are required to meet stringent risk management, governance, and auditability standards |
| UK | Online Safety Act (OSA) | Requires online platforms to proactively mitigate risks to children rather than responding after harm occurs, e.g., through age-appropriate design, rigorous content filtering, and active moderation of harmful interactions | Shifts burden to companies to prevent harm before it occurs, requiring continuous monitoring and intervention |
| UK | Age-Appropriate Design Code (AADC) | Requires strict default privacy settings for children, limiting data collection unless required for service operation, and provides guidance on user age verification | Forces redesign of user experiences to align with privacy-by-default and data minimization principles |
| US (federal) | National Policy Framework for Artificial Intelligence | Calls on Congress to require AI services and platforms to protect children with robust tools, safety features, age-assurance requirements, and limits on data collection for model training and targeted advertising | AI services should prepare for a sector-specific regulatory approach by forming trade groups to develop industry standards |
| US (federal) | TAKE IT DOWN Act | Requires online platforms to remove intimate deepfakes of minors and nonconsenting adults within 48 hours of a valid request, effective as of May 19, 2026 | Platforms should develop rapid response intake, validation, escalation, and response workflows to comply with requests |
| US (federal) | COPPA 2.0 (proposed) | Extends protections to minors under 17, bans behavioral advertising directed at children, and expands personal information (PI) definitions to include biometric and geolocation data | Signals tighter restrictions on monetization strategies and expanded compliance scope |
| US (states) | California Age-Appropriate Design Code (temporarily halted by legal challenges) | Implements strict data protection measures, including data protection impact assessment (DPIA) requirements, high default privacy settings, and minimized data collection | Highlights regulatory direction despite legal uncertainty; companies should prepare for similar requirements |
| US (states) | Oregon HB 2008 | Prohibits data controllers from processing personal data for targeted advertising or selling personal data when the controller has or disregards actual knowledge that a consumer is under 16 years old | Requires stricter controls on data monetization and profiling |
| US (states) | Vermont Age-Appropriate Design Code Act | Imposes design and data protections for children, including restrictions on profiling, mandated privacy-by-default settings, and limits on data collection and use, though the law now faces constitutional challenges from the tech industry over potential First Amendment violations | Reinforces trend toward design-based regulation, though litigation risk remains |
| US (states) | Nebraska Age-Appropriate Online Design Code Act | Requires covered online services to “exercise reasonable care” in safeguarding user data and in designing and operating their platforms to prevent harms such as compulsive use, severe emotional distress, identity theft, and significant psychological harm | Expands risk scope beyond privacy to psychological and behavioral harms |
| US (states) | Texas App Store Accountability Act (temporarily halted by legal challenges) | Requires mobile app stores and developers to verify users’ ages, obtain parental consent for each instance of an app download, purchase, or in-app transaction for all minors, and display age ratings and content descriptors for each app. One of the first state laws to regulate app stores directly, though it currently faces a First Amendment challenge. | Introduces platform-level accountability and operational complexity for app ecosystems |
| US (states) | South Carolina Age-Appropriate Design Code Act (HB 3431) | Imposes broad privacy and safety obligations on online services likely accessed by minors, including strict data minimization requirements, mandatory safety tools, and a new duty of care standard to prevent harm to children | Covered online services are required to reduce data collection, exercise a duty of care to prevent harm to minors, provide opt-out tools for addictive design features, and submit to annual third-party audits, with treble damages and personal liability for violations |
Enforcement is costly and accelerating. Regulatory scrutiny is no longer theoretical: enforcement actions in the United States, European Union, and United Kingdom have cited a range of failures in youth privacy and safety practices, and the resulting penalties have been substantial.
Age-verification and gating methods fall short. Implementing effective age-assurance mechanisms remains a significant operational and legal challenge. While regulators are intensifying enforcement and raising penalties, they have yet to provide clear, standardized guidance on what constitutes valid or verifiable compliance. Parental consent, widely treated as a core safeguard, illustrates this gap: there is no consensus on how platforms can reliably verify a parent’s identity or confirm that the parent meaningfully understands what they are consenting to. Broader age-assurance methods face the same ambiguity.
Without uniform standards, companies are left to operationalize compliance with limited guidance, increasing legal exposure and undermining confidence in how responsibility will ultimately be assessed.
Litigation is also reshaping the risk landscape. Beyond regulators, private rights of action are introducing a new and less predictable layer of exposure, reshaping how companies may be held responsible for youth online harms. As civil litigation expands, questions of liability are increasingly driven by courts and juries, creating uncertainty around accountability for platform design and expected duty of care while compounding the risk of financial penalties.
Industry self-regulation efforts gain momentum. In response to rising concerns over online child exploitation and mounting regulatory pressure, industry groups have developed leading practices and open-source tools to help companies safeguard children.
Whatever safeguards you choose to adopt, they should be flexible, mitigate potential harms to minors, and build trust with users, parents and guardians, and regulators. The scope of safeguards you’ll need will depend on where you operate, the features and services you provide, the likelihood that children are using those services, and the potential for harm.
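The scoping factors named above (likelihood that children use a service, and potential for harm) lend themselves to a simple risk matrix. The sketch below is a hypothetical illustration of that idea; the 1-to-5 scales, score cutoffs, and tier names are assumptions, not PwC's framework or any regulator's standard.

```python
# Illustrative risk-scoping sketch: combine the likelihood that minors access
# a feature with the severity of potential harm to prioritize safeguards.

def safeguard_tier(child_access_likelihood: int, harm_severity: int) -> str:
    """Both inputs on a 1 (low) to 5 (high) scale; returns a priority tier."""
    if not (1 <= child_access_likelihood <= 5 and 1 <= harm_severity <= 5):
        raise ValueError("scores must be between 1 and 5")
    score = child_access_likelihood * harm_severity
    if score >= 15:
        return "enhanced"   # e.g., age assurance, default-private settings, DPIA
    if score >= 6:
        return "standard"   # e.g., data minimization, content filtering
    return "baseline"       # e.g., monitoring and periodic review

safeguard_tier(5, 4)  # "enhanced": likely child access, high potential harm
safeguard_tier(2, 2)  # "baseline"
```

A matrix like this is only a triage aid; the tiers still need to map to concrete controls for each feature and jurisdiction.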
PwC’s Youth Online Privacy and Safety Framework provides structure and flexibility to help your organization balance protection, innovation, and accountability as expectations evolve.
The path forward is less about tracking every regulatory change and more about building a resilient, principles-driven operating model that can adapt as expectations evolve.