By Rohan Sen, Sara Putnam, Jake Meek, and Phillip Gao
On November 19, 2025, the European Commission (EC) published its “Digital Omnibus,” a three-part proposal designed to amend and streamline EU laws governing data, cybersecurity, privacy, and artificial intelligence. The proposal responds to a 2024 report on EU competitiveness, which found these legal regimes were too fragmented, complex, and burdensome for industry. The Digital Omnibus on AI, the focus of this article, calls for regulatory relief from key provisions of the EU’s AI Act.
Specifically, the AI amendments seek to reduce compliance burdens by extending timelines for key requirements, eliminating certain obligations, simplifying compliance for smaller enterprises, and more. But with this action comes some uncertainty. The Omnibus is a proposal only, and the changes it puts forth may evolve during the legislative process as they’re debated by the EU Council and Parliament.
Given the uncertainty, affected businesses should prepare by developing an agile, regulation-agnostic governance approach, one focused on core principles and standards harmonized across key regulatory frameworks.
The EU Digital Omnibus proposal responds to growing concerns that the EU’s digital laws have become too fragmented and complex, increasing administrative burdens for organizations and potentially stifling innovation. These concerns were raised in the Draghi report on European competitiveness, published in September 2024 (Part A, Part B). The Omnibus consists of three components: the Digital Omnibus (amending data, privacy, and cyber rules such as the GDPR and the Data Act), the Digital Omnibus on AI (amending the AI Act), and a proposal to establish European Business Wallets.
As for the AI-related proposals, the table below details the key provisions and what they might mean for organizations subject to the EU AI Act.
| Topic | Proposed changes | Potential impact |
|---|---|---|
| HRAIS timelines | Compliance deadlines for HRAIS requirements would depend on a decision from the EC regarding the availability of compliance support tools, i.e., technical standards and guidelines (currently being drafted by standards bodies). For HRAIS in high-risk contexts such as critical infrastructure, education, and law enforcement, obligations would apply six months after the EC decision on readiness of standards; if standards are delayed, the backstop deadline is December 2, 2027 (16 months later than the original deadline of August 2, 2026). For HRAIS intended as safety components covered by regulations in Annex I, obligations would apply 12 months after the EC decision; if standards are delayed, the backstop deadline is August 2, 2028 (12 months later than the original deadline of August 2, 2027). | The compliance timeline would be delayed and would now depend on the availability of technical standards (see the timeline sketch after the table). The EC has implicitly injected pressure into the legislative process, given that HRAIS requirements are set to take effect in August 2026. This introduces potential legal uncertainty: if the proposed changes aren’t adopted by then, it’s unclear whether HRAIS obligations would technically become enforceable before the timeline extension can be implemented. |
| GenAI watermarking | Proposes a six-month transitional period for providers of GenAI systems subject to watermarking requirements. | Systems on the market before August 2, 2026, would have an additional six months (until February 2, 2027) to implement the technical solutions needed to mark GenAI content (see the marking sketch after the table). |
| Processing of personal data | Proposes that the processing of personal data could be done on the basis of “legitimate interests” in the “context of the development and operation of an AI system,” unless national law requires consent. “Legitimate interests” is a legal basis for processing personal data outlined in the GDPR and requires a documented balancing test between the interests of the data controller and the data subject’s interests, rights, and freedoms. Data subjects would have the “unconditional right to object” to their data being processed. | The proposal’s language is broad, covering both “development and operation” of AI systems. This creates a formal basis for processing personal data (e.g., when training large-scale AI models) that wasn’t previously codified. Regulators will likely scrutinize the use of this data closely, so maintaining robust and defensible documentation showing a legitimate interest and outlining technical and organizational safeguards is critical. Open questions remain. |
| Definition of “personal data” | The definition of “personal data” would be amended in an attempt to codify a recent EU Court of Justice ruling. Under this definition, data isn’t considered personal if the entity using it lacks “means reasonably likely to be used” to identify the person associated with the information. | This would introduce a more subjective, context-specific definition of personal data based on the “means” the data controller can use to identify the person tied to the data. Pseudonymized data could be used for AI development and training and wouldn’t be subject to GDPR if the controller can’t reasonably re-identify the subject (see the pseudonymization sketch after the table). Open question: how will entities prove they don’t have “reasonable means” to re-identify the subject? |
| Processing of sensitive data | Processing special categories of data for developing or operating AI systems would be allowed provided: (a) appropriate measures are used to minimize collection and processing; (b) if special category data is identified in datasets used for training, testing, or validation, or “in the AI system or AI model,” the data is deleted; and (c) if deletion requires “disproportionate effort,” the controller would have to prevent the data from being disclosed or used to produce outputs. The proposal also specifies a new legal basis for processing special categories of personal data to address bias in AI, provided: (a) bias detection and correction can’t be done with synthetic or anonymized data; (b) special category data are subject to limitations on re-use and privacy measures (e.g., pseudonymization); (c) data are protected by safeguards; (d) data aren’t transferred or accessed by other parties; (e) data are deleted once bias has been corrected; and (f) robust documentation of why processing was necessary is maintained. | Processing sensitive or special category data could happen only under certain conditions. Regulators would likely scrutinize this area closely, particularly for HRAIS. Maintaining robust and defensible documentation of legal rationales and of technical and organizational safeguards would be essential. Open questions remain. |
| Smaller enterprises | Adds a new entity category, “small mid-cap enterprises” (SMCs), alongside “micro, small and medium-sized enterprises” (SMEs) in the EU AI Act. These designations are based on an organization’s headcount, revenue, and balance sheet. Simplified technical documentation requirements would apply to SMC/SME providers of HRAIS. Quality management system (QMS) obligations would be calibrated to the organization’s size, with adjustments for SMEs and SMCs while preserving the required level of protection, and the simplified QMS option would expand to all SMEs. National authorities would have to provide guidance and advice to SMCs, not just to SMEs. | Proportionality tweaks and simplified QMS pathways would let SMEs and SMCs stand up AI Act-compliant programs without having to replicate the governance structures of large multinationals. Open questions remain. |
| Regulatory sandbox and testing | The AI Office would have authority to create a centralized EU-level regulatory sandbox for AI systems, and national regulatory sandboxes would be designed to facilitate cooperation between member states, enhancing cross-border collaboration. The scope for testing HRAIS outside of sandboxes would expand. A new legal framework would enable member states and the EC to form voluntary agreements for real-world testing of HRAIS. | With the expanded scope of sandboxes and real-world testing, AI systems could be tested under supervision before launch, enabling compliance adjustments without immediate risk of penalties. |
| AI literacy | The AI Act requires providers and deployers to confirm their workforces are AI literate; the Omnibus would change this mandate to a recommendation. AI literacy obligations would shift to the EC and member states, which in turn would “encourage” providers and deployers to establish a sufficient level of AI literacy among staff and others responsible for AI systems. | Even if no longer mandated, AI literacy should be treated as a baseline element of AI governance and an operational necessity (e.g., sufficient literacy to provide the human oversight HRAIS require). The change could also result in a patchwork of AI literacy standards or guidance, as member states may differ in what they consider “sufficient” AI literacy. |
| AI system registration | Providers would no longer have to register AI systems in the EU database if they have an assessment demonstrating the system isn’t high-risk due to its narrow or procedural tasks. | Compliance programs would need clear criteria, standardized assessment templates, and sign-offs to justify a decision not to register (see the assessment-record sketch after the table). Organizations should be prepared to explain why a system isn’t high-risk, how it was evaluated, and under what conditions the classification would be revisited (e.g., scope creep, new features). |
| Post-market monitoring | The EC would provide guidance on a post-market monitoring plan, rather than a prescriptive template. | This would allow HRAIS providers to tailor post-market monitoring to their organization, with the flexibility to design plans that fit their risk profile, tech stack, and sector. |
| Regulatory oversight | Oversight would be streamlined across regulatory authorities. The AI Office would gain exclusive supervisory and enforcement powers over general-purpose AI (GPAI) models, AI systems built on GPAIs, and AI embedded in very large online platforms (VLOPs) or very large online search engines (VLOSEs). Authorities responsible for fundamental rights would have stronger powers and closer cooperation with market surveillance authorities, including access to relevant information. Conformity assessment bodies could use a single application and assessment (“one-stop shop”). | More centralized oversight from the AI Office would likely focus scrutiny on large AI developers, but it would also give industry a single point of contact for streamlined engagement with the EU. Closer coordination between regulators means documentation would need to be audit-ready from multiple regulatory perspectives. And because authorities protecting fundamental rights could access information via market surveillance authorities, providers would need more defensible documentation on how AI systems affect those rights. |
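
To make the proposed HRAIS timeline mechanics concrete, below is a minimal sketch of the deadline logic from the first table row, in Python. It assumes the backstop dates cap the delay, i.e., obligations would apply six or 12 months after the EC readiness decision but no later than the backstop; that reading follows the table above and is illustration, not legal advice.

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamping to the month's last day."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def hrais_effective_date(ec_decision: date, grace_months: int, backstop: date) -> date:
    """Obligations apply `grace_months` after the EC readiness decision,
    but no later than the backstop (the reading used in this article)."""
    return min(add_months(ec_decision, grace_months), backstop)

# Annex III-type HRAIS: six-month grace period, backstop December 2, 2027.
print(hrais_effective_date(date(2026, 9, 1), 6, date(2027, 12, 2)))   # 2027-03-01
# Annex I safety components: 12-month grace period, backstop August 2, 2028.
print(hrais_effective_date(date(2027, 10, 1), 12, date(2028, 8, 2)))  # 2028-08-02 (backstop wins)
```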
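On GenAI marking, the technical solutions the transitional period buys time for are machine-readable markers on generated content. The snippet below is purely illustrative, a simple metadata tag rather than a robust or standards-compliant watermark (production systems typically rely on provenance standards such as C2PA), and it assumes the Pillow imaging library; the field names are hypothetical.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Embed a machine-readable 'AI-generated' tag in a PNG's metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")   # hypothetical field names,
    metadata.add_text("generator", generator)   # not an AI Act-defined schema
    image.save(dst_path, pnginfo=metadata)

def is_tagged_ai_generated(path: str) -> bool:
    """Check whether a PNG carries the tag written above."""
    return Image.open(path).text.get("ai_generated") == "true"
```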
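On the amended definition of personal data, whether pseudonymized data escapes the GDPR would turn on the controller’s “means reasonably likely to be used” to re-identify a person. Below is a minimal keyed-pseudonymization sketch using only Python’s standard library: if the controller never holds (or verifiably destroys) the key, it arguably lacks those means, though whether that satisfies the proposed test is exactly the open question flagged above.

```python
import hashlib
import hmac
import secrets

def pseudonymize(identifier: str, key: bytes) -> str:
    """Replace a direct identifier with a keyed HMAC-SHA256 pseudonym.
    Without the key, reversing the mapping requires guessing identifiers."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# The training pipeline sees only pseudonyms; the key is held (or destroyed)
# separately, which is what the "means reasonably likely" test would probe.
key = secrets.token_bytes(32)
record = {"user_id": "alice@example.com", "prompt": "…"}
record["user_id"] = pseudonymize(record["user_id"], key)
```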
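Finally, on registration: skipping the EU database would rest on a documented, repeatable not-high-risk assessment. The sketch below is a hypothetical record structure for such an assessment; every field name is illustrative and none comes from the Act itself.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskClassificationRecord:
    """Hypothetical template for documenting why an AI system was
    classified as not high-risk (all field names are illustrative)."""
    system_name: str
    assessed_on: date
    rationale: str                  # why the task is narrow or procedural
    criteria_version: str           # which internal checklist was applied
    approver: str                   # sign-off, per the guidance above
    revisit_triggers: list[str] = field(default_factory=list)

record = RiskClassificationRecord(
    system_name="invoice-field-extractor",
    assessed_on=date(2026, 1, 15),
    rationale="Performs a narrow, procedural extraction task with human review.",
    criteria_version="internal-checklist-v2",
    approver="AI governance lead",
    revisit_triggers=["scope creep", "new features", "new user populations"],
)
```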
The Omnibus’ AI proposals may change in the coming months, creating a temptation to delay implementation until the dust settles. But don’t pump the brakes on compliance: despite the uncertainty, organizations can prepare now by building the agile, regulation-agnostic governance approach described above, grounded in core principles and standards harmonized across key regulatory frameworks.