EU’s Digital Omnibus offers AI regulatory relief, but questions remain

January 12, 2026

The issue

On November 19, 2025, the European Commission (EC) published its “Digital Omnibus,” a three-part proposal designed to amend and streamline EU laws governing data, cybersecurity, privacy, and artificial intelligence. The proposal responds to a 2024 report on EU competitiveness, which found these legal regimes too fragmented, complex, and burdensome for industry. The Digital Omnibus on AI, the focus of this article, calls for regulatory relief from key provisions of the EU’s AI Act.

Specifically, the AI amendments seek to reduce compliance burdens by extending timelines for key requirements, eliminating certain obligations, simplifying compliance for smaller enterprises, and more. But with this action comes some uncertainty. The Omnibus is a proposal only, and the changes it puts forth may evolve during the legislative process as they’re debated by the EU Council and Parliament.

Given the uncertainty, affected businesses should prepare by developing an agile, regulation-agnostic governance approach, one focused on core principles and standards harmonized across key regulatory frameworks.

The regulator’s take

The EU Digital Omnibus proposal responds to growing concerns that the EU’s digital laws have become too fragmented and complex, increasing administrative burdens for organizations and potentially stifling innovation. These concerns were raised in the Draghi report on European competitiveness, published in September 2024 (Part A, Part B). The Omnibus consists of three components: the Digital Omnibus (amending data, privacy, and cyber rules such as the GDPR and the Data Act), the Digital Omnibus on AI (amending the AI Act), and a proposal to establish European Business Wallets.

As for the AI-related proposals, key provisions include:

  • Extended timelines for high-risk AI systems (HRAIS) and generative AI (GenAI) watermarking
  • Reduced barriers to processing personal and sensitive data for model development and training
  • Simplified rules for smaller companies
  • Authorization of a regulatory sandbox and expanded testing
  • Relaxed requirements for AI literacy, high-risk system registration, and post-market monitoring
  • Streamlined oversight and coordination across regulatory authorities

Below are the details of what’s being proposed and what it might mean for organizations subject to the EU AI Act.

Key AI proposals and their potential impact

HRAIS timelines

Proposed changes

Compliance deadlines for HRAIS requirements would depend on a decision from the EC regarding availability of compliance support tools, i.e., technical standards and guidelines (currently being drafted by standards bodies).

For HRAIS in high-risk contexts such as critical infrastructure, education, and law enforcement, obligations would apply six months after the EC decision on readiness of standards. If standards are delayed, the latest deadline (as a backstop) is December 2, 2027 (or 16 months later than the original deadline of August 2, 2026).

For HRAIS intended to serve as safety components under the product legislation listed in Annex I, obligations would apply 12 months after the EC decision on readiness of standards. If standards are delayed, the latest deadline (as a backstop) is August 2, 2028 (or 12 months later than the original deadline of August 2, 2027).

Potential impact

The compliance timeline would be delayed and would hinge on the availability of technical standards.

The EC has implicitly injected pressure into the legislative process, given that HRAIS requirements are set to take effect in August 2026. This introduces potential legal uncertainty: if proposed changes aren’t adopted by then, it’s unclear whether HRAIS obligations would technically become enforceable before the timeline extension can be implemented.

GenAI watermarking

Proposed changes

Proposes a six-month transitional period for providers of GenAI systems subject to watermarking requirements. Systems on the market before August 2, 2026, would have an additional six months (until February 2, 2027) to implement the technical solutions needed to mark GenAI content.
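
The AI Act doesn’t prescribe a particular marking technique. As an illustration only, here is a minimal sketch of one common approach, embedding machine-readable provenance metadata in a PNG via Pillow; production systems typically rely on robust watermarks or provenance standards such as C2PA, and this is not a compliance recommendation:

```python
# Illustrative only: one possible machine-readable marking approach,
# embedding provenance metadata in a PNG. Not a compliance recommendation.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_ai_generated(src_path: str, dst_path: str, model_id: str) -> None:
    """Embed a simple AI-provenance tag in a PNG's metadata."""
    image = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")       # machine-readable flag
    meta.add_text("generator_model", model_id)  # which model produced it
    image.save(dst_path, pnginfo=meta)

def is_marked(path: str) -> bool:
    """Check whether a PNG carries the provenance tag."""
    return Image.open(path).text.get("ai_generated") == "true"
```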

Processing of personal data

Proposed changes

Proposes that the processing of personal data could be done on the basis of “legitimate interests” in the “context of the development and operation of an AI system,” unless national law requires consent.

“Legitimate interests” is a legal basis for processing personal data outlined in the GDPR and requires a documented balancing test between the interests of the data controller and the data subject’s interests, rights, and freedoms.

Data subjects would have the “unconditional right to object” to their data being processed.

Potential impact

The proposal’s language is broad, covering both “development and operation” of AI systems. This creates a formal basis for processing personal data (e.g., when training large-scale AI models) that wasn’t previously codified.

Regulators would likely scrutinize the use of this data closely. Maintaining robust and defensible documentation showing a legitimate interest and outlining technical and organizational safeguards is therefore critical.
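
What “robust and defensible” looks like will vary by organization. As a sketch only, a legitimate-interests balancing test might be captured as a structured record along these lines (the field names are illustrative, not drawn from the GDPR or the proposal):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LegitimateInterestsAssessment:
    """Illustrative record of a GDPR-style balancing test.

    Field names are hypothetical; adapt to your DPO's template.
    """
    processing_purpose: str          # e.g., "fine-tuning a support chatbot"
    controller_interest: str         # why the processing is necessary
    data_categories: list[str]       # what personal data is involved
    necessity_rationale: str         # why less intrusive means don't suffice
    impact_on_data_subjects: str     # risks to rights and freedoms
    safeguards: list[str] = field(default_factory=list)  # e.g., pseudonymization
    objection_mechanism: str = ""    # how the right to object is honored
    approved_by: str = ""            # accountable sign-off
    review_date: date | None = None  # when the balance is reassessed
```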

Open questions:

  • How would legitimate interest be applied consistently across a range of scenarios involving AI system development and operation? Is there a scenario where processing of personal data isn’t permitted?
  • While data subjects could object, how would this work in practice when training AI models, especially when training involves large swaths of web-scraped data?
  • Does legitimate interest cover web scraping or only data collected directly from users?

Definition of “personal data”

Proposed changes

The definition of “personal data” would be amended in an attempt to codify a recent EU Court of Justice ruling.

Under this definition, data isn’t considered personal if the entity using it lacks “means reasonably likely to be used” to identify the person associated with the information.

Potential impact

This would introduce a more subjective, context-specific definition of personal data based on the “means” the data controller can use to identify the person tied to the data.

Pseudonymized data could be used for AI development and training and wouldn’t be subject to GDPR if the controller can’t reasonably re-identify the subject.

Open question: How would entities prove they don’t have “reasonable means” to re-identify the subject?
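
One common technical safeguard in this area is keyed pseudonymization, with the key held separately from the dataset. A minimal sketch follows, offered as an illustration rather than a statement of what the amended definition would require:

```python
import hashlib
import hmac

# Minimal sketch of keyed pseudonymization. Whether this takes data
# outside the amended "personal data" definition depends on who holds
# the key and what "means reasonably likely to be used" covers; this
# is an illustration, not legal guidance.
def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed, non-reversible token.

    Only a party holding secret_key can recompute (and thus link) tokens,
    so the key should be stored separately from the pseudonymized dataset.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Example: the training pipeline sees only tokens, never raw emails.
key = b"placeholder-key"  # in practice, a managed secret in a KMS
token = pseudonymize("jane.doe@example.com", key)
```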

Processing of sensitive data

Proposed changes

Processing special categories of data for developing or operating AI systems would be allowed provided: (a) appropriate measures are used to minimize collection and processing; (b) if special category data is identified in datasets used for training, testing, or validation, or “in the AI system or AI model,” the data is deleted; and (c) if deletion requires “disproportionate effort,” the controller would have to prevent the data from being disclosed or used to produce outputs.

Specifies a new legal basis for processing special categories of personal data for addressing bias in AI, provided: (a) bias detection and correction can’t be done with synthetic or anonymized data; (b) special category data are subject to limitations on re-use and privacy measures (e.g., pseudonymization); (c) data are protected by safeguards; (d) data aren’t transferred or accessed by other parties; (e) data are deleted once bias has been corrected; and (f) robust documentation of why processing was necessary is maintained.

Potential impact

Processing sensitive or special category data would be permitted only under certain conditions. Regulators would likely scrutinize this area closely, particularly for HRAIS. Maintaining robust and defensible documentation of legal rationales and of technical and organizational safeguards would be essential.

Open questions:

  • What does removal of special category data mean in the context of an AI model? If the model is already trained, simply removing the data from the training dataset may prevent the data from being processed in future model development, but that doesn’t impact the current model and its outputs.
  • The controller would have to prevent special category data from being used to produce outputs if removing this data from the training set requires “disproportionate effort.” Does this mean “untraining” the model since the data is already encoded into the parameters and weights? What if untraining or retraining a model requires “disproportionate effort”?
  • How does one prove that bias detection or correction can’t be done with synthetic or anonymized data?
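
On the minimization side, one concrete control is screening training records for special category signals before training. A toy sketch follows; the keyword patterns are placeholders, and production pipelines would use trained classifiers, human review, and audit logging instead:

```python
import re

# Toy sketch of pre-training screening for special category data.
# The keyword patterns are placeholders, not a real detection method.
SPECIAL_CATEGORY_PATTERNS = [
    re.compile(r"\breligio(n|us belief)s?\b", re.IGNORECASE),
    re.compile(r"\btrade union\b", re.IGNORECASE),
    re.compile(r"\b(diagnos(is|ed)|health condition)\b", re.IGNORECASE),
]

def screen_record(text: str) -> bool:
    """Return True if the record appears to contain special category data."""
    return any(p.search(text) for p in SPECIAL_CATEGORY_PATTERNS)

def split_training_set(records: list[str]) -> tuple[list[str], list[str]]:
    """Split records into (kept, quarantined-for-human-review)."""
    kept: list[str] = []
    flagged: list[str] = []
    for record in records:
        (flagged if screen_record(record) else kept).append(record)
    return kept, flagged
```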

Smaller enterprises

Proposed changes

Adds a new entity category, “small mid-cap enterprises” (SMCs), alongside “micro, small and medium-sized enterprises” (SMEs) in the EU AI Act. These designations are based on an organization’s headcount, revenue, and balance sheet.

Simplified technical documentation requirements would apply to SMC/SME providers of HRAIS.

Proportionate quality management systems (QMS) obligations would be calibrated to the organization’s size, with adjustments for SMEs and SMCs while preserving the required level of protection. The simplified QMS option would expand to all SMEs.

National authorities would have to provide guidance and advice to SMCs, not just to SMEs.

Potential impact

Proportionality tweaks and simplified QMS pathways would let SMEs and SMCs stand up AI Act-compliant programs without having to replicate the governance structures of large multinationals.

Open questions:

  • If a fast-growing company crosses headcount, revenue, or balance sheet thresholds for SMEs and SMCs, how does that impact its obligations under the Act?
  • How would simplified rules for SMEs and SMCs apply in an M&A scenario (e.g., where an SME/SMC is acquired by a larger organization)?
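
For teams tracking these thresholds, the classification logic itself is simple to encode. In the sketch below, the SME bands follow Commission Recommendation 2003/361/EC, while the SMC band reflects the Commission’s proposed small mid-cap definition; both are assumptions to verify against the final adopted text:

```python
# Illustrative size classification. SME bands follow Commission
# Recommendation 2003/361/EC; the SMC band reflects the Commission's
# proposed small mid-cap definition (roughly <750 staff and turnover
# <= EUR 150M or balance sheet <= EUR 129M). Verify all thresholds
# against the final adopted text before relying on them.
def classify_enterprise(headcount: int, turnover_m_eur: float, balance_m_eur: float) -> str:
    if headcount < 10 and (turnover_m_eur <= 2 or balance_m_eur <= 2):
        return "micro"
    if headcount < 50 and (turnover_m_eur <= 10 or balance_m_eur <= 10):
        return "small"
    if headcount < 250 and (turnover_m_eur <= 50 or balance_m_eur <= 43):
        return "medium"
    if headcount < 750 and (turnover_m_eur <= 150 or balance_m_eur <= 129):
        return "small mid-cap"  # proposed SMC category; thresholds assumed
    return "large"

# Example: a 400-person company with EUR 90M turnover and EUR 70M balance
# sheet would fall into the proposed SMC band.
print(classify_enterprise(400, 90.0, 70.0))
```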

Regulatory sandbox and testing

Proposed changes

The AI Office would have authority to create a centralized EU-level regulatory sandbox for AI systems, and national regulatory sandboxes would be designed to facilitate cooperation between member states, enhancing cross-border collaboration.

The scope for testing HRAIS outside of sandboxes would expand.

A new legal framework would enable member states and the EC to form voluntary agreements for real-world testing of HRAIS.

Potential impact

With the expanded scope of sandboxes and real-world testing, AI systems could be tested under supervision before launch, enabling compliance adjustments without immediate risk of penalties.

AI literacy

Proposed changes

The AI Act requires providers and deployers to confirm their workforces are AI literate, but the Omnibus would change this mandate to a recommendation. AI literacy obligations would shift to the EC and member states, who in turn should “encourage” providers and deployers to establish a sufficient level of AI literacy among staff and others responsible for AI systems.

Potential impact

Even if no longer mandated, AI literacy should be treated as a baseline element of AI governance and an operational necessity (e.g., sufficient literacy to provide necessary human oversight for HRAIS).

This could also result in a patchwork of AI literacy standards or guidance, as member states may differ in what they consider “sufficient” AI literacy.

AI system registration

Proposed changes

Providers would no longer have to register AI systems in the EU database if they have an assessment demonstrating the system isn’t high-risk because it performs only narrow or procedural tasks.

Potential impact

Compliance programs would need clear criteria, standardized assessment templates, and sign-offs to justify a decision not to register.

Organizations should be prepared to explain why a system isn’t high risk, how it was evaluated, and under what conditions the classification would be revisited (e.g., scope creep, new features).
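
As a sketch of what a standardized, auditable non-registration assessment might capture (the structure and field names are hypothetical):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class NonRegistrationAssessment:
    """Illustrative audit record justifying a decision not to register.

    Structure and field names are hypothetical; align with counsel
    and the final adopted text.
    """
    system_name: str
    intended_purpose: str
    rationale: str                # why the tasks are narrow or procedural
    criteria_applied: list[str]   # internal classification criteria used
    reviewed_by: str              # accountable sign-off
    assessment_date: date
    revisit_triggers: list[str] = field(default_factory=list)
    # e.g., ["scope creep", "new features", "new user populations"]
```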

Post-market monitoring

Proposed changes

The EC would issue guidance on post-market monitoring plans rather than a prescriptive template.

Potential impact

HRAIS providers could tailor monitoring to their organization, with flexibility to design plans that fit their risk profile, tech stack, and sector.
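
In the absence of a prescriptive template, a tailored plan still needs concrete, checkable parameters. A minimal sketch of how one might be encoded; the system, metrics, thresholds, and contacts are invented for illustration:

```python
# Minimal sketch of a post-market monitoring configuration for a
# hypothetical HRAIS. All names and thresholds are invented for
# illustration; a real plan would follow the EC's guidance.
MONITORING_PLAN = {
    "system": "loan-eligibility-scorer",  # hypothetical HRAIS
    "metrics": {
        "approval_rate_drift": {"threshold": 0.05, "window_days": 30},
        "complaint_rate_per_10k": {"threshold": 3.0, "window_days": 7},
        "human_override_rate": {"threshold": 0.20, "window_days": 30},
    },
    "escalation": {
        "owner": "ai-governance@company.example",
        "serious_incident_reporting": True,  # AI Act incident duties still apply
    },
    "review_cadence_days": 90,
}

def breaches(plan: dict, observed: dict[str, float]) -> list[str]:
    """Return metrics whose observed value exceeds the plan's threshold."""
    return [m for m, cfg in plan["metrics"].items()
            if observed.get(m, 0.0) > cfg["threshold"]]
```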

Regulatory oversight

Proposed changes

Oversight would be streamlined across regulatory authorities. The AI Office would gain exclusive supervisory and enforcement powers over general-purpose AI (GPAI) models, AI systems built on GPAIs, and AI embedded in very large online platforms (VLOPs) or very large online search engines (VLOSEs).

Authorities responsible for fundamental rights would have stronger powers and closer cooperation with market surveillance authorities, including access to relevant information.

Conformity assessment bodies could use a single application and assessment (“one stop shop”).

Potential impact

More centralized oversight from the AI Office would likely mean more focused scrutiny of large AI developers, but it would also give industry a single, streamlined point of engagement with the EU.

Documentation would need to be audit-ready from multiple regulatory perspectives, given closer coordination between regulators. Because authorities responsible for fundamental rights could access information via market surveillance authorities, providers would need more defensible documentation on how AI systems affect those rights.

Your next move

The Omnibus’ AI proposals may change in the coming months, creating a temptation to delay implementation until the dust settles. But don’t pump the brakes on compliance. Here’s how to prepare, despite the uncertainty.

  1. Build a regulation-agnostic governance framework. A flexible framework focusing on core principles and standards harmonized across key regulatory frameworks can adapt to upcoming Omnibus changes. Design and implement a governance architecture that can absorb further adjustments without requiring a rebuild.
  2. Operationalize AI governance at scale. EU requirements on AI and privacy—wherever they land in the coming months and beyond—aren’t going away. The Omnibus presents an opportunity to integrate these workflows rather than adding another set of processes. Key questions to consider include:
    • Are our privacy, AI, model-risk, and security teams working from a single, integrated inventory of AI uses and data uses?
    • Do we have end-to-end workflows that connect use-case approval, data protection impact assessments, AI risk assessments, testing, monitoring, and incident management? Or are these functions still in silos?
  3. Strengthen documentation and controls, particularly for AI training data. Some elements of the proposal reduce administrative friction but raise the bar on justification and risk mitigation. It’s important to prepare to meet this standard. Key questions to consider include:
    • Have we developed and documented data lineage linking datasets for training, testing, or validation to specific versions of AI models and systems (see the lineage sketch after this list)?
    • Have we implemented the appropriate privacy controls (e.g., consent) and privacy-preserving techniques (e.g., pseudonymization) across datasets used for AI models and systems?
    • Do we have robust, repeatable documentation to show “legitimate interest” (e.g., explaining why AI model training or deployment is necessary, how individuals’ rights were weighed in the balancing test, and what safeguards are in place)?
    • Are our technical and organizational safeguards (e.g., role-based access, logging, human review, opt-outs) consistently applied to special category data?
    • Have we updated policies and privacy notices to explain how data is used for AI development and operations?
  4. Monitor regulatory developments closely. Track emerging HRAIS-related standards and transparency codes relevant to your sector and AI use cases so you can map them into your framework early. These and other regulatory developments may continue to drive your organization’s need to maintain the above compliance capabilities, for traceability and defensibility.
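
As referenced in item 3 above, here is a minimal sketch of dataset-to-model lineage. The identifiers and fields are illustrative; in practice this lives in an ML metadata store or model registry:

```python
from dataclasses import dataclass
from datetime import date

# Illustrative lineage records linking dataset versions to model versions.
# Fields are invented for illustration.
@dataclass(frozen=True)
class DatasetVersion:
    dataset_id: str   # e.g., "support-tickets"
    version: str      # e.g., "2026-01-05"
    purpose: str      # "training" | "testing" | "validation"
    legal_basis: str  # e.g., "legitimate interests (documented)"

@dataclass(frozen=True)
class ModelVersion:
    model_id: str
    version: str
    trained_on: tuple[DatasetVersion, ...]
    released: date

def datasets_behind(model: ModelVersion, purpose: str) -> list[str]:
    """Answer the auditor's question: which datasets fed this model?"""
    return [f"{d.dataset_id}@{d.version}" for d in model.trained_on
            if d.purpose == purpose]
```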
