
The Digital Omnibus: What It Means for AI Regulation

The new Digital Omnibus proposal puts hard-won transparency rules and vital safeguards in jeopardy.

by Eva Simon

The EU’s AI Act was considered a breakthrough because it set clear rules for the development, use, and monitoring of AI systems. It includes safeguards, requirements for a fundamental rights impact assessment, and transparency rules for high-risk AI systems, such as medical devices, border control, or AI use in the justice system.

But in November, the European Commission released a new Digital Omnibus proposal that risks undoing many of those hard-won protections. The Commission claims it will streamline the law and make the European market more competitive. We have no idea who they are trying to fool: in reality, the new proposal would delay essential safeguards, weaken transparency rules, and give Big Tech exactly what it has been lobbying for. At the same time, it would open the European market to less safe AI components, distorting competition.

Political Pressure and Big Tech Lobbying

The Trump administration’s AI Action Plan pushed for removing what it called “red tape” and directly pressured the EU to relax its digital regulations. At the same time, Big Tech companies such as Meta, Amazon, and Apple launched an aggressive lobbying campaign, spending millions of euros to argue that the AI Act and its strong safeguards “threaten innovation” and would be “too expensive” to comply with. Yet these companies are highly profitable, and the rules in the AI Act are anything but burdensome.

During the drafting process, the Commission even skipped the impact assessment, claiming the proposal has no impact on fundamental rights, even as it directly weakens safeguards such as the fundamental rights impact assessment. In reality, the Omnibus accommodates industry and US government demands at the expense of the fundamental rights of people in the EU.

Delaying High-Risk AI Protections: A Step Backwards

A central element of the Digital Omnibus is its plan to delay the rules governing high-risk AI systems. These aren’t casual apps or chatbots; these are AI systems used in hospitals, hiring, welfare distribution, border control, and even the courts.

Delaying these protections means these systems can continue operating without adequate safeguards. We already know of cases with terrible consequences, and delay leaves people exposed to errors, bias, and discrimination for far longer. The whole point of the Omnibus is supposedly to make laws easier to follow, yet this proposal muddles the deadlines, causing chaos in the market and leaving companies with more uncertainty, not less.

Weakening Transparency: A Big Win for Black-Box AI

One of the major problems with the proposal is that it eliminates a basic transparency requirement for high-risk systems. Under the current AI Act, if a company develops or provides a system that appears high-risk but decides it isn’t, it must publicly register that decision. This allows journalists, researchers, and civil society to investigate these companies. The Omnibus would delete this requirement entirely. It’s like removing the requirement for food companies to list their ingredients.

Deepfake Transparency Rolled Back

The proposal also delays the enforcement of rules requiring the labelling of AI-generated content, even though deepfakes are becoming more convincing every month. Imagine an AI-generated video of a political candidate “admitting” to a crime, released the weekend before an election with no label identifying it as AI-generated. It could change the outcome of the election.

Delaying enforcement creates a grace period where malicious actors can spread disinformation. By the time the legal consequences take effect, the damage will already be irreversible.

Conclusion

If the Digital Omnibus passes, it would significantly weaken the EU’s position as a global leader in tech regulation, including data protection and responsible AI. It would make it easier for companies to avoid scrutiny and erode protections for fundamental rights.

The AI Act was meant to protect people by creating safeguards against high-risk AI systems. The Commission now proposes to put corporate convenience above all else. Lawmakers still have time to reject this rollback, and they should: fair, transparent, and rights-respecting AI systems are the baseline for our future. They must not compromise our fundamental rights for the sake of business and political interests.

Read our in-depth policy analysis here

Further resources

Digital Omnibus on Data Protection: From Global Gold Standard to Corporate Giveaway

Digital Omnibus: Quick Analysis 

Liberties' response on the Digital Omnibus simplification package 


