The Commission has just revealed the Digital Omnibus package, a blow to digital rights. It proposes to roll back cornerstone protections like the GDPR and the AI Act – concessions seemingly made to appease Big Tech and geopolitical pressure. This would mark a serious step back for the EU’s global leadership in upholding fundamental rights online. Here is our first take on AI and data protection. More thoughts soon.
AI Act
High-risk delay
The Digital Omnibus would pause the application of the rules for high-risk AI systems under Article 6(2) – systems used for biometric identification and categorisation, in judicial decision-making, or for scoring and ranking job applicants – for up to 15 months, until 2 December 2027. This part of Article 6 was supposed to apply from August 2026, so a delay of up to 15 months would significantly extend gaps in transparency and accountability.
This delay is the result of intense lobbying from Big Tech and governments, including the U.S., Denmark and Germany. Tech companies claim they need more time to meet the requirements, but by the time the requirements were supposed to kick in (2026), they would have known about them for more than two years. The delay would mean well over another year in which AI systems posing a high risk to fundamental rights are allowed to operate without necessary regulation.
No registration? No problem – and that’s the problem
A serious weakness of the AI Act is Article 6(3), which says that providers of some AI systems that appear to be high-risk can unilaterally decide that, in fact, their system doesn’t pose a significant risk to fundamental rights, so they’re no longer subject to high-risk requirements like fundamental rights impact assessments.
It’s a terrible loophole and it only made it into the law because CSOs fought tooth and nail for a bare minimum safeguard as a compromise: Article 6(4), which requires providers that make use of this loophole to register themselves in a publicly accessible database.
But even that was too much for Big Tech – and, it seems, for the Commission. The proposal would remove this registration requirement. Providers who decide on their own that their systems don’t threaten fundamental rights, and thereby exempt themselves from the high-risk safeguards, would no longer have to tell anyone at all.
It’s utterly untransparent and there’s no good reason for it. It’s a total surrender on a basic, minimum fundamental rights protection. And it certainly isn’t simplification. It’s a quiet dismantling of accountability and a dangerous step backward for rights-based governance.
Say grace for deepfakes
The AI Act does a poor job of handling deepfakes. They’re currently classified as “limited risk” despite being one of the primary vehicles of disinformation, and thus a real threat to democracy. Article 50(2) of the AI Act does require that most AI-generated content, including deepfakes, be clearly labelled as being generated by AI, and there are fines for non-compliance. This is supposed to be in force from 2 August 2026.
But under the Omnibus proposal, providers of AI systems that generate “synthetic audio, image, video or text content” and that have been placed on the market before 2 August 2026 (the previous date of application) will get a “grace period” until 2 February 2027 to take the necessary steps to comply with Article 50(2), meaning no fines for non-compliance until that date.
This “grace period” presses snooze on one of the AI Act’s primary safeguards against AI-driven manipulation, when the rule of law and trust in democracy need more, not less, protection.
GDPR
Personal data or not personal data? That is the question.
The Digital Omnibus re-defines what “personal data” means in Article 4 of the GDPR. Until now, personal data has meant any information that allows someone to be identified. With the suggested changes, the Commission adds a subjective element: whether a given actor can “reasonably” identify the person, shifting from an objective standard to a more flexible one.
If a company claims it cannot re-identify the person (even if others could), that information may no longer count as personal data and would lose the protection it currently has, because the GDPR would no longer apply to it. This opens the door for more data to fall outside EU data protection rules – a clear win for companies and anyone eager to exploit personal information.
AI to the rescue? No, to the abuse!
Large language models and AI applications such as ChatGPT, DeepSeek, Gemini and any others yet to come would be able to process personal data under the already flexible notion of “legitimate interest”, via the proposed Articles 9(2)(k) and 88c, stretching an already broad legal basis even further.
Under the suggested changes, what used to be an exception for low-risk situations (e.g. the local sports club you belong to has a legitimate interest in using your personal data so you can access the club) would allow companies to use our personal data (including intimate photos, private documents or chat history) without our permission. This is a huge win for companies thirsty for personal data to build new AI applications without needing to ask us for permission.
More resources
Liberties’ Submission on the “Simplification – Digital Package and Omnibus” Call for Evidence