Liberties welcomed the opportunity to respond to the European Commission’s call for evidence on the Digital Omnibus simplification package. As a civil society organisation dedicated to defending fundamental rights online, we used this chance to highlight the risks the simplification process poses to data protection, privacy, and the enforcement of the EU Artificial Intelligence Act (AI Act).
Data Protection and Privacy in the Spotlight
In our submission, we emphasised that the General Data Protection Regulation (GDPR) is the EU’s landmark law for protecting personal data and is widely recognised as the global gold standard for privacy. It gives people meaningful control over how their data is used and places strict obligations on organisations to handle personal information responsibly. Its key provisions include Article 5, which establishes core principles such as fairness, transparency, and accountability; Article 24, which requires organisations to demonstrate compliance; Article 25, which embeds privacy protections “by design and by default” into all products and services; and Article 35, which obliges companies to carry out Data Protection Impact Assessments (DPIAs) for high-risk activities. Together, these provisions form a strong and coherent framework that ensures people’s privacy is respected in practice, not just in theory.
Simplification must not be used as a pretext to weaken these safeguards. Proposals in the Digital Omnibus to limit record-keeping obligations only to very large companies with over 750 employees, or to ease rules on online tracking and cookie banners under Article 5(3) of the e-Privacy Directive, risk undermining the very rights the GDPR and related laws were designed to protect.
Simplifying obligations for businesses might sound helpful, but it would leave individuals less able to control their data and more exposed to surveillance and tracking. Liberties aligns with the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) in calling for careful clarifications that preserve the GDPR’s core protections while improving clarity for businesses, not the other way around.
Artificial Intelligence: Safeguarding Standards for High-Risk Systems
The AI Act remains the EU’s most ambitious effort to regulate artificial intelligence while safeguarding fundamental rights. In our response, we highlighted that the simplification process could put these protections at risk. Industry calls to delay, reduce, or simplify compliance requirements threaten to leave high-risk AI systems without proper oversight, potentially putting people’s privacy, safety, and rights in jeopardy.
Article 6, which governs the classification of high-risk AI systems, is particularly vulnerable. These systems affect essential areas of people’s lives and safety, and they require rigorous assessment. Any simplification that broadens exemptions, narrows the scope of high-risk classification, or weakens fundamental rights impact assessments (FRIAs), which are required under Article 27 of the Act, could allow harmful AI systems to operate unchecked. Likewise, reporting obligations for serious incidents under Article 73 must remain strong so authorities can act early and hold providers accountable.
Liberties also emphasised the importance of transparency obligations under Article 50. Deployers of AI systems that generate or manipulate content must clearly disclose that the content is artificial. Simplifying or delaying these rules would reduce public trust and undermine informed consent, which is central to protecting fundamental rights in the digital sphere.
Simplification Must Serve the People, Not Corporations
The Digital Omnibus may claim to make laws more “accessible,” but simplification should never come at the expense of rights. Liberties argued that EU legislation must remain robust: weakening privacy rules or AI safeguards in the name of competitiveness benefits businesses, not people. Strong protections for privacy, accountability, and transparency are not obstacles; they are essential for trust, fairness, and innovation in the digital economy.
Finally, we wish to express our disappointment with the flawed process surrounding this call for evidence. With less than two months remaining before the Digital Omnibus proposal is scheduled to be published, decision-makers have little time to properly consider the arguments and warnings raised by civil society organisations, independent experts, and other stakeholders who have flagged significant threats to fundamental rights.
Such a short consultation period strongly suggests that this exercise risks becoming little more than a box-ticking formality, rather than a meaningful opportunity for public engagement. We therefore urge the European Commission to take the concerns raised by Liberties and other civil society actors seriously, and to ensure that the simplification process strengthens, rather than weakens, the protection of fundamental rights and EU values, placing them above purely commercial interests.
Looking Ahead
Liberties called on the European Commission to prioritise fundamental rights over corporate convenience, resist deregulatory pressure, and ensure that simplification genuinely clarifies rules without eroding protections. The Digital Omnibus presents an opportunity to make EU digital laws more understandable for everyone, but only if people’s rights, privacy, and safety remain non-negotiable.
Liberties’ full submission, including detailed recommendations on the GDPR, AI transparency, high-risk classifications, and impact assessments, is available here.