More than 60 civil society organizations call on European lawmakers to ensure that the AI Act is fully coherent with rule of law standards, including transparency, accountability, and access to justice. The human rights coalition also urges the rejection of recent amendments to the AI Act that introduce a blanket national security exemption and dangerous loopholes in the classification of AI systems as high-risk.
In an open letter sent to EU legislators, over 60 civil society organizations argue that, as artificial intelligence is increasingly deployed by both the private and public sectors, the rule of law requires the EU to adopt robust safeguards within the AI Act to protect the very foundations the Union stands on. The misuse of AI systems, including their opaque and unaccountable deployment by public authorities, poses a serious threat to the rule of law, fundamental rights, and democracy.
The open letter was drafted and coordinated by the Civil Liberties Union for Europe (Liberties), the European Civic Forum (ECF), and the European Center for Not-for-Profit Law (ECNL), and was signed by over 60 organizations, including Amnesty International, Access Now, and EDRi.
Fundamental rights impact assessments are a must
Specifically, the CSOs demand that fundamental rights impact assessments (FRIAs) be “an obligation for all deployers of high-risk AI technologies” to ensure that their use upholds the principles of justice, accountability, and fairness. They call for rule of law standards to be added to the impact assessments, together with a structured framework to evaluate the potential impacts, biases, and unintended consequences of AI deployment. As states are responsible for the proper implementation of the rule of law framework, requiring public authorities, including law enforcement, to conduct FRIAs is not just a recommendation but a necessary safeguard to ensure that AI systems are designed and deployed in full accordance with the values of the EU and the EU Charter of Fundamental Rights.
No general exemption for national security or arbitrary loopholes for big tech
Signatories of the open letter also call on EU legislators to reject the European Council’s proposed amendment to Article 2, which aims to exclude AI systems developed or used for national security purposes from the scope of the Act. Furthermore, the campaigners urge lawmakers to return to the Commission’s original version of the AI Act, thereby removing newly added loopholes that would give AI developers the power to unilaterally exempt themselves from the safeguards set out in the AI Act (Article 6(2)).
“In a growing number of countries, criminal justice systems use AI for automated decision-making processes to limit the burden and the time pressure on judges. But to ensure judicial independence, the right to a fair trial, and transparency, the AI used in justice systems must be subject to proper oversight and in line with the rule of law”, said Dr. Orsolya Reich, senior advocacy officer for Tech & Rights at the Civil Liberties Union For Europe. “We also urge legislators to reject the national security exemption in light of the Pegasus spyware scandal, in which journalists, human rights activists, and politicians were surveilled by their own governments. The case demonstrates the clear need to ensure that systems developed or used for national security purposes are not exempted from the scope of the AI Act”, she added.
Read the open letter here.
Read our op-ed on Euronews.