Liberties welcomes the opportunity to contribute to the Commission’s Targeted stakeholder consultation on the classification of AI systems as high-risk. As a civil society organisation committed to ensuring that artificial intelligence systems fully comply with fundamental rights, we believe the Commission’s forthcoming guidelines must resolve existing ambiguities, for example by affirming that the high-risk classification does not undermine the prohibition of remote biometric identification (RBI) systems, emotion recognition technologies, and biometric categorisation tools as set forth in Article 5.
Hungary’s use of surveillance technology in focus
In our submission to the consultation, we highlighted our ongoing case against the Hungarian authorities over their use of RBI systems against participants in public demonstrations, such as Pride. We believe that the manner in which these systems are, or may be, used contravenes the EU AI Act and must be assessed against the Act’s legal requirements for high-risk AI use.
By definition, any real-time RBI system used by police that is not prohibited would still be high-risk, and would also have to comply with the additional controls required for police uses of RBI. We reiterate that these use cases still entail extremely severe limitations on the fundamental rights of everyone present in public spaces. The exceptions to the in-principle prohibition therefore need to meet an extremely high threshold. Even in a situation such as an imminent, genuine and foreseeable threat of a terror attack, there must not be any permanent RBI infrastructure. Instead, the infrastructure must be temporary and clearly marked, and must meet all the criteria for authorisation, safeguards, limitations on geographic scope and the like, in order to satisfy the requirements of strict necessity and proportionality. Any use not meeting these strict criteria would remain prohibited.
The Commission also sought examples of biometric-related AI systems where the distinction from prohibited AI systems needs further clarification. The Hungarian FRT Act introduces a broad legal basis for the use of RBI without specifying any technical or procedural safeguards for police operations. This effectively permits a system of remote biometric identification that is prohibited under Article 5(1)(h) of the EU AI Act, which bans the use of real-time RBI systems in publicly accessible spaces for law enforcement purposes unless certain strict exceptions apply and such use is authorised under national law with appropriate safeguards.
Although courts have not yet interpreted the AI Act, there are compelling reasons to conclude that Hungary’s FRT law violates Article 5(1)(h). Assuming it is undisputed that the system constitutes RBI in public spaces, two central questions must be addressed: whether the system operates in ‘real time’, and whether its use undermines the purpose of the prohibition. On the first question, Article 3(42) of the AI Act defines a real-time RBI system as one in which biometric data is captured, compared and matched without significant delay, including cases involving ‘limited short delays’. Recital 17 confirms that this includes ‘near-live’ material, while the AI Act Prohibition Guidelines (para. 310) clarify that a use is real-time unless the delay is so significant that the individual has likely already left the scene. Section 12/A of Hungary’s FRT Act enables real-time identification by linking newly recorded material to the HIFS database, allowing police to identify individuals, such as protesters, within moments.
This clearly falls under the definition of real-time RBI and is therefore covered by Article 5(1)(h). On the second question, the Hungarian law undermines the purpose of the AI Act prohibition, as stated in Recital 32, which highlights the chilling effect of such surveillance on public participation and freedom of assembly. A system that allows authorities to identify people at demonstrations in real time significantly deters individuals from exercising their fundamental rights.
Guidelines must clarify high-risk classification
The guidelines must affirm that the high-risk classification does not undermine the prohibition of RBI systems, emotion recognition technologies, and biometric categorisation tools as set forth in Article 5. This clarification is essential to uphold the rights enshrined in the Charter of Fundamental Rights of the European Union. EU data protection authorities have consistently underscored that facial recognition in law enforcement contexts must fully comply with the Law Enforcement Directive (LED). This includes the need for a clear and explicit legal basis, a demonstration of necessity and proportionality, strict data minimisation, independent oversight, and the prior completion of data protection impact assessments (DPIAs). The guidelines must emphasise that any deployment of biometric AI by public authorities must remain strictly exceptional and meet the high threshold established by EU law, in particular Articles 7 and 8 of the Charter. This requires the guidelines to articulate explicitly how the Article 5 exceptions under the AI Act align with broader legal obligations, including the need for judicial authorisation and democratic oversight under national laws implementing EU frameworks on police cooperation.
The guidelines must also clarify that non-remote biometric identification systems are inherently high-risk and must be regulated in line with Article 9 of the GDPR. RBI used for non-law-enforcement purposes is prohibited under Article 5, and any system capable of operating in real time or near real time must fall within the scope of a full ban.
Liberties’ full submission, including input on other issues related to high-risk systems, is available here.