On 21 April 2021, the Commission presented a draft for a new regulation on artificial intelligence (AI) – the Artificial Intelligence Act. The proposal contains some good initiatives, such as a prohibition of several harmful AI systems and strict transparency and security obligations for systems considered ‘high-risk’. However, the Commission’s proposal still falls short of protecting citizens’ fundamental rights.
Some prohibited systems, which – as the text itself says – are “contravening Union values”, are not de facto prohibited. Several exceptions allow law enforcement agencies to use mass surveillance tools, such as facial recognition technologies, despite massive mobilization against their use. Other public authorities and private actors are exempt from the prohibition altogether.
These and other points are addressed in this policy paper, which serves to feed into the Commission’s public consultation on the AI Regulation proposal.
To protect the fundamental rights of EU citizens, including the right to privacy, the right to freedom of expression, the right to assembly and the right to non-discrimination, Liberties recommends the following:
1. Prohibit biometric mass surveillance systems and other AI systems listed in Article 5 without exceptions.
Under no circumstances should law enforcement be able to use facial recognition technologies and other tools of mass surveillance. The prohibition should be extended to include ‘post’ remote biometric identification systems and apply to all public authorities and private actors. The scope of the prohibition of social scoring systems must be extended so that it also applies to private actors.
2. All high-risk systems should be subject to a third-party conformity assessment.
The proposal suggests that providers of high-risk AI systems listed in Annex III should conduct a self-assessment. Liberties considers that this is not enough. These systems threaten to undermine a number of fundamental rights, and delegating the risk assessment to profit-oriented businesses is unacceptable. Liberties recommends that all high-risk systems be subject to a mandatory third-party conformity assessment by an independent oversight body.
3. Prohibit predictive policing practices.
Evidence has shown that predictive policing technologies systematically discriminate against minority groups, perpetuate biases, and are ineffective and inaccurate. Liberties recommends an outright prohibition of predictive policing systems.
4. Prohibit, with certain exceptions, emotion recognition technology, biometric categorization systems and systems used to manipulate content.
- Liberties recommends prohibiting emotion recognition technologies used for important decisions that directly affect a person’s life chances and access to opportunities.
- Biometric categorization systems that group people according to their gender, ethnic origin, or sexual or political orientation should be banned outright.
- AI systems that generate or manipulate image, audio or video content, such as deep fakes, cause substantial harm to individuals’ lives and democratic processes. Liberties recommends moving them to the high-risk category.
5. Extra scrutiny for public authorities.
Decisions made by public authorities can have a significant impact on our lives, and unlike in the private sector, people do not have the choice to opt out of using public services. Thus, the public sector requires higher levels of transparency and accountability. Liberties recommends that all AI systems used by public authorities, regardless of their risk level, be included in the EU database.
6. Stronger enforcement and more opportunities for remedies.
To ensure proper enforcement of the regulation, Liberties recommends giving more autonomy to the EU Artificial Intelligence Board and designating national data protection authorities (DPAs) as the national competent authorities. This would require allocating more financial and human resources to national DPAs. In addition, the proposal should provide more clarity on the possibilities of collective redress for persons adversely affected by AI systems.