
Recommendations On The Commission’s AI Regulation Proposal

The European Commission’s draft for a brand new regulation on artificial intelligence contains some good initiatives, but it still falls short of protecting citizens’ fundamental rights. Here's what still needs to change.

by Jascha Galaski

On 21 April 2021, the European Commission presented a draft for a new regulation on artificial intelligence (AI) – the Artificial Intelligence Act. The proposal contains some good initiatives, such as a prohibition of several harmful AI systems and strict transparency and security obligations for systems considered ‘high-risk’. However, the Commission’s proposal still falls short of protecting citizens’ fundamental rights.

Some prohibited systems – which, as the text itself says, are “contravening Union values” – are not de facto prohibited. Several exceptions allow law enforcement agencies to use mass surveillance tools, such as facial recognition technologies, despite massive mobilization against their use. Other public authorities and private actors are exempt from the prohibition entirely.

Other harmful technologies, such as systems used for predictive policing, biometric categorization, emotion recognition, or systems that generate or manipulate content, are severely under-regulated. Businesses with high stakes in seeing their products make it to market are allowed to self-regulate. Not enough thought is given to enforcement, and the proposal fails to address power imbalances between providers of AI systems and consumers.

These and other points are addressed in this policy paper, which feeds into the Commission’s public consultation on the AI Regulation proposal.

To protect the fundamental rights of EU citizens, including the right to privacy, the right to freedom of expression, the right to freedom of assembly and the right to non-discrimination, Liberties recommends the following:

1. Prohibit, without exceptions, biometric mass surveillance systems and the other AI systems listed in Article 5.

Under no circumstances should law enforcement be able to use facial recognition technologies and other tools of mass surveillance. The prohibition should be extended to include ‘post’ remote biometric identification systems and apply to all public authorities and private actors. The scope of the prohibition of social scoring systems must be extended so that it also applies to private actors.

2. All high-risk systems should be subject to a third-party conformity assessment.

The proposal suggests that providers of high-risk AI systems listed in Annex III should conduct a self-assessment. Liberties considers this insufficient. These systems threaten to undermine a number of fundamental rights, and delegating risk assessment to profit-oriented businesses is unacceptable. Liberties recommends that all high-risk systems be subject to a mandatory third-party conformity assessment by an independent oversight body.

3. Prohibit predictive policing practices.

Evidence has shown that predictive policing technologies systematically discriminate against minority groups, perpetuate biases, and are both ineffective and inaccurate. Liberties recommends an outright prohibition of predictive policing systems.

4. Prohibit, with certain exceptions, emotion recognition technology, biometric categorization systems and systems used to manipulate content.

- Liberties recommends prohibiting emotion recognition technologies used for important decisions that directly affect a person’s life chances and access to opportunities.

- Biometric categorization systems that group people according to their gender, ethnic origin, sexual or political orientation should be banned outright.

- AI systems that generate or manipulate image, audio or video content, such as deep fakes, cause substantial harm to individuals’ lives and democratic processes. Liberties recommends moving them into the high-risk category.

5. Extra scrutiny for public authorities.

Decisions made by public authorities can have a significant impact on our lives, and, unlike with the private sector, people do not have the choice to opt out of using public services. The public sector therefore requires higher levels of transparency and accountability. Liberties recommends that all AI systems used by public authorities, regardless of their risk level, be included in the EU database.

6. Stronger enforcement and more opportunities for remedies.

To ensure proper enforcement of the regulation, Liberties recommends giving more autonomy to the European Artificial Intelligence Board and designating national data protection authorities (DPAs) as the national competent authorities. This would require allocating more financial and human resources to national DPAs. In addition, the proposal should provide more clarity on the possibilities of collective redress for persons adversely affected by AI systems.
