For July’s edition of Democracy Drinks, Liberties invited Pegah Maham, project director on artificial intelligence at Stiftung Neue Verantwortung, to join Orsolya Reich, senior advocacy officer at Liberties, to discuss the rise and regulation of AI in Europe and beyond. Much of the discussion centered around the AI Act, the draft EU law to regulate AI and protect fundamental rights.
The “self-certification” question
To begin, Pegah discussed how the AI Act is not yet one piece of legislation – rather, the three main bodies of the EU law-making process, the Commission, the Council and the Parliament, have each offered their own version of the law. From a human rights perspective, the Parliament’s draft is the most promising. Among other things, Parliament has included a ban on real-time biometric surveillance systems – think facial recognition technology – in public places, and a ban on the use of AI in predictive policing, which has long been criticized as discriminatory.
Pegah also said that the EU bodies seem to agree on a “self-certification” process for AI that is deemed “high risk” to fundamental rights. In her view, this gives far too much power to companies that have a financial interest in downplaying the risks. Moreover, the role of other stakeholders and civil society in monitoring AI development and use is not well established in the drafts of the law.
Shifting the approach
Compounding our concerns is the fact that all three versions – and thus whatever final legislation we get – share a hugely disappointing feature: the EU has decided to take a risk-based approach to regulating AI, rather than a rights-based approach. AI systems have been categorized into risk groups, and only those which are deemed “high risk” will need to go through the aforementioned certification process and be subject to any degree of oversight.
Orsolya said that although the risk-based approach is disappointing, it is at least a regulatory approach that is supposed to set enforceable and specific duties (even though some stakeholders are currently trying to water them down to the point that they may not be very different from self-regulation). One of the most important features of the EU’s AI legislation is a shift in its regulatory approach. For years, the narrative centered on trustworthy AI and self-regulatory ethical guidelines for tech companies. European legislators now appear to agree, however, that self-regulation is not sufficient, and have shifted the discourse from abstract AI ethics to more concrete human rights considerations.
Like a box of chocolates?
In truth, we don’t know what we’re going to get when it comes to the AI Act. The three legislative bodies of the EU are now sitting down for the trilogues – closed-door final negotiations where the three drafts will be reconciled into a single piece of legislation, which will then be agreed and become law.
Both Pegah and Orsolya remarked on the level of lobbying that tech companies are doing around this law, and how some – we’re looking at you, OpenAI, makers of ChatGPT – are publicly saying one thing (“regulate us!”) while having their lobbyists quietly say something quite different to EU lawmakers (“don’t regulate us!”).
Liberties, Stiftung Neue Verantwortung and other rights groups will be watching closely to see what comes out of the trilogues – and we should know by the end of the year. Stay tuned to Liberties for all the latest news on the EU AI Act and other uses of artificial intelligence in Europe.