Tech & Rights

AI Regulation: Present Situation And Future Possibilities

Governments and companies use artificial intelligence to make decisions that can have a significant impact on our lives. AI must be regulated to protect us and to let us use technology free from manipulation or bias. Here's how it should be done.

by Jascha Galaski

Why is AI an emerging issue in the world?

The artificial intelligence (AI) industry is growing at an incredible speed. Nations around the world are competing to win the “AI race”. Russian President Vladimir Putin believes that the nation that comes out on top will be “the ruler of the world”. Companies are investing billions of dollars to secure the largest market share. Simulations show that by 2030 about 70 percent of companies will have adopted some sort of AI technology. The reason is simple. Whether modelling climate change, selecting job candidates or predicting whether someone will commit a crime, AI can replace humans and make decisions faster and more cheaply.

Yet AI systems threaten our fundamental rights. For example, algorithms that moderate content on social media platforms can unfairly restrict free speech and influence public debate. Biometric mass surveillance technologies violate our right to privacy and discourage democratic participation. Algorithms rely on massive sets of personal data, the collection, processing and storage of which frequently violate our data protection rights. Algorithmic bias can perpetuate existing structures of inequality in our societies and lead to discrimination against and alienation of minorities. Hiring algorithms exemplify this: they are likely to prefer men over women and white people over black people, because the data they are fed tells them that ‘successful candidates’ are most often white men.
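
To see how this happens, consider a minimal, hypothetical sketch in Python. The dataset, features and numbers below are invented for illustration; real hiring systems are far more complex, but the mechanism is the same: a model trained on biased past decisions reproduces that bias.

```python
# Hypothetical illustration: a model trained on biased historical hiring
# decisions learns to favour male candidates. All data here is invented.
from sklearn.linear_model import LogisticRegression

# Each record: [years_of_experience, is_male]. In this invented history,
# men were almost always hired and equally qualified women were not.
X = [[5, 1], [6, 1], [4, 1], [7, 1], [5, 0], [6, 0], [4, 0], [7, 0]]
y = [1, 1, 1, 1, 0, 0, 0, 1]  # 1 = hired, 0 = rejected

model = LogisticRegression().fit(X, y)

# Two candidates with identical experience who differ only in gender:
print(model.predict_proba([[6, 1]])[0][1])  # male candidate: higher score
print(model.predict_proba([[6, 0]])[0][1])  # female candidate: lower score
```

The model has no notion of fairness; because gender predicted the outcome in its training data, it scores two otherwise identical candidates differently.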


These challenges are exacerbated by the fact that AI is so complex. We still do not have a good understanding of the possible risks AI systems can pose to our societies. Jenna Burrell, a researcher at the University of California, distinguishes between three types of opacity in AI systems: those that are intentionally kept opaque, because businesses or states want to keep secrets; those that result from technical illiteracy, because the systems are too complicated for the general public to understand; and those that arise from the complex characteristics of machine learning algorithms themselves, which even their programmers do not really grasp.

To protect us from these threats, AI must be regulated. Legislators across the globe have so far failed to design laws that specifically regulate the use of AI. This allows profit-oriented companies to develop systems that may cause harm to individuals. Some of these systems already exist and are in use, but because authorities lack transparency, we often just don't know about them. Police forces across the EU deploy facial recognition technologies and predictive policing systems. As we explain in another article, these systems are inevitably biased and thus perpetuate discrimination and inequality.

In this article we will discuss why we need AI regulation, what sort of AI regulation already exists, what an AI regulation should contain and what the future of AI regulation depends on.

Why is AI regulation necessary?

We need to regulate AI for two reasons. First, because governments and companies use AI to make decisions that can have a significant impact on our lives. For example, algorithms that calculate school performance can have a devastating effect. In the UK, the Secretary of State for Education used an algorithm to determine the final exam grades of students across the country. The result: almost 40 percent of students received lower grades than those previously issued by their teachers. The algorithm was not only inaccurate, it also favoured students at private schools over those at public ones. AI has also shown its limitations in the private sector. In one case, a credit card introduced by tech giant Apple offered lower credit limits to women than to men. AI systems that calculate the likelihood of recidivism and determine the length of defendants' prison sentences can also significantly alter a person's life. Without proper rules, such systems are more likely to be inaccurate and biased, as companies have less incentive to invest in safety measures and to assure the quality and unbiased nature of their data.
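
The UK case shows how a seemingly neutral rule can systematically disadvantage one group. The sketch below is a simplified, hypothetical reconstruction of the mechanism: individual grades were capped by the school's historical results, while small cohorts, which are more common at private schools, kept their teacher-assessed grades. The function, numbers and threshold are invented for illustration.

```python
# Simplified, hypothetical reconstruction of a school-level moderation
# rule like the one used in the UK. All values are invented.

def moderated_grade(teacher_grade: int, school_historical_best: int,
                    cohort_size: int, small_cohort_cutoff: int = 15) -> int:
    """Cap a student's grade at the school's historical best, unless the
    cohort is small enough to keep teacher-assessed grades."""
    if cohort_size < small_cohort_cutoff:
        return teacher_grade  # small classes keep their teacher grades
    return min(teacher_grade, school_historical_best)

# A top student at a large state school is pulled down to the school's
# past results, while an identical student in a small class is not.
print(moderated_grade(teacher_grade=9, school_historical_best=6, cohort_size=150))  # -> 6
print(moderated_grade(teacher_grade=9, school_historical_best=6, cohort_size=8))    # -> 9
```

No individual intends the unfairness; it falls out of a rule that judges students by their school's past rather than their own work.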

Second, because whenever someone makes a decision that affects us, they have to be accountable to us. Human rights law sets out minimum standards of treatment that everyone can expect. It gives everyone the right to a remedy where those standards are not met and you suffer harm. Governments are supposed to make sure that those standards are upheld and that anyone who breaks them is held accountable, usually through administrative, civil or criminal law. That means everyone, including corporations and governments, has to follow certain rules when making decisions. When people don't follow agreed standards and this ends up harming someone, the perpetrator has to answer for it. But there are already signs that the companies behind AI may escape responsibility for the problems they cause. For example, when an Uber self-driving car killed a pedestrian in 2018, it was at first not clear who was responsible. Was it the car manufacturer, Uber or the person in the car? Although investigators found that the car had safety issues (it did not consider jaywalking pedestrians), Uber was found “not criminally liable”. Instead, it was the person behind the wheel who was charged with negligent homicide, because she was streaming an episode of a television show at the time of the crash.

What do we know about regional and national regulations at present?

As previously mentioned, there is currently no legislation specifically designed to regulate the use of AI. Rather, AI systems are regulated by other existing regulations. These include data protection, consumer protection and market competition laws. Bills have also been passed to regulate certain specific AI systems. In New York, companies may soon have to disclose when they use algorithms to choose their employees. Several cities in the US have already banned the use of facial recognition technologies. In the EU, the planned Digital Services Act will have a significant impact on online platforms’ use of algorithms that rank and moderate online content, predict our personal preferences and ultimately decide what we read and watch – also called content-moderation algorithms.

National and local governments have been adopting strategies and working on new laws for a number of years, but no comprehensive AI legislation has been passed yet. China, for example, developed a strategy in 2017 to become the world's leader in AI by 2030. In the US, the White House issued ten principles for the regulation of AI. They include the promotion of “reliable, robust and trustworthy AI applications”, public participation and scientific integrity. International bodies that advise governments, such as the OECD or the World Economic Forum, have developed ethical guidelines. The Council of Europe created a committee dedicated to helping develop a legal framework on AI.

However, the most ambitious proposal yet comes from the EU. On 21 April 2021, the EU Commission put forward a proposal for a new AI Act. The draft suggests making it illegal to use AI for certain purposes considered “unacceptable”. These include facial recognition technologies, AI systems used for social scoring, which rank people based on their trustworthiness, and systems that manipulate people or exploit the vulnerabilities of specific groups – for example a toy that uses voice assistance to manipulate children into doing something dangerous. The proposal takes a risk-based approach: the bigger the risk a certain use of AI creates for our freedoms, the more obligations the authority or company faces to be transparent about how the algorithm works and to report to regulators on how it is used. While this sounds like the European Commission is serious about regulating harmful AI systems, the proposal in reality puts business ahead of fundamental rights. The Commission likes to claim that it has prohibited facial recognition technology, but the proposal offers loopholes that allow corporations and authorities to use it. Further, the transparency obligations for high-risk systems have a significant flaw: the job of checking whether an AI system is risky is left to the very businesses that create the systems. As profit-oriented businesses have an interest in seeing their products reach the market, they are likely to downplay the risks.
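
To make the risk-based approach concrete, here is a rough sketch in Python of how a tiered scheme maps uses of AI to obligations. The tier assignments and obligation lists are simplified illustrations based on the draft's general structure, not the legal text.

```python
# Rough illustration of a risk-tiered scheme: higher tiers carry heavier
# obligations. Tiers and obligations are simplified, not the legal text.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring", "manipulative toys"],
        "obligations": ["prohibited outright"],
    },
    "high": {
        "examples": ["hiring algorithms", "exam scoring", "credit scoring"],
        "obligations": ["risk management", "quality training data",
                        "transparency to regulators", "human oversight"],
    },
    "limited": {
        "examples": ["chatbots"],
        "obligations": ["disclose that the user is interacting with an AI"],
    },
    "minimal": {
        "examples": ["spam filters"],
        "obligations": [],
    },
}

def obligations_for(use_case: str) -> list[str]:
    """Return the obligations attached to a given use of AI, if listed."""
    for tier in RISK_TIERS.values():
        if use_case in tier["examples"]:
            return tier["obligations"]
    return []  # unlisted uses carry no extra obligations

print(obligations_for("hiring algorithms"))
```

The flaw described above is visible even in this toy scheme: if the provider itself decides which tier its product falls into, nothing stops it from choosing a lighter one.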

What should AI regulation contain?

An effective, rights-protecting AI regulation must, at a minimum, contain the following safeguards. First, it must prohibit technologies that violate our fundamental rights, such as biometric mass surveillance or predictive policing systems. The prohibition should not contain exceptions that allow corporations or public authorities to use them “under certain conditions”.

Second, there must be clear rules setting out exactly what companies have to make public about their products. Companies must provide a detailed description of the AI system itself. This includes information on the data it uses, the development process, the system's purpose, and where and by whom it is used. It is also key that individuals exposed to AI are informed about it, for example in the case of hiring algorithms. Systems that can have a significant impact on people's lives should face extra scrutiny and feature in a publicly accessible database. This would make it easier for researchers and journalists to make sure companies and governments are protecting our freedoms properly.
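
As a thought experiment, such a public register entry could be machine-readable. The sketch below shows what a minimal record might look like; the schema, field names and example values are entirely our own invention, not an existing standard.

```python
# Hypothetical sketch of a machine-readable entry in a public AI register.
# The schema and all values are invented for illustration (Python 3.9+).
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str                    # the system being registered
    operator: str                # who deploys it
    purpose: str                 # what decisions it informs
    training_data: str           # description of the data it uses
    affected_groups: list[str]   # who is exposed to its decisions
    risk_level: str              # e.g. "high" for hiring or credit scoring
    contact_for_redress: str     # where affected people can complain

record = AISystemRecord(
    name="CVScreen (hypothetical)",
    operator="ExampleCorp",
    purpose="Ranking job applications before human review",
    training_data="Ten years of the company's historical hiring decisions",
    affected_groups=["job applicants"],
    risk_level="high",
    contact_for_redress="complaints@example.org",
)
print(record)
```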

Third, individuals and organisations that protect consumers need to be able to hold governments and corporations responsible when there are problems. Existing rules on accountability must be adapted to recognise that decisions are made by an algorithm and not by the user. This could mean obliging the company that developed the algorithm to check the data its algorithms are trained on and the decisions they make, so that problems can be corrected.

Fourth, new regulations must make sure that there is a regulator to check that companies and the authorities are following the rules properly. This watchdog should be independent and have the resources and powers it needs to do its job.

Finally, an AI regulation should also contain safeguards to protect the most vulnerable. It should set up a system that allows people who have been harmed by AI systems to make a complaint and get compensation. And workers should have the right to take action against invasive AI systems used by their employer without fear of retaliation.

What does the future of AI regulation depend on?

When the EU creates rules on AI, it will probably end up setting the standard for the rest of the world because of all the companies that work in and are based in the EU. The EU has a big responsibility to get it right, because these rules will affect how AI systems are used in less democratic parts of the world. For example, algorithms that claim to predict a person’s sexual orientation may lead to people dying in countries where being gay is still legally punishable by death.

It now comes down to policymakers and EU leaders to develop rules that will enhance our quality of life and promote equality. EU negotiators may be tempted to embrace AI because they think it can deliver savings or because it will stimulate the economy. But taking shortcuts in public services or using AI where it has no social benefit will end up damaging our way of life and the freedoms we value. The question the EU needs to be asking itself is how our societies can use AI to bring our rights and freedoms to life.


