Facial recognition systems are biometric technologies that capture a person’s facial features in order to verify their identity, or to locate them in a group, a place, or a database. The technology has changed how people can be identified, making the process far more efficient. Applications such as Snapchat use it to apply face filters, it can unlock your phone, and some airports are now testing facial recognition to identify passengers before boarding. But it can also be exploited for surveillance: whether to track people in the streets, monitor protests, or support business operations, the use of this technology is increasing.
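Under the hood, most modern systems reduce each face image to a numeric "embedding" vector and then compare vectors. The sketch below illustrates the two modes mentioned above, 1:1 verification ("is this the claimed person?") and 1:N identification ("who in the database is this?"); the vectors, names and threshold are toy placeholders standing in for a real face-embedding model, not any particular product's implementation.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(probe, enrolled, threshold=0.8):
    """1:1 verification: does the probe image match one enrolled template?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe, database, threshold=0.8):
    """1:N identification: return the best-scoring identity above the threshold, or None."""
    best_name, best_score = None, threshold
    for name, template in database.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name
```

The 1:N mode is what makes mass surveillance possible: every camera frame can be searched against an entire database at once, with no action required from the people being scanned.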
As a result, facial recognition technology has sparked many debates and controversies. In the draft Artificial Intelligence Act, published in April 2021, the European Commission proposed to restrict, to some extent, the public use of facial recognition technology, and in October 2021 the European Parliament called for a ban on the technology. The European Data Protection Supervisor and the European Data Protection Board released a Joint Opinion on the AI Act in 2021 in which they called for a ban on remote biometric surveillance in public places. Human rights organisations have challenged the use of this technology in public spaces and regularly warn of its dangers; many are campaigning to ban it.
Why is there so much concern? What can we do about it? Let’s take an in-depth look at the main concerns.
What are the biggest privacy concerns stemming from facial recognition technology?
1. Improper data storage
Facial images are extremely easy to collect because you can be filmed by cameras in public places. The core problem is that no security system is airtight. Imagine a database containing your photo and your address. If it gets hacked and a malicious person gains access, they could use it for identity theft, robbery or harassment. For instance, stalkers could run reverse image searches on pictures to gather personal information about an individual, such as their address. They could also use your image to impersonate you online and scam people, leaving the police to believe you are the person responsible.
Biometric databases are widely used by police forces and governments, as well as by popular websites and applications that store data on our faces or other biometric features such as fingerprints. After Clearview AI, a notorious facial identification business that sells access to its database to, among other institutions, US law enforcement, suffered a data breach, US Senator Edward Markey stated: “If your password gets breached, you can change your password. If your credit card number gets breached, you can cancel your card. But you can’t change biometric information like your facial characteristics.”
2. Misuse of data
Studies have shown that this technology, like any other, is not immune to bias: it recognises white men more reliably than women or people from other ethnic groups. A 2018 study found error rates of up to 35% when identifying women of color, compared to 1% for white males. This bias has been reduced through further training over time, but it persists and is very difficult to eradicate completely.
These false positives can turn into false arrests. Nijeer Parks, for example, was arrested in 2019 in the USA for allegedly shoplifting and trying to hit an officer with a car. He had been misidentified and wrongfully arrested based on facial recognition. The case was eventually dropped, but it took a year, including 10 days in jail and $5,000 spent on legal defense. In addition, the use of biometric information to classify people on certain criteria could pave the way for profiling, resulting in discrimination and wrongful convictions of certain groups because of conscious or unconscious biases in the justice system.
Finally, certain regimes use this technology to target minority groups, as was the case with the Chinese government, which used Huawei’s facial recognition software to identify Uighurs. According to an internal test report, the system was allegedly capable of identifying each person’s age, sex and ethnicity in a crowd, and then triggering an alarm reporting any member of the Uighur community to the police. Huawei allegedly provided the servers, cameras and other tools.
3. Infringement on individual privacy
The way facial recognition technology is used is inherently invasive and intrusive. As practiced today, data collection happens without the consent or even the knowledge of the individuals concerned: you can be filmed simply walking down the street.
Being recorded and monitored without one's consent, or even knowledge, is a clear infringement of individual privacy and freedom. It takes away the reassurance of being able to move and act freely without the fear of being constantly watched. Being watched changes the way we behave and affects our well-being: the feeling of constant surveillance can increase stress and erode trust between people and their government. If people fear their daily interactions and conversations are being monitored, they might avoid criticizing the government for fear of consequences for themselves or their loved ones.
Surveillance of people on the basis that they might do something illegal in the future is also an infringement of the presumption of innocence. Law enforcement could treat someone differently on the basis of a supposed future culpability, which cannot be proven.
4. Infringement on freedom of speech and association
Facial recognition technology is of concern because of its potential to become a biometric mass surveillance tool. Surveillance, especially in the case of demonstrations, muzzles freedom of expression and chills activities such as political activism. These tools are used to monitor the population and, in countries where criticizing the government is not tolerated, to arrest those who oppose it. In the USA, the NGO Electronic Frontier Foundation revealed that the San Francisco Police Department gained live access to over 400 cameras to spy on protestors during the 2020 protests. When such technologies are in place, they can be, and often are, misused.
Similarly, the use of facial recognition technology erodes the right to anonymity. People expect some degree of anonymity, even in public, and do not expect to have their face linked to facts, actions or data about them available online. This loss of anonymity is all the more dangerous when technology makes it possible to recognise who people associate with, and why.
In a VICE article, Joshua Franco, senior research advisor and the deputy director of Amnesty Tech at Amnesty International stated: “The fear and uncertainty generated by surveillance inhibit activity more than any action by the police, [...] if you feel you’re being watched, you self-police, and this pushes people out of the public space.”
5. Lack of transparency
Transparency regarding data collection, management and deletion is still too low in practice in the EU; sanctions are regularly imposed on companies and institutions for failing to fulfill their information obligations. The GDPR requires that information be provided to data subjects in a concise, transparent, intelligible and easily accessible form (Art. 12, 13, 14). In many cases involving facial recognition, however, it is impossible to identify the data collectors and processors because you may not even know you are being recorded. And even when they are identified, the amount of data collected and the purposes for which it will be used may remain unknown. Data subjects cannot retrieve, correct, control or delete data they do not know exists.
Transparency is important because it allows for better control of how your data is processed: you can make sure that companies, for instance, follow the law and respect your individual rights by verifying that information obligations are met, or that you can access your personal data. If that is not the case, you can file a complaint with the supervisory authority. Without transparency, companies can do what they want with your personal data without your knowledge, and people do not know who holds their biometric data or for what purpose. People could be subjected to targeted ads or profiling, or their data could be sold to third parties. As an example, the above-mentioned Clearview AI harvested pictures of people from the internet without their consent and used them to build its database, which was then used by private companies. After two years of legal dispute, Clearview AI reached a settlement with the American Civil Liberties Union to permanently halt sales of its biometric database to private companies and individuals in the United States.
6. It might become normalized
If facial recognition technology continues to expand without limits, people could become accustomed to it and it could become the norm. The risk is that such measures will stay in place and be used more frequently as time passes. If surveillance is institutionalized, civic space will shrink further, for all the reasons cited above.
7. New advances could make it accessible to everyone
In 2011, A. Acquisti and his team conducted multiple experiments to show the potential dangers of facial recognition technology in the future. In the first, they used the technology to match public Facebook profile pictures with dating-site profile pictures, and were able to identify many people even when those people used only a first name or a pseudonym on their dating profile. In another experiment, they took webcam pictures of students and matched them with Facebook profiles, identifying about one third of them. This foreshadows a future in which anyone with a smartphone may be able to recognize anyone else; a stalker would need nothing more than a photo to find your personal information.
As the last century has shown, technological progress is exponential. Identifying a stranger in the street could become the norm: in 2021, police in the UK developed a mobile app to identify wanted individuals in real time through facial recognition. On YouTube, there are videos showing how to build your own facial recognition system and use it to identify people in public.
What solutions are available?
Firstly, the collection, storage and sharing of biometric data should be entirely transparent: people should be able to access and control their data and to know who the data processor is. People should be able to give clear and, above all, informed consent before their biometric data is included in a database and before anyone accesses that data.
Lack of regulation is one of the main problems here. Although more legislation is under development, the current legal framework is insufficient, and uneven across countries, to achieve effective control of this technology. Many countries have no legislation in this area, and where laws exist they are often incomplete. Europe has a relatively strict regulation, the GDPR, but it suffers from insufficient enforcement. In addition, the GDPR does not apply to data processing by law enforcement.
The general public is increasingly aware of data protection issues. However, the concerns around facial recognition technology remain little known because its collection and use are “invisible” in our everyday lives. Users need to be more alert to and educated about this matter in order to achieve societal change.
This general awareness should extend to companies, governments and any other institutions using facial recognition technology. Each should have dedicated security professionals, dedicated security protocols and employee training. Organizations that host publicly available records should take proactive measures to prevent misuse, such as restricting access to sensitive databases.
Photo credit: Christopher Burns/Unsplash