Say cheese and show those pearly whites – you’re on camera.
Berliners are in for an Orwellian surprise at the Südkreuz train station, where the government has been testing a new facial recognition surveillance system. It’s the latest instance of the technology’s use in Europe, and it’s already widely used around the world.
But despite its widespread use, facial recognition technology is operating in a legal gray area. As legal challenges arise against the use of mass surveillance, courts are increasingly siding with people’s rights and limiting the use of privacy-invading policing tactics like facial recognition surveillance.
The latest such decision happened just this month, when the European Court of Human Rights ruled that a mass surveillance system in the UK violates fundamental rights.
It’s not new
Facial recognition surveillance uses high-definition cameras to capture the faces of passers-by in public areas, then takes these images and compares them against images in law enforcement databases – mug shots, for example. It then adds the new images into the database, whether or not the person matched any existing image.
Facial recognition actually dates back to the 1960s. One of the earliest systems, developed by Bell Labs in the United States, relied on features like ear protrusion and nose length to recognize faces using pattern-matching techniques. The first automated system was rolled out in 1973, but it was remarkably unreliable: all a person had to do was put on a pair of glasses, and the success rate fell from 75 percent to less than 3 percent.
Newer systems are far more accurate, but how they function is increasingly opaque, as both governments and companies develop their own proprietary systems. Yet even with a successful-match rate above 97 percent, which Facebook claims to have achieved, a system used to surveil busy public areas would still produce an extraordinarily high number of false positives each day – think thousands, not hundreds.
And remember, whenever there is a match, the police have to spend time and resources following up to see if, this time, it’s not actually a false positive (spoiler alert: it almost certainly is). The police are bringing this on themselves, but we should all care: every minute they devote to tracking down false positives makes each and every one of us less safe.
How well do current systems work?
The use of facial recognition surveillance is becoming commonplace in Europe. It was used during the 2017 Champions League final in Wales, where it produced 2,297 false positives – the system thought nearly 2,300 people were terrorist suspects, but they were instead just obnoxiously drunk football fans. Terrifying, sure, but not terrorists.
Also in the UK, police in South Wales were using facial recognition surveillance between May 2017 and March 2018. How’d it go? The system flagged 2,685 people as suspects, but 2,451 of those matches were false positives – more than 91 percent of all alerts. That’s a stunning failure rate.
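To see just how stunning, here is a minimal sketch of the arithmetic, using only the South Wales figures cited above (the variable names are illustrative):

```python
# Base-rate arithmetic: what share of the system's "matches" were wrong?
# Figures are the South Wales trial numbers cited in the article.
total_matches = 2685      # people the system flagged as suspects
false_positives = 2451    # flags that turned out to be wrong

false_match_share = false_positives / total_matches
print(f"{false_match_share:.1%} of all matches were false positives")
# prints "91.3% of all matches were false positives"
```

In other words, when the system raised an alert, it pointed at the wrong person more than nine times out of ten.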
But back to Berlin, where the system has been in active use this year. During one trial at Südkreuz station, 300 volunteers submitted “wanted photos” and agreed to walk through the station wearing transponders. The system entirely missed several of the volunteers, and produced a false-positive rate of 0.3 percent. Sounds low? It’s not.
As Kerstin Demuth, a spokesperson for the data-protection group Digitalcourage, told DW, “Personally, I think it’s a catastrophic result.” And she’s right. Imagine a large transport hub that sees 100,000 people pass through it every day. That 0.3 percent rate translates into 300 false positives each day.
And it gets worse. Inasmuch as the technology is used as a complement to current police practices – *cough* racial profiling *cough* – it’s particularly troubling that facial recognition performs even worse on people with darker skin. But despite its failures, facial recognition surveillance is being rolled out by law enforcement agencies across the world, and it’s only a matter of time before we’re all in databases without our knowledge. In fact, a 2016 study by Georgetown Law School in the United States found that up to half of all American adults are already in facial recognition databases.
Why it matters to you
So why should you care? Why shouldn’t you care? Privacy is a fundamental right of every single person, and no one should ever have to sacrifice it without just cause or their consent. But facial recognition surveillance runs a freight train over our privacy, and we’re never even told it’s happening. And it’s probably not even lawful.
Security is important. It’s one of the reasons we formed communities in the first place. But general, suspicionless surveillance doesn’t make us safer. In fact, it makes us less safe. All those false positives force law enforcement bodies to divert valuable time and resources towards hunting down drunk Real Madrid fans in the side streets of Cardiff. And there are plenty of better alternatives to mass surveillance.
If you oppose the unchecked use of facial recognition, you’re not alone. Protests against it have already happened in many European cities – 30,000 people recently demonstrated against it in Munich – because people care about their privacy, especially when it’s compromised by such a shoddy system. Big Brother isn’t knocking at the door – he’s in the house and making himself comfortable.
To join others who stand against mass surveillance and the erosion of our privacy, share our video and ask your friends to do the same.