Techgarage

Should we ban public face recognition?

by Kevin Kyburz 02/12/2019

San Francisco’s Board of Supervisors has proposed a regulation banning the governmental use of face recognition. Techgarage has gathered together the arguments for and against facial recognition in public spaces and has arrived at an unambiguous answer to the question of whether we should ban the new technology.

Where face scanning is already happening

There are various uses for face recognition technology. For instance, Apple’s latest iPhone uses it to unlock the phone and authorize payments.

Face recognition is also used in public spaces to track down criminals. In Moscow, thousands of cameras are already being employed to match the faces of passersby with individuals wanted by the police. In the UK as well, the police compare their own watch lists with photos taken by face recognition cameras.

Zurich Airport is likewise using cameras to identify faces, though only for passport control purposes. In Germany, the Hamburg police employed automated face recognition along with other means to search for individuals captured on video at the riots during the G20 Summit. Both Hamburg’s Data Protection Commissioner and the German Federal Commissioner for Data Protection consider “biometric evaluation of video material” unlawful since no legal basis exists for it yet.

Face recognition can also be used for analysis. Chinese authorities have reportedly been experimenting with a combination of artificial intelligence and face recognition cameras in public spaces to predict whether an individual will follow the rules. China is also believed to have used face recognition to track down political activists.

Face recognition: the pros

An argument for the use of face recognition technology in public is that it serves the citizenry by facilitating the work of law enforcement agencies. The argument is especially persuasive in times of terrorism and organized crime. In addition, according to Australian agencies promoting the new technology, it has the potential to improve road safety by helping to resolve traffic offenses.

A recent article in the technology magazine MIT Technology Review cites additional arguments in favor of facial recognition. One recurring argument is that the technology works more reliably than human beings: airport security staff get tired, while a computer does not. Another claimed advantage is impartiality: with the aid of artificial intelligence, a face recognition system would be better at detecting suspicious behavior than police officers.

Face recognition: the cons

The NGO Human Rights Law Centre warns that studies show facial recognition technology increases the risk of discrimination against ethnic groups. The studies found that facial recognition systems performed best on whichever ethnic group was dominant in the region where the particular system was developed, while error rates for ethnic minorities were disproportionately high. Because the systems verify the identities of these minorities less reliably, their members would be stopped and monitored more frequently by the police. Inaccurate matching could likewise shut some groups out of employment, and the same holds for access to credit, insurance, and other benefits.

In 2018, the Gender Shades study, conducted by the MIT Media Lab under researcher Joy Buolamwini, found that IBM’s facial recognition system had an error rate for dark-skinned women that was 34.4 percentage points higher than for fair-skinned men. Microsoft’s system performed best with a gap of 20.8 percentage points, while the facial recognition technology Face++ showed a gap of 33.7 percentage points. In addition, the ACLU (American Civil Liberties Union) concluded that the system developed by Amazon misidentifies people of color more often than white people.

The problem, according to MIT Technology Review, is that facial recognition and analysis systems are not trained on representative datasets. The training material contains far fewer photos of women and dark-skinned individuals than of men and fair-skinned individuals. Such imbalances in face recognition and analysis systems reinforce existing societal inequities and, with prolonged use, could even exacerbate them.
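
To make concrete how studies such as Gender Shades quantify these disparities, here is a minimal sketch of a per-group error-rate audit. The data, labels, and group names are invented for illustration; this is not any vendor’s actual evaluation code.

```python
# Minimal sketch of a per-group error-rate audit, in the spirit of the
# Gender Shades methodology. All data below is invented for illustration.
from collections import defaultdict

# Each record: (predicted_label, true_label, demographic_group)
predictions = [
    ("male",   "female", "darker_female"),
    ("female", "female", "lighter_female"),
    ("male",   "male",   "lighter_male"),
    ("female", "male",   "darker_male"),
    # ... a real audit would use thousands of labeled examples per group
]

errors, totals = defaultdict(int), defaultdict(int)
for predicted, actual, group in predictions:
    totals[group] += 1
    if predicted != actual:
        errors[group] += 1

rates = {g: errors[g] / totals[g] for g in totals}
for group in sorted(rates):
    print(f"{group}: error rate {rates[group]:.1%} ({errors[group]}/{totals[group]})")

# The headline figure in such studies is the gap, in percentage points,
# between the best- and worst-served groups.
gap = max(rates.values()) - min(rates.values())
print(f"largest gap between groups: {gap:.1%}")
```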

Meanwhile, there have indeed been improvements to the systems in question, as an investigation by Buolamwini’s team demonstrated this year. However, two newly added systems, one by Amazon and one by Kairos, performed just as poorly as the other three systems had in 2018. Presumably, Amazon and Kairos will use the critical research results to revise their systems and rid them of discrimination.

But even the fairest, most accurate face recognition software could be used to infringe upon civil liberties, as MIT Technology Review puts it. One example is Amazon’s attempt to sell its face recognition technology to the US security agency ICE (Immigration and Customs Enforcement) for tracking down undocumented migrants in public places, as reported by the news site The Daily Beast. The problem? ICE has been known to monitor medical facilities in order to apprehend undocumented immigrants. If such cameras were permanently running in front of these facilities, with the images fed into facial recognition systems, whether Amazon’s or another provider’s, the affected individuals would effectively be cut off from medical treatment in those locations.

In another example of potential infringement upon civil liberties through face recognition technology, MIT Technology Review cited an investigation by the media organization The Intercept. According to The Intercept, IBM had access to footage from the New York City Police Department’s surveillance cameras and used it to develop technology that could determine ethnicity from facial images. IBM’s technology was then tested on public surveillance cameras without New York City residents being informed of the test.

Cynthia Wong, a representative of the human rights organization Human Rights Watch, argues that police surveillance of public spaces jeopardizes freedom of expression and assembly. Imagine the police feeding photos of a demonstration into the system to obtain information on the participants. That sort of thing would act as a deterrent to individuals wishing to express their opinion through public protest.

Human Rights Watch further reported last year that in China, software-aided predictions based on data from surveillance cameras as well as medical and banking records have already led to the arrest of suspects.

In its 2018 annual report, the AI Now Institute at New York University warned about affect detection. This subcategory of facial recognition can supposedly discern an individual’s personality, feelings, mental health, and the degree to which he or she contributes in the workplace. But with insufficient research to support this branch of face recognition technology, applying it to insurance, employment, training, or police work still carries too high a risk.

Conclusion

Using artificial intelligence in conjunction with facial recognition to make behavioral predictions is highly problematic. People would feel uneasy about how to act in public, and trying to be inconspicuous could itself raise suspicion. According to Cynthia Wong of Human Rights Watch, the state must prevent the technology from being used in a way that treats every citizen as a potential criminal.

Google itself has stopped selling its face recognition technology until the company can be certain that it cannot be abused. As Joy Buolamwini states in MIT Technology Review, safe technology requires “algorithmic justice.” Without it, the contention goes, AI tools could be used as weapons.

Even where face recognition technology is employed solely for matching photos, with no attempt at behavioral prediction, its at times high error rates mean we should ban it to prevent discrimination against certain groups. Only once we are certain of a near-zero error rate will this technology be ready for use.
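
To illustrate why even seemingly small error rates matter at the scale of public surveillance, here is a rough back-of-the-envelope sketch. All numbers are assumptions chosen for illustration, not figures from any study cited above.

```python
# Back-of-the-envelope sketch with assumed numbers: even a low false-match
# rate produces many false alarms when an entire crowd is scanned.
daily_passersby = 100_000   # assumed: faces scanned per day at one site
wanted_individuals = 10     # assumed: genuinely wanted persons in that crowd
false_match_rate = 0.001    # assumed: 0.1% of innocent faces wrongly flagged
true_match_rate = 0.99      # assumed: 99% of wanted faces correctly flagged

false_alarms = (daily_passersby - wanted_individuals) * false_match_rate
true_hits = wanted_individuals * true_match_rate

print(f"false alarms per day: {false_alarms:.0f}")  # ~100
print(f"true hits per day:    {true_hits:.1f}")     # ~9.9
# Roughly 10 of every 11 people flagged would be innocent; and if error
# rates differ across groups, those false stops also fall unevenly on them.
```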

Kevin Kyburz

Kevin Kyburz is part of Generation Y, the generation that grew up with a Windows 95 computer and the first PlayStation. Ever since he discovered the Internet, there have been no limits.
