The recent killing of George Floyd by U.S. police appears to have been the catalyst for a backlash by tech companies such as Amazon and Microsoft, which are banning police use of their facial recognition software until more regulation is in place.

Problems

Whilst facial recognition technology (FRT) has benefits, such as quickly identifying the perpetrators of crimes and providing a source of evidence, privacy organisations argue that FRT systems infringe privacy rights. Many also believe that the deployment of the technology has run too far ahead of the regulations needed to control its use, and there is evidence that systems still contain flaws and biases that could lead to wrongful arrest.

For example, in the UK:

– In December 2018, Elizabeth Denham, the UK’s Information Commissioner, launched a formal investigation into how police forces used FRT after high failure rates, misidentifications, and worries about legality, bias, and privacy. This stemmed from the trial of ‘real-time’ facial recognition technology by the South Wales and Gwent Police forces on Champions League final day in Cardiff in June 2017, which was criticised for costing £177,000 yet resulting in just one arrest, of a local man, and even that arrest was unconnected to the technology.

– Trials of FRT at the 2016 and 2017 Notting Hill Carnivals led to the police facing criticism that FRT was ineffective, racially discriminatory, and confused men with women.

– In September 2018, a letter written by Big Brother Watch (a privacy campaign group) and signed by more than 18 politicians, 25 campaign groups, and numerous academics and barristers highlighted concerns that facial recognition was being adopted in the UK before it had been properly scrutinised.

– In September 2019, it was revealed that the owners of the King’s Cross Estate had been using FRT without telling the public, with London’s Metropolitan Police Service supplying the images for a database.

– A recently published letter from London Assembly members Caroline Pidgeon MBE AM and Sian Berry AM to Metropolitan Police Commissioner Cressida Dick asked whether FRT could be withdrawn during the COVID-19 pandemic, on the grounds that it has been shown to be generally inaccurate and still raises questions about civil liberties. The letter cited the first two deployments of live facial recognition (LFR) this year, in which more than 13,000 faces were scanned, only six individuals were stopped, and five of those six were misidentified and incorrectly stopped by the police. Also, of the eight people who triggered a ‘system alert’, seven were incorrectly identified. Concerns have also been raised about how the already questionable accuracy of FRT could be challenged further by people wearing face masks to curb the spread of COVID-19.

In the EU:

Back in January, the European Commission considered a ban on the use of facial recognition in public spaces for up to five years while new regulations for its use were put in place.

In the U.S.:

In 2018, a report by the American Civil Liberties Union (ACLU) found that Amazon’s Rekognition software showed racial bias after a trial in which it misidentified 28 members of Congress, a disproportionate number of them people of colour, as people who had been arrested.

In December 2019, a report from the U.S. National Institute of Standards and Technology (NIST), based on tests of 189 algorithms from 99 developers, found that facial recognition technology was less accurate at identifying African-American and Asian faces and was particularly prone to misidentifying African-American women.

Long-Standing Worries Among Tech Companies

Even though big tech companies such as Amazon (with Rekognition), Microsoft, and IBM supply facial recognition software, some have not sold it to police departments pending regulation, and most have voiced their own concerns for some years. For example, back in 2018, Microsoft said on its blog that “Facial recognition technology raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression. These issues heighten responsibility for tech companies that create these products. In our view, they also call for thoughtful government regulation and for the development of norms around acceptable uses”.

Temporary Bans and Distancing

The concerns about issues such as racial bias, mistaken identification, and how police may use FRT in an environment that may not be sufficiently regulated have been brought to a head with the killing of George Floyd and the protests and media coverage that followed.

With big tech companies keen to maintain an ethical and socially responsible public profile, to follow up on their previous concerns about flaws in FRT systems and the lack of regulation, and to distance themselves from police racism and racial profiling, or any connection to it (e.g. by supplying FRT software), four big tech companies have announced the following:

– Amazon has announced that it is implementing a one-year moratorium on police use of its FRT in order to give Congress enough time to implement appropriate rules. The company stressed that it had advocated stronger government regulations to govern the ethical use of facial recognition, and that even though it is banning police use of its FRT, it is still happy for organisations such as Thorn, the International Centre for Missing and Exploited Children, and Marinus Analytics to use the ‘Rekognition’ FRT to help rescue human trafficking victims and reunite missing children with their families.

– After praising the progress made with the recent passing of “landmark facial recognition legislation” signed by Washington Governor Jay Inslee, Microsoft has announced that it will not sell its FRT to police departments until there is a federal law, grounded in human rights, to regulate its use. Microsoft has also publicly backed legislation in California that would stop police body cameras from incorporating FRT.

– IBM’s CEO, Arvind Krishna, has sent a letter to the U.S. Congress setting out policy proposals to advance racial equality and stating that IBM will no longer offer its general-purpose facial recognition or analysis software. The letter stated that IBM opposes and will not condone uses of facial recognition technology, including that offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose that is not consistent with its “values and Principles of Trust and Transparency”. The company says that now is the time “to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies”.

– Google has also distanced itself from FRT, with Timnit Gebru, leader of Google’s ethical artificial intelligence team, arguing in the media that facial recognition is currently too dangerous to be used for law enforcement purposes.

Looking Forward

Clearly, big tech companies that have been at the forefront of new technologies still in the early stages of trials and deployment face a difficult public balancing act when the benefits of those technologies are overshadowed by their flaws, or by how the agencies that purchase and use them behave. Tech companies such as Google, Amazon, Microsoft, and IBM must protect their brands and their public values, and need to reflect the views of right-thinking people. The moves by these companies may push forward the introduction of regulations, which is likely to be beneficial, and the hope among users of these companies’ services is that, as the companies assure us, genuine ethical and social justice beliefs are the key drivers behind these announcements.
