Police should be banned from using live facial recognition (LFR) technology in all public spaces because deployments of it breach ethical standards and human rights law, a study has concluded.
LFR involves linking cameras to databases containing photos of people. Images from the cameras can then be checked against those photos to see if they match.
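In outline, that matching step works by reducing each face image to a numeric "embedding" and comparing embeddings for similarity. The sketch below is a hypothetical illustration of that principle only: the vectors, threshold and function names are invented for demonstration and do not describe any police system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, from -1 (opposite) to 1 (identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(live_face, watchlist, threshold=0.9):
    """Return the index of the best watchlist match above threshold, or None.

    A live camera face is flagged only if its best similarity score
    clears the threshold; too low a threshold means more false alerts.
    """
    scores = [cosine_similarity(live_face, entry) for entry in watchlist]
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None

# Toy example: two watchlist embeddings and one camera capture
# that closely resembles the first entry (all values invented).
watchlist = [np.array([0.9, 0.1, 0.2]), np.array([0.1, 0.8, 0.5])]
camera_face = np.array([0.88, 0.12, 0.21])
print(match_against_watchlist(camera_face, watchlist))  # → 0 (matches first entry)
```

The threshold choice is the crux of such systems: it directly trades missed identifications against false alerts on passers-by.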
British police have experimented with the technology, believing it can help combat crime and terrorism. But courts have in some cases found against the way police deployed LFR and handled the resulting infringements of the privacy rights of people walking in the streets where it was used. There are also concerns about racial bias.
The report, from the Minderoo Centre for Technology and Democracy, at the University of Cambridge, says LFR should be banned from use in streets, airports and any public spaces – the very areas where police believe it would be most valuable.
The study examined three deployments of LFR, one by the Metropolitan police and two by South Wales police. Both forces told the Guardian they had made improvements and believed in the benefits of LFR.
The report author Evani Radiya-Dixit said: “We find that all three of these deployments fail to meet the minimum ethical and legal standards based on our research on police use of facial recognition.
“To protect human rights and improve accountability in how technology is used, we must ask what values we want to embed in technology and also move from high-level values and principles into practice.”
The report adds: “We have shown how police use of facial recognition fails to incorporate many of the known practices for the safe and ethical use of large-scale data systems. This problem moves well beyond the concern of bias in facial recognition algorithms.”
Inside UK law enforcement, LFR is seen as potentially the next big crime-fighting innovation, on a par with the introduction of fingerprinting. It could sharply boost the ability to locate an individual and track them.
Critics warn it could lead to abuses of human rights on a huge scale, including of the rights to protest and freedom of assembly.
More authoritarian regimes overseas, such as China, have used the technology as part of their suite of repressive tools.
The Met said the algorithm used had improved hugely in its accuracy with help from the National Physical Laboratory and input from the Defence Science and Technology Laboratory, with a false alert rate of less than 0.08%.
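To put that figure in context, a quick calculation shows what a 0.08% false alert rate implies at scale, assuming the rate is expressed per face scanned (an assumption; the article does not give the Met's exact denominator):

```python
# 0.08% expressed as a proportion of faces scanned (assumed denominator).
false_alert_rate = 0.0008

for faces_scanned in (10_000, 100_000):
    expected_false_alerts = false_alert_rate * faces_scanned
    print(f"{faces_scanned:>7} faces scanned -> ~{expected_false_alerts:.0f} expected false alerts")
```

Even a seemingly small error rate therefore produces roughly 8 wrongly flagged people per 10,000 faces scanned, which is why critics focus on where and how widely the cameras are deployed.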
Pete Fussey, from the University of Essex, was hired by the Met to audit its earlier LFR trials and produced a critical report. The Met claimed a 70% success rate by 2020; Fussey said it was only 19%.
Fussey said: “Live facial recognition is a powerful and intrusive technology that has real implications for the rights of individuals.
“That the court of appeal explicitly stated in 2020 that South Wales police’s use of this technology was ‘unlawful’ makes it difficult to argue this technology should be used.
“Current regulation and oversight structures do not have the scope to protect people’s rights from misuses of this technology.”
South Wales police said 61 arrests had been made through LFR and they had improved their system since losing in court “to ensure there is no risk of breaching equality requirements through bias or discrimination”.
The assistant chief constable Mark Travis said: “The whole aim of using facial recognition technology is to keep the public safe and assist us in identifying serious offenders in order to protect our communities from individuals who pose significant risks.”