In April 2021, the European Commission (EC) released its much-awaited Artificial Intelligence Act, a comprehensive regulatory proposal that classifies AI applications into distinct risk categories. Among the identified high-risk applications, remote biometric systems, which include facial recognition technology (FRT), were singled out as particularly concerning. Their deployment, especially in law enforcement, may lead to human rights abuses in the absence of robust governance mechanisms.
Law enforcement and facial recognition technology
Across jurisdictions, policymakers are increasingly aware of both the opportunities and risks associated with law enforcement’s use of FRT. Here, facial recognition refers to the process of identifying a person by comparing a probe image (photos or video stills of suspects or persons of interest) against facial images of criminals and missing persons stored in one or more reference databases in order to advance a police investigation.
On one hand, FRT has the potential to help solve, stop and prevent crimes and bring offenders to justice. More specifically, it could be useful in various types of investigations, such as identifying the perpetrator of ATM fraud, locating a terrorist in public spaces, fighting child abuse or finding missing persons. On the other hand, early experience shows that without proper oversight, FRT can result in human rights abuses and harm to citizens.
In this context, striking the right balance appears difficult. Policymakers may explore options ranging from an outright ban to the introduction of additional accountability mechanisms that limit the risk of wrongful arrests. In the US, cities such as San Francisco, Oakland and Boston have banned the use of FRT by public agencies, while the states of Washington, Virginia and Massachusetts have introduced legislation to regulate its use. In other regions, court decisions play an important role in shaping the policy agenda. The UK Court of Appeal ruled that the South Wales Police’s deployment of FRT to identify wanted persons at certain events and public locations where crime was considered likely to occur was unlawful.
At a more global level, the United Nations Office of the High Commissioner for Human Rights’ (OHCHR) recent report on the right to privacy in the digital age recommends that governments halt the real-time use of remote biometric recognition in public spaces until they can demonstrate that there are no significant issues with accuracy or discriminatory effects. It also recommends that these AI systems comply with robust privacy and data protection standards.
Facial recognition technology requires a robust governing structure
Despite these important developments, most governments around the world recognize the potential of facial recognition systems for national safety and security but are still grappling with how to regulate FRT, because crucial considerations have been largely overlooked. If the proportional use of FRT for legitimate policing aims were authorized, which oversight body should be in charge of assessing the compliance of law enforcement activities with human rights and handling potential complaints from citizens? How might a high level of performance of the deployed FRT solutions be maintained? What procurement processes should law enforcement agencies follow?
To address these challenges, the World Economic Forum – in partnership with the International Criminal Police Organization (INTERPOL), the Centre for Artificial Intelligence and Robotics of the United Nations Interregional Crime and Justice Research Institute (UNICRI) and the Netherlands police – has released a white paper that introduces a governance framework structured around two critical components:
- A set of principles for action that defines what constitutes responsible use of facial recognition for law enforcement investigations by covering all relevant policy considerations;
- A self-assessment questionnaire that details the requirements that law enforcement agencies must respect to ensure compliance with the principles for action.
As such, this initiative represents the most comprehensive policy response to the risks associated with FRT for law enforcement investigations, led by a global and multistakeholder community.