For the last decade, police forces in Europe have had access to a goldmine of personal data to facilitate crime prevention and investigation. The true crime phenomenon in popular culture has made us all aware of the vital role played by fingerprints, DNA samples and dental impressions in criminal deduction. Now burgeoning biometric technology is set to revolutionise modern-day policing. European lawmakers are primed to endorse legislation that will give police forces access to millions of photos of people’s faces, creating a new facial recognition network that spans the continent.
The details of this legislative overhaul were first released in December 2021 within the Prüm II data-sharing proposals, which aim to improve cross-border cooperation and equip police officers in the EU with state-of-the-art investigative tools. Criticism of the plans has emerged, with the tension between privacy and security at the forefront of arguments against implementing such an arguably insidious system.
Prüm and policing
The original Prüm protocol was introduced in the early 2000s to enable European nations to share data to address cross-border crime. It has so far allowed for the automated exchange of DNA, fingerprints, and vehicle registration data. But as forensic science (and the technology enabling it) has advanced in recent years, many of the original technical specifications of the Prüm protocol have become outdated.
An updated version, Prüm II, will be introduced later this year, with the addition of new data for sharing, including facial photographs. This proposed change has raised questions about data protection and the risks of infringing on individual freedoms.
The implementation of facial recognition
The Prüm II proposals will allow a nation to compare photos against the databases of other countries signed up to the agreement. This will include images of suspects, those convicted of crimes, asylum seekers and other individuals, all linked through a central router that will serve as a message broker between police forces.
The use of facial recognition from live cameras connected to public spaces (smart cameras) has faced criticism. Concerns include unreliable levels of accuracy, the risk of amplifying discrimination, and privacy rights infringements. While Prüm II does not propose to include information from smart cameras, it will allow the use of retrospective facial recognition: still images taken from CCTV cameras, photos from social media, or those on a victim’s phone.
Complications and their implications
Policy advisor Ella Jakubowska, from non-profit European Digital Rights (EDRi), makes the argument that retrospective recognition can be just as problematic: “When you apply facial recognition to footage or images retrospectively, the damage can sometimes be even greater, because of the ability to go back, for example, to a demonstration three years ago, or to see who I met five years ago, because I am now a political opponent.”
The issue of misidentification and accuracy is also a strong cause for objection. Several US cities have banned police use of the technology, citing its unreliable accuracy and the hugely detrimental impact on wrongfully accused parties.
Most concerning for Jakubowska is that the proposed expansion to the legislation may even come to incentivise the establishment of facial recognition databases in Member States that have so far chosen not to carry out such surveillance within their communities.
Such concerns are highlighted by examples throughout the continent, including a case in 2021 in which police in the Netherlands were required to delete 218,000 photos they had wrongly included in their facial recognition database. Even more recently, the UK’s data watchdog fined the facial recognition company Clearview AI, whose previous clients included the Metropolitan Police and the National Crime Agency, for collecting images of people from social media platforms and the web to add to a global database.
While biometric data can be a powerful tool in preventing and investigating crime, it is essential that it is used with care. Potentially intrusive technologies such as these require serious scrutiny themselves, to ensure that privacy and security are maintained and that the technologies are not used in ways that hinder justice. With the global biometrics industry projected to be worth between $68bn and $82bn in the coming years, it’s vital that such personal and powerful data is not misused. From a data privacy point of view, many people feel uncomfortable with the idea of reducing our faces, which are central to our identity, to data points that can be sold or shared.