In the digital age, personal privacy is harder than ever to maintain. The most widely used form of currency is perhaps neither the pound nor the dollar, but data. Major cases of data misuse, such as the Cambridge Analytica scandal, have raised awareness of the risks people face in how their own data is used, and many now exercise more caution online by adopting stricter security measures.
Despite this shift towards data conservatism, many companies are achieving financial success by selling goods and services that process biometric data. Some firms make it clear that this processing is central to their products, such as ancestry and health testing kits; others, such as face-editing apps, offer a service at no monetary cost but collect images of their users without upfront disclosure. Given the sensitive nature of personal biometrics, it’s important that those who use these apps are aware of the potential pitfalls and know how to protect themselves from harm.
The rise of genomics and associated risks
The rise of genomics, with individuals providing DNA samples for testing, has led to widespread biometric data collection. Results often include details of an individual’s ancestry and their genetic predisposition to medical conditions. Due to the personal nature of the data collected, providers claim to have strong safeguards in place to prevent misuse.
However, many of these systems are far from perfect. 23andMe, the world’s most popular personal genetic testing firm, suffered a security breach in late 2023. Having initially gained access to around 14,000 accounts, the attackers then exploited 23andMe’s default user settings, which allow genetic relatives to view each other’s health information, to scrape data belonging to a total of 6.9 million accounts.
The leak of personal health data is a serious breach of privacy. As well as having a profound emotional impact on those affected, there may also be a financial impact: the publication of sensitive information on someone’s health may affect their employment status, or even lead to higher insurance premiums. Although some countries have legislated to prevent this, such as the US with its 2008 Genetic Information Nondiscrimination Act (GINA), this is not common practice worldwide. Furthermore, GINA itself is limited in scope: both life and disability insurers are exempt.
However, there are steps you can take to reduce the risk to your privacy. Securing your account with two-factor authentication and a unique password is essential for protecting such confidential data, and disabling automatic sharing features can drastically limit your exposure to attacks like this one.
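To see why an authenticator app’s codes add protection beyond a password, here is a minimal sketch of the time-based one-time password scheme (TOTP, RFC 6238) that most authenticator apps implement. The function names are illustrative, not taken from any particular library; a real account should of course use an established authenticator app rather than hand-rolled code.

```python
# Minimal TOTP sketch using only the Python standard library.
# Each code is valid for one 30-second window, so a stolen code
# quickly becomes useless -- unlike a stolen password.
import base64
import hashlib
import hmac
import struct
import time


def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226, section 5.3)
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(base32_secret: str, step: int = 30, digits: int = 6) -> str:
    """Time-based variant: HOTP computed over the current time window."""
    key = base64.b32decode(base32_secret, casefold=True)
    counter = int(time.time() // step)
    return hotp(key, counter, digits)
```

The server and the user’s device share the secret once, at setup time; afterwards both sides can derive the same short-lived code independently, so the code itself never needs to be stored or reused.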
Face-editing apps and the data they collect
While genetic testing services are transparent in disclosing their processing of personal biometric information, face-editing apps are often far more opaque. These apps enable users to change the appearance of images in various ways, such as using a filter to make their face appear elderly, or employing an airbrush tool to remove something in the background of a family photo. Some apps have now begun integrating AI to further automate these editing tools.
Such apps are largely available to download without any monetary payment – the hidden cost is in the data you give away by using them. They collect data in much the same way as social media platforms do, using cookies to serve targeted adverts to users. But many also reserve the right to keep the data from any photos edited in the app, giving the companies access to images of millions of different faces which they use to fine-tune their products.
This creates numerous privacy risks. If an individual provides enough images of their face from different angles, these firms could generate entirely new, realistic images of them, much as today’s AI tools can. And by uploading edited photos to social media, or linking their social media account to the app, users give the company the ability to identify them.
In theory, legal barriers should prevent these apps from sharing this type of data with third parties. But in reality, the situation is far less clear. Two of the most popular apps in this category were developed in jurisdictions in which government agencies are believed to frequently access private company data. It is a very real possibility that user data from these apps could be used by these agencies, potentially in the development of facial recognition systems.
Avoiding these apps is the most effective way to protect your personal privacy. If you do use them, read and understand the developer’s data processing policy, and check whether the company operates in a jurisdiction with strong legal protections. When using any service that requires you to share biometric data, weigh the value it provides against the risk of sharing your data: handing your data to a well-respected company to address a medical concern is very different to downloading an app from a little-known developer and granting it access to your photos and social media profile.