Deepfake technology burst back on the scene at the end of February, following the publication of three deepfake videos of Tom Cruise on TikTok. The videos went viral, amassing over 11 million views, leading many to believe the Hollywood actor had joined the ByteDance-owned social media platform – despite the page being named “deeptomcruise”. The videos were quickly removed, but as the saying goes, this lie had travelled halfway around the world while the truth was still putting on its shoes.
At Digitalis, we have been alerting clients to the threat of deepfake technology for several years. Deepfakes originally resided in the domain of Hollywood film studios, where editors used highly sophisticated technology and skilled individuals to stitch images together. But recently, deepfakes have found their way on to our smartphones through apps that enable users to place an image of their chosen face into a famous film or music video – with increasingly convincing results. Deepfake technology has well and truly arrived in the mainstream.
The potential of deepfakes to cause harm
In our first newsletter of 2021, our Associate Director Celine MacDougall focused on the proliferation of fake news and its impact on individuals, businesses and governments, as well as the methods that can be used to halt its spread. While the focus of fake news has largely been on the spread of written misinformation and disinformation, deepfakes are a new tool in the armoury of those wishing to do harm. In the wrong hands, deepfake technology can cause significant damage – emotionally, commercially and societally.
The implications for the victims – whether individuals, businesses or governments – are huge. While deepfake technology will likely be used as harmless fun by many Gen Z users, its potential to spread misinformation is of serious concern to us all. In the wrong hands, its use for corporate fraud, extortion, market manipulation, political unrest and even playground bullying could affect our workplaces, our homes and our family lives.
In February this year, a Texas lawyer accidentally used a filter on Zoom that turned his appearance into that of a cat. While this was comical and relatively harmless, it is plausible that users could find a way to upload their own customisable filters, posing convincingly as someone else during a live video call. As seen from the Tom Cruise videos, deepfakes can replicate voices, faces and mannerisms. It is not too big a leap to see how this technology, in different hands, could potentially enable deepfake creators to hoodwink others into providing sensitive information, for example by posing as a trusted CEO, partner, doctor or even a child.
Protecting yourself against deepfake attacks
Deepfake attacks are still an unknown to many businesses and individuals, and protecting yourself, your family, and your business against them is no easy task. Mitigating against an attack that at once undermines our senses of sight and sound presents multiple challenges for which there is no simple solution.
At this moment in time, there are no widespread technical defences that enable consumers to prove a video’s authenticity. The best form of defence for an organisation is therefore to improve authentication policies and increase awareness of the possibility of a deepfake attack. There is a downside to this approach, though – it can create mistrust within an organisation, as people no longer automatically have faith in the authenticity of the voice at the other end of the phone or the face on their screen.
The potential threat to remote working
The COVID-19 pandemic prompted many of us to turn to remote working, using video conferencing platforms such as Zoom or Teams to work with others across multiple locations. McKinsey’s digital research team recently published a fascinating article titled “How COVID-19 has pushed companies over the technology tipping point – and transformed business forever”, citing our digital adoption of video conferencing as a key driver. But deepfake technology has the ability to undermine this progress: if businesses and individuals cannot be certain of the identity of the person they see on their screen (or hear on the phone), the need for increased face-to-face interaction may return.
It is deeply ironic that advances in deepfake technology could reverse the effect of other technical advances – in video conferencing – that have enabled us to work remotely during the pandemic. It remains to be seen how this threat will play out, and whether improvements in authentication will be able to mitigate it for individuals and businesses. As technological advances are made on both sides, it will be interesting to see whether deepfake technology threatens businesses and individuals on an increasing scale – and whether defences can be developed to thwart such attempts.