Recently, several mayors of European capitals held video calls with a man claiming to be Vitali Klitschko. However, although the figure on screen looked and sounded like Klitschko, the video had been manipulated with deepfake technology: the mayors were not in fact speaking to the former heavyweight boxing champion turned Mayor of Kyiv, now a prominent figure in the Russo-Ukrainian war.
Deepfake technology enables a video to be digitally altered so that the person featured in it appears to be someone else. The technique is typically used to mimic celebrities, such as @deeptomcruise on TikTok, but it is increasingly being used to spread misinformation and for other, even more malicious, purposes. The vast majority of deepfake content is used against women in the form of non-consensual pornography. Estimates from 2021 suggest that around 90% of deepfake victims are women, and the gendered nature of this abuse is an important issue to address in order to protect victims.
Deepfakes of celebrities and public figures are relatively common because their creation is made easier by the fact that such individuals are widely photographed. Computers are “trained” to create deepfakes using multiple images of the individual to be impersonated, adapting video frames dynamically and replacing the features of the person captured on camera with those of the celebrity.
Klitschko’s image has been captured and published countless times over his years as a professional boxer and, more recently, as a prominent politician, providing ample material for deepfake technology to be applied to him. In this case, the mayors and politicians taking part in the video calls eventually realised they were not talking to the real Klitschko, as the impersonator contradicted Klitschko’s previous statements. However, in pre-recorded videos such manipulation can be much harder to detect, and with deepfake technology becoming more sophisticated and moving closer to producing convincing results from a single photo, the potential for considerable damage is widespread.
How believable are deepfakes?
A study by MIT found that individuals are more likely to believe that an event actually occurred if it is presented in video form than if they read about it in text. However, the study goes on to say that the video format is no more likely to influence their opinion or behaviour than textual content, which is a small relief, as the current wave of deepfake videos can be difficult for the large platforms hosting them to detect or remove.
A separate study from Australian universities using brain-activity measurements suggested that humans are able to subconsciously tell when something is awry in altered images 54% of the time. However, this subconscious identification did not carry over into conscious judgement: when asked, participants were only able to verbally identify deepfakes 37% of the time. This suggests that, unless they are explicitly prompted to look for manipulation, people are likely to dismiss the subtle cues their brains pick up and accept most deepfakes as genuine.
If it is hard to fool both our eyes and our ears at once, fraudsters are more likely to succeed by tricking only one sense. That is what happened last year to a bank manager in Hong Kong, who was convinced by a conman in Dubai to transfer over $400,000 by phone. The so-called deepvoice technology used was able to mimic the voice of a company director known to the bank manager and convince him to send the cash to facilitate an acquisition.
While deepvoice technology is less well known than deepfakes, it has existed for almost as long: Adobe demonstrated a prototype product, Voco, back in 2016 that was able to take around 20 minutes of a person’s speech and generate an accurate sound-alike. Security and legal concerns appear to have shuttered that project, but at its core it relied on the same technology that created Siri.
Before deepvoice was perfected, deepfakes relied on voice actors to mimic the speech of the target. In 2018, actor-director Jordan Peele impersonated Barack Obama’s voice in a viral deepfake video of the former president. The fact that the technology can now produce both video and speech automatically shows how rapidly its potential has grown since then.
Changing the face of politics
As the technology advances, deepfakes are increasingly being targeted towards political goals. The technology is already being actively used in South Korea, where former Prosecutor General Yoon Suk-yeol recently narrowly won the presidency after his campaign staff used a deepfake version of him, dubbed AI Yoon, to engage younger voters by answering political questions. This open use of deepfake technology is arguably a legitimate way to connect with audiences, and not just for politicians. However, judging from the comments left under many of the videos, it is not clear that all voters understood they were watching deepfake creations rather than actual footage of Yoon.
While research shows that deepfakes can usually be identified, the potential for them to be used in cybercrime and to disrupt the political sphere is cause for concern. Much of the onus for addressing these concerns falls on tech companies and their ability to train their algorithms to identify and flag deepfakes. As with all cybercrime, there is a race to stay one step ahead of the technology. In the meantime, the best strategy is to educate, raise awareness, and train high-profile figures and the public to be wary of the risks.
Digitalis accessibility statement
We firmly believe that the internet should be available and accessible to anyone, and are committed to providing a website that is accessible to the widest possible audience, regardless of circumstance and ability.
To fulfill this, we aim to adhere as strictly as possible to the World Wide Web Consortium’s (W3C) Web Content Accessibility Guidelines 2.1 (WCAG 2.1) at the AA level. These guidelines explain how to make web content accessible to people with a wide array of disabilities. Complying with them helps us ensure that the website is accessible to everyone, including people who are blind and people with motor impairments, visual impairments, or cognitive disabilities.
This website utilizes various technologies that are meant to make it as accessible as possible at all times. We utilize an accessibility interface that allows persons with specific disabilities to adjust the website’s UI (user interface) and adapt it to their personal needs.
Additionally, the website utilizes an AI-based application that runs in the background and constantly optimizes its accessibility level. This application remediates the website’s HTML and adapts its functionality and behavior for screen readers used by blind users and for keyboard navigation used by individuals with motor impairments.
If you’ve found a malfunction or have ideas for improvement, we’ll be happy to hear from you. You can reach out to the website’s operators by emailing webrequests@digitalis.com.
Our website implements the ARIA (Accessible Rich Internet Applications) attributes technique, alongside various behavioral changes, to ensure that blind users visiting with screen readers are able to read, comprehend, and enjoy the website’s functions. As soon as a user with a screen reader enters the site, they immediately receive a prompt to enter the Screen-Reader Profile so they can browse and operate the site effectively. Here’s how our website covers some of the most important screen-reader requirements, alongside illustrative code sketches:
Screen-reader optimization: we run a background process that learns the website’s components from top to bottom, to ensure ongoing compliance even when the website is updated. In this process, we provide screen readers with meaningful data using the ARIA set of attributes: for example, accurate form labels; descriptions for actionable icons (social media icons, search icons, cart icons, etc.); validation guidance for form inputs; and element roles such as buttons, menus, and modal dialogs (popups). Additionally, the background process scans all of the website’s images and provides an accurate and meaningful image-object-recognition-based description as an ALT (alternative text) attribute for images that are not otherwise described. It also extracts text embedded within images using OCR (optical character recognition) technology. To turn on screen-reader adjustments at any time, users need only press the Alt+1 keyboard combination. Screen-reader users also receive an automatic announcement inviting them to turn Screen-Reader Mode on as soon as they enter the website.
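To make the kind of ARIA remediation described above more concrete, here is a minimal TypeScript sketch of how a background pass might add missing attributes. It is an illustration rather than the interface’s actual code: the heuristics, the ".popup" class name, and the fallback labels are assumptions.

    // Minimal sketch (not the actual accessibility interface) of a background
    // pass that adds missing ARIA metadata and flags undescribed images.
    function remediateAria(root: Document = document): void {
      // Give icon-only buttons an accessible name so screen readers can announce them.
      root.querySelectorAll<HTMLButtonElement>("button:not([aria-label])").forEach((btn) => {
        if ((btn.textContent ?? "").trim() === "") {
          // Hypothetical heuristic: fall back to the title attribute or a generic label.
          btn.setAttribute("aria-label", btn.title || "button");
        }
      });

      // Images with no alt text: mark them for a later description pass; a real
      // system would generate a description via image recognition and OCR instead.
      root.querySelectorAll<HTMLImageElement>("img:not([alt])").forEach((img) => {
        img.setAttribute("alt", "");          // empty alt stops screen readers reading the file name
        img.dataset.needsAltReview = "true";  // flag for the smarter description pass
      });

      // Announce popups as modal dialogs so assistive technology treats them correctly.
      // ".popup" is an assumed class name used only for this illustration.
      root.querySelectorAll<HTMLElement>(".popup:not([role])").forEach((popup) => {
        popup.setAttribute("role", "dialog");
        popup.setAttribute("aria-modal", "true");
      });
    }

    remediateAria();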
These adjustments are compatible with all popular screen readers, including JAWS and NVDA.
Keyboard navigation optimization: the background process also adjusts the website’s HTML and adds various behaviors using JavaScript code to make the website operable by keyboard. This includes the ability to navigate the website using the Tab and Shift+Tab keys, operate dropdowns with the arrow keys, close them with Esc, trigger buttons and links using the Enter key, navigate between radio and checkbox elements using the arrow keys, and fill them in with the Spacebar or Enter key. Additionally, keyboard users will find quick-navigation and content-skip menus, available at any time by pressing Alt+1, or as the first elements of the site while navigating with the keyboard. The background process also handles triggered popups by moving the keyboard focus towards them as soon as they appear, and it does not allow the focus to drift outside of them.
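As a rough illustration, behaviors of this kind could be implemented along the following lines in TypeScript. The selectors and markup conventions used here (the ".dropdown" class and the placement of aria-expanded, for instance) are assumptions for the sketch, not the actual implementation.

    // Illustrative sketch of keyboard behaviors similar to those described above.
    document.addEventListener("keydown", (event: KeyboardEvent) => {
      // Esc closes an open dropdown and returns focus to its trigger button.
      if (event.key === "Escape") {
        const openDropdown = document.querySelector<HTMLElement>(".dropdown[aria-expanded='true']");
        if (openDropdown) {
          openDropdown.setAttribute("aria-expanded", "false");
          (openDropdown.querySelector<HTMLElement>("button") ?? openDropdown).focus();
        }
      }

      // Keep Tab / Shift+Tab focus trapped inside an open modal popup.
      if (event.key === "Tab") {
        const popup = document.querySelector<HTMLElement>("[role='dialog'][aria-modal='true']");
        if (!popup) return;
        const focusable = popup.querySelectorAll<HTMLElement>(
          "a[href], button, input, select, textarea, [tabindex]:not([tabindex='-1'])"
        );
        if (focusable.length === 0) return;
        const first = focusable[0];
        const last = focusable[focusable.length - 1];
        if (event.shiftKey && document.activeElement === first) {
          event.preventDefault();
          last.focus();        // wrap backwards from the first element to the last
        } else if (!event.shiftKey && document.activeElement === last) {
          event.preventDefault();
          first.focus();       // wrap forwards from the last element to the first
        }
      }
    });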
Users can also use shortcuts such as “M” (menus), “H” (headings), “F” (forms), “B” (buttons), and “G” (graphics) to jump to specific elements.
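A simplified sketch of such single-letter quick navigation is shown below: each key moves focus to the next element of the corresponding type. The selector mapping and the rule for ignoring keystrokes made inside form fields are illustrative assumptions.

    // Rough sketch of single-letter quick navigation, not the interface's real code.
    const quickNavSelectors: Record<string, string> = {
      m: "nav, [role='navigation']",   // menus
      h: "h1, h2, h3, h4, h5, h6",     // headings
      f: "form",                       // forms
      b: "button, [role='button']",    // buttons
      g: "img, svg, [role='img']",     // graphics
    };

    document.addEventListener("keydown", (event: KeyboardEvent) => {
      const selector = quickNavSelectors[event.key.toLowerCase()];
      const target = event.target instanceof HTMLElement ? event.target : null;
      // Ignore the shortcut while the user is typing into a form field.
      if (!selector || (target && /^(input|textarea|select)$/i.test(target.tagName))) return;

      const candidates = Array.from(document.querySelectorAll<HTMLElement>(selector));
      const current = document.activeElement as HTMLElement | null;
      const index = current ? candidates.indexOf(current) : -1;
      const next = candidates[(index + 1) % candidates.length];
      if (next) {
        next.setAttribute("tabindex", "-1"); // make non-interactive elements focusable
        next.focus();
      }
    });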
We aim to support as wide an array of browsers and assistive technologies as possible, so that our users can choose the tools that fit them best, with as few limitations as possible. We have therefore worked very hard to support all major systems that together comprise over 95% of the user market share, including Google Chrome, Mozilla Firefox, Apple Safari, Opera and Microsoft Edge, as well as JAWS and NVDA (screen readers), for both Windows and Mac users.
Despite our very best efforts to allow anybody to adjust the website to their needs, there may still be pages or sections that are not fully accessible, are in the process of becoming accessible, or lack an adequate technological solution to make them accessible. Still, we are continually improving our accessibility, adding to, updating and improving its options and features, and developing and adopting new technologies, all with the aim of reaching the optimal level of accessibility as technology advances. For any assistance, please reach out to webrequests@digitalis.com.