The rapid rise of AI has revolutionised how people create, share, and consume digital content. One of the most concerning developments is the emergence – and alarming improvement – of deepfakes: highly realistic AI-generated images, videos, and audio that depict real individuals saying or doing things they never actually said or did.
Until recently, deepfakes remained a fringe technological curiosity, but they have quickly become part of everyday online life. Over the past year, their quality and prevalence have surged, and to ordinary viewers or listeners, many deepfake videos and audio clips are now indistinguishable from authentic content. Generative AI tools such as X’s Grok chatbot have recently drawn widespread media scrutiny for being used to produce non-consensual explicit deepfakes, underscoring the serious risks posed by these AI tools and their rapid spread across social media.
As these deepfakes become cheaper, faster, and easier to produce, lawmakers are struggling to keep pace with the harms they enable. Can existing regulatory frameworks keep up with a technology built to undermine the very notion of authenticity?
The need for new regulation
Despite deepfakes becoming more common, the law often falls back on outdated frameworks. Existing laws concerning offences such as fraud, harassment, and defamation can deal with some harms after they occur, but they were never designed to handle synthetic media created at this scale. As a result, enforcement is often late and piecemeal.
Company executives, voters, and private individuals have already fallen victim to deepfakes – for example, being targeted in financial scams, disinformation campaigns during elections, and the creation of non-consensual explicit content. Governments are starting to notice that targeted regulation is no longer optional. The challenge, however, lies in designing laws that prevent abuse while preserving innovation and freedom of expression.
Italy’s pioneering deepfake legislation
Several countries are introducing deepfake laws: Denmark gives individuals rights over their likeness; France and the UK require transparency and penalise non‑consensual content, including sexual deepfakes; and South Korea and the US similarly criminalise non‑consensual deepfakes. This reflects an overall global trend towards stricter regulation.
However, Italy has emerged as a notable case study in this evolving legal landscape. In October 2025, it became the first European Union (EU) member state (and one of the first countries globally) to enact a comprehensive national law explicitly addressing deepfakes and synthetic media. Law No. 132/2025 marks a milestone: anyone who maliciously creates or distributes deepfake images, videos, or audio that cause unjust harm can now be held criminally accountable.
This law goes beyond high-level AI principles; it directly targets the unlawful creation and spread of deceptive deepfake content, making it a criminal offence for the first time in Italy, punishable by one to five years’ imprisonment when serious harm occurs. The legislation emphasises transparency, human oversight, and accountability, particularly in high-risk sectors such as justice, healthcare, and education. It also works to strengthen protections for vulnerable groups, including children, with age-based safeguards and access controls.
Importantly, in Italy, platforms that host user content must now assess whether their site could be used to spread deepfakes. They may need to add detection tools, verify sources, and act quickly to take down harmful content. Anyone who shares an unauthorised deepfake, whether it depicts a real person or falsely represents a company, can also face criminal charges. News organisations, marketing firms, political campaigns, and social media platforms are particularly exposed because of the speed at which they publish content, and this law is expected to drive stricter oversight across these sectors.
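One common building block of the detection and takedown tooling such platforms deploy is perceptual hashing, which lets a host match re-uploads of known harmful media even after re-encoding or minor edits. The sketch below is purely illustrative (the function names, the 4x4 "frames", and the distance threshold are assumptions for demonstration, not anything mandated by Law No. 132/2025):

```python
# Hypothetical sketch of one piece of a platform takedown pipeline:
# matching an uploaded frame against a registry of known harmful
# content via a perceptual "average hash". All names and thresholds
# here are illustrative assumptions.

def average_hash(pixels: list[list[int]]) -> int:
    """Hash a grayscale pixel grid: each bit is 1 if the pixel is
    brighter than the grid's mean, so re-encodes or small brightness
    tweaks barely change the result."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_known_content(candidate_hash: int,
                          registry: set[int],
                          max_distance: int = 4) -> bool:
    """Flag an upload whose hash is near any registered hash."""
    return any(hamming_distance(candidate_hash, h) <= max_distance
               for h in registry)

# Toy 4x4 "frames": a registered clip and a slightly re-encoded copy.
original = [[10, 200, 10, 200],
            [200, 10, 200, 10],
            [10, 200, 10, 200],
            [200, 10, 200, 10]]
reencoded = [[12, 198, 11, 201],
             [199, 12, 202, 9],
             [11, 197, 10, 203],
             [198, 13, 200, 11]]

registry = {average_hash(original)}
print(matches_known_content(average_hash(reencoded), registry))  # True
```

Hash matching only catches content a platform has already registered; detecting novel deepfakes requires classifiers or provenance signals (such as content credentials) on top of it.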
When the law struggles to keep up
Despite recent advances, including Italy’s example, deepfakes remain exceptionally difficult to regulate. The technology is evolving far faster than legislation, and generative AI tools move easily across borders. Banning harmful content in one country does little to prevent its creation or spread elsewhere.
There are also practical challenges to enforcement. As generative AI models become more sophisticated, identifying deepfakes with certainty becomes increasingly difficult, and attributing responsibility is often impossible, particularly when content is shared anonymously or through decentralised platforms. Even when wrongdoing is clear, pursuing legal remedies can be slow, especially when actors operate across jurisdictions.
Globally, approaches to regulating deepfakes remain fragmented. While the EU is pursuing a comprehensive, risk-based regime, other jurisdictions, including the UK, favour narrower or sector-specific rules. This lack of a unified international standard sustains legal uncertainty and, as a result, facilitates the dissemination of harmful deepfakes.
Regulation is not enough: stay alert, be prepared
Although progress has been made, existing regulatory frameworks remain insufficient relative to the speed and scale of the deepfake threat. Even significant measures, such as Italy’s legislation, are unlikely to fully address the risks on their own.
Regulation is essential, but it is only one piece of the puzzle. Cultivating awareness, building preparedness, and strengthening media literacy are just as critical to staying ahead of the threat.
The risks are particularly acute for individuals in prominent or decision-making roles. Executives, politicians, and other public figures are prime targets for deepfakes because their likeness carries inherent credibility. A manipulated video or audio recording can damage reputations long before its authenticity is questioned, and the consequences can be lasting.
A practical first step is to conduct a reputational audit: identify publicly available information – such as high-quality images, videos, or audio – that malicious actors could exploit to create synthetic media. Anticipating these risks, particularly how an online footprint can be exploited, is crucial for preparedness in an ever-changing digital environment.
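In practice, such an audit amounts to cataloguing each public asset and ranking how useful it would be to a voice- or face-cloning pipeline. The sketch below is a minimal illustration under stated assumptions: the fields, weights, and sample assets are hypothetical, not an established audit methodology.

```python
# Hypothetical reputational-audit sketch: catalogue public media and
# rank which assets are most exploitable for cloning. The scoring
# weights are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class PublicAsset:
    description: str
    kind: str            # "audio", "video", or "image"
    minutes: float       # duration of usable material (0 for images)
    high_quality: bool   # clean audio / high resolution

def clone_risk(asset: PublicAsset) -> float:
    """Crude score: more minutes of clean audio or video of a person
    generally means easier voice or face cloning."""
    base = {"audio": 3.0, "video": 2.0, "image": 1.0}[asset.kind]
    quality = 2.0 if asset.high_quality else 1.0
    return base * quality * (1.0 + asset.minutes)

footprint = [
    PublicAsset("Keynote recording on YouTube", "video", 40.0, True),
    PublicAsset("Podcast interview", "audio", 55.0, True),
    PublicAsset("Conference headshot", "image", 0.0, True),
]

# Review the riskiest assets first.
for a in sorted(footprint, key=clone_risk, reverse=True):
    print(f"{clone_risk(a):7.1f}  {a.description}")
```

The point is not the particular weights but the discipline: an inventory like this makes explicit which recordings an organisation might seek to remove, watermark, or at least monitor.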
Navigating a world where seeing is not believing
Deepfakes challenge one of the most basic assumptions of the digital age: that seeing is believing. They are not going away anytime soon; if anything, they will only become more sophisticated and harder to detect. Therefore, the law must evolve alongside this technology, supported by international cooperation and informed organisational practices. In the meantime, awareness and preparedness remain essential.
Policing what is real and what is not requires not only stronger regulation but also a collective understanding and active management of a reality in which authenticity can no longer be taken for granted.