
Policing the unreal: can the law keep up with deepfakes?

March 2026
By Nicole Trofimov

The rapid rise of AI has revolutionised how people create, share, and consume digital content. One of the most concerning developments is the emergence – and alarming improvement – of deepfakes: highly realistic synthetic images, videos, and audio generated by AI to depict real individuals saying or doing things they never actually said or did.

Until recently, deepfakes remained a fringe technological curiosity, but they have quickly become part of everyday online life. Over the past year, their quality and prevalence have surged, and to ordinary viewers or listeners, many deepfake videos and audio clips are now indistinguishable from authentic content. Generative AI tools such as X’s Grok chatbot have recently drawn widespread media scrutiny for being used to produce non-consensual explicit deepfakes, underscoring the serious risks posed by these AI tools and their rapid spread across social media.

As these deepfakes become cheaper, faster, and easier to produce, lawmakers are struggling to keep pace with the harms they enable. Can existing regulatory frameworks keep up with a technology built to undermine the very notion of authenticity?

The need for new regulation

Even as deepfakes become more common, the law often falls back on outdated frameworks. Existing offences such as fraud, harassment, and defamation can address some harms after they occur, but those laws were never designed for synthetic media created at this scale. As a result, enforcement often comes too late and in piecemeal fashion.

Company executives, voters, and private individuals have already fallen victim to deepfakes, whether through financial scams, election disinformation campaigns, or non-consensual explicit content. Governments are beginning to recognise that targeted regulation is no longer optional. The challenge, however, lies in designing laws that prevent abuse while preserving innovation and freedom of expression.

Italy’s pioneering deepfake legislation

Several countries are introducing deepfake laws: Denmark gives individuals rights over their likeness; France and the UK require transparency and penalise non‑consensual content, including sexual deepfakes; and South Korea and the US similarly criminalise non‑consensual deepfakes. This reflects an overall global trend towards stricter regulation.

However, Italy has emerged as a notable case study in this evolving legal landscape. In October 2025, it became the first European Union (EU) member state (and one of the first countries globally) to enact a comprehensive national law explicitly addressing deepfakes and synthetic media. This law, Law No. 132/2025, marks a milestone: anyone who maliciously creates or distributes deepfake images, videos, or audio that cause unjust harm can now be held criminally accountable.

This law goes beyond high-level AI principles; it directly targets the unlawful creation and spread of deceptive deepfake content, making it a criminal offence for the first time in Italy, punishable by one to five years’ imprisonment when serious harm occurs. The legislation emphasises transparency, human oversight, and accountability, particularly in high-risk sectors such as justice, healthcare, and education. It also works to strengthen protections for vulnerable groups, including children, with age-based safeguards and access controls.

Importantly, in Italy, platforms that host user content must now assess whether their site could be used to spread deepfakes. They may need to add detection tools, verify sources, and act quickly to take down harmful content. Anyone who shares an unauthorised deepfake, whether it depicts a real person or falsely represents a company, can also face criminal charges. News organisations, marketing firms, political campaigns, and social media platforms are particularly exposed because of the speed at which they publish content, and this law is expected to drive stricter oversight across these sectors.

When the law struggles to keep up

Despite recent advances, including Italy’s example, deepfakes remain exceptionally difficult to regulate. The technology is evolving far faster than legislation, and generative AI tools move easily across borders. Banning harmful content in one country does little to prevent its creation or spread elsewhere.

There are also practical challenges to enforcement. As generative AI models become more sophisticated, identifying deepfakes with certainty becomes increasingly difficult, and attributing responsibility is often impossible, particularly when content is shared anonymously or through decentralised platforms. Even when wrongdoing is clear, pursuing legal remedies can be slow, especially where perpetrators operate across multiple jurisdictions.

Globally, approaches to regulating deepfakes remain fragmented. While the EU is pursuing a comprehensive, risk-based regime, other jurisdictions, including the UK, favour narrower or sector-specific rules. This lack of a unified international standard sustains legal uncertainty and, as a result, facilitates the dissemination of harmful deepfakes.

Regulation is not enough: stay alert, be prepared

Although progress has been made, existing regulatory frameworks remain insufficient relative to the speed and scale of the deepfake threat. Even significant measures, such as Italy’s legislation, are unlikely to fully address the risks on their own.

Regulation is essential, but it is only one piece of the puzzle. Cultivating awareness, building preparedness, and strengthening media literacy are just as critical to staying ahead of the threat.

The risks are particularly acute for individuals in prominent or decision-making roles. Executives, politicians, and other public figures are prime targets for deepfakes because their likeness carries inherent credibility. A manipulated video or audio recording can damage reputations long before its authenticity is questioned, and the consequences can be lasting. 

A practical first step is to conduct a reputational audit that identifies publicly available information – such as high-quality images, videos, or audio – that malicious actors could exploit to create such synthetic media. Anticipating these risks, particularly how an online footprint can be exploited, is crucial for preparedness in an ever-changing digital environment.

Navigating a world where seeing is not believing

Deepfakes challenge one of the most basic assumptions of the digital age: that seeing is believing. They are not going away anytime soon; if anything, they will only become more sophisticated and harder to detect. Therefore, the law must evolve alongside this technology, supported by international cooperation and informed organisational practices. In the meantime, awareness and preparedness remain essential.

Policing what is real and what is not requires not only stronger regulation but also a collective understanding and active management of a reality in which authenticity can no longer be taken for granted.
