Generative AI – a licence to libel?

September 2023
By David Engel

Any individual or company seeking to protect their reputation from false allegations in the online world already has a number of issues to contend with. But a new threat has now emerged in the shape of generative AI, and the use of content created by chatbots such as ChatGPT.

In June 2023, for example, Georgia radio host Mark Walters started legal proceedings in the US against OpenAI, the company behind ChatGPT, after the bot answered a query from a journalist by stating (incorrectly) that Mr Walters had been sued for “defrauding and embezzling funds”.

Meanwhile, a law professor at George Washington University was falsely accused by ChatGPT of sexually assaulting and harassing students, and an Australian mayor of having done jail time for bribery.

In its recent White Paper on AI, the UK Government made clear that it has no plans to create new AI-specific legal rights, so victims of an AI libel are left with the same rights and remedies as they would have in relation to a defamatory newspaper article or website.

The technology therefore throws up some interesting issues which will doubtless play out in the Royal Courts of Justice in the coming years.

Who gets sued?

Both the ‘author’ and the ‘publisher’ of a defamatory statement may be sued. The company behind an application like ChatGPT is arguably both author and publisher. But potentially so is the journalist, blogger or private individual who uses ChatGPT to provide content that turns out to be untrue, or is deliberately fabricated, such as deepfake pictures and videos.

Defamation is a ‘strict liability’ tort, meaning that intention is immaterial. So it would be no defence for the publisher of inaccurate AI-generated content to claim it did not realise the content was untrue.

And ‘publisher’ for these purposes is not confined to commercial publishers; it could be virtually any business or individual involved in circulating a libellous allegation, e.g. by retweeting (or re-Xing) an AI-generated deepfake.

What makes a statement defamatory?

If an AI-generated ‘statement’ makes or implies an allegation defamatory of a third party, which causes, or is likely to cause, serious harm to her or his reputation, the victim will be able to sue in defamation. That would include, for example, a deepfake picture or video.

There is no room in defamation law for legalistic or literal interpretations of what is said. The legal test is what it would be understood to mean to the average reasonable reader (or viewer). As such, a deepfake video ridiculing or denigrating a person will be actionable if viewers do not realise that it is fake.

Does it matter in the real world?

To be actionable, a defamatory statement needs to be published to one or more third parties. ChatGPT can say what it likes about you, to you. It’s only when there is publication to a third party that the legal line is crossed.

But it’s not hard to think of scenarios where that may happen. For example, private providers of due diligence checks, and indeed compliance teams in banks and other financial institutions, sometimes rely on unverified media stories and online allegations to ‘red flag’ individuals, who may then find it difficult to do business or may have their banking facilities withdrawn.

That is already a live issue, but if those due diligence platforms are now tempted to use generative AI for the same task, the problem could become much bigger: anyone incorrectly red-flagged by AI-driven due diligence would potentially have a claim. And where that red flag has caused them financial loss (for example, because a counterparty has withdrawn from a planned transaction as a result), they could be entitled to very substantial compensation.

Can AI platforms disclaim liability for defamation?

ChatGPT users are shown a warning that the content generated may include “inaccurate information about people, places, or facts”. Similarly, Google has stated that its Bard chatbot “is not yet fully capable of distinguishing between what is accurate and inaccurate information”.

Such disclaimers may help to protect an AI platform from a legal claim by a user who has relied on information that turns out to be wrong, but they are unlikely to provide much cover in the event of being sued by a third party who has been libelled.

Viewed through the lens of a libel claim, such disclaimers start to look more like an admission of guilt.

David Engel leads the Reputation & Information Protection practice at law firm Addleshaw Goddard LLP.
