Disinformation risk in the new online information landscape

March 2024
By Carys Whomsley

The online information landscape has long been dominated by tech giants including Google, Meta and Twitter. But in recent years, a new generation of information platforms has gathered momentum. As people increasingly turn to short-form video and generative AI platforms to search for information online, the shape of the information landscape is evolving.

In response, the incumbent platforms are competing to remain relevant and user-friendly, rapidly incorporating generative AI tools into their own offerings and enabling content creators to push AI-generated material out to their audiences at pace.

AI-generated content and the capacity to mislead

Yet AI-generated content, be it written, audio or visual, has the power to mislead at scale. Synthetic voices, videos and images have become almost indistinguishable from the real thing – a phenomenon that carries alarming implications in the context of political and social propaganda and conspiracy theories. A recent study by Stanford University found that written content generated by GPT-3 was nearly as persuasive to US audiences as content from real foreign covert influence campaigns – with human-machine teaming strategies producing even more convincing results.

AI-generated images are similarly persuasive: recent research by Northeastern University’s AI Literacy Lab suggests that around half of the US public cannot tell the difference between real and AI-generated imagery.

In the past year, we have also witnessed numerous events showing just how powerful synthetic video and audio can be. Audio deepfakes are widely reported to have been a prominent feature of the most recent elections in Slovakia, Pakistan and Bangladesh. Highly sophisticated video deepfake campaigns are also becoming increasingly easy to produce – as demonstrated by a finance worker in Hong Kong, who was duped into paying out US$25 million to fraudsters using deepfakes to pose as the company’s CFO in a video conference call.

Monetising disinformation

Adding to the risks surrounding the new online information landscape is the potential for disinformation to be disseminated at scale and monetised through social media platforms’ content creator programs.

X’s monetisation of posts through its Twitter Blue subscription service is reported to have fuelled disinformation on the platform. Twitter Blue allows paying subscribers with over five million tweet impressions (views) per month to earn a share of advertising revenue from their post threads. Social media disinformation experts have warned that this creates an economic incentive to amplify emotionally charged content that will generate views, even when that content is fake or misleading. The recent proliferation of misleading posts relating to the Israel-Hamas conflict on the platform adds weight to this claim.

In response, Elon Musk has announced that content creators whose posts on X are corrected by the Community Notes feature – a crowd-sourced fact-checking programme through which X’s users can flag posts that may contain disinformation – will no longer be able to monetise those posts. But some question whether this is a case where prevention would be better than cure, calling for stricter measures to ensure disinformation doesn’t get published in the first place.

Similarly, Media Matters has reported that TikTok’s Creativity Program, which enables creators with large followings to be paid for videos of at least 60 seconds, may have driven an increase in conspiracy theory content, which performs strongly in engagement-driven algorithms and can be highly profitable. According to Media Matters researchers, this may be encouraging financially motivated users to churn out AI-generated conspiracy theory content at scale, with some such videos reportedly reaching tens of millions of views on TikTok.

YouTube, too, has reportedly been exploited by conspiracy theorists seeking to profit from harmful content. The Center for Countering Digital Hate reported that while YouTube has banned “old” forms of climate denial, content creators have moved on to monetising “new” forms of climate-related disinformation, exploiting loopholes in the platform’s policies.

Elections risk and platform restrictions

UK Home Secretary James Cleverly recently spoke of fears that malign actors working on behalf of hostile states could use deepfakes to hijack the general election – a fear echoed by political analysts worldwide as voters in over 60 countries head to the polls in 2024. We have been closely monitoring the use of AI to disseminate electoral disinformation this year, which you can read more about on our LinkedIn page.

To mitigate this risk, 20 technology companies, including TikTok, X and Microsoft, recently signed a voluntary pledge to take steps – including collaborating on detection tools – to help prevent deceptive AI content from disrupting voting in the 2024 elections.

In a further attempt to curb the misuse of their generative AI services, OpenAI, Google, Meta, Anthropic and other key AI platform developers have been placing restrictions on the use of their platforms to create political content. Google has barred its chatbot from returning responses for certain election-related queries, while Meta has banned political advertisers from using its generative AI ad tools.

Questions remain over the efficacy of such measures. Many of the restrictions centre on the use of certain words in prompts, such as those relating to voting, which could prevent the platforms from supporting legitimate and much-needed voter engagement campaigns. And in the new information order, where bad actors can already exploit open-source or cheaply available generative AI tools, such a heavy-handed response may unwittingly stop mainstream platforms from being used to produce and disseminate positive countermeasures.
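To illustrate why keyword-based restrictions can be heavy-handed, here is a minimal, hypothetical sketch of a prompt filter of the kind described above. The blocklist, function name and example prompts are our own illustrative assumptions, not any platform’s actual implementation.

```python
# Hypothetical sketch of a naive keyword-based prompt filter.
# The blocklist, names and prompts are illustrative assumptions,
# not any platform's actual implementation.

BLOCKED_TERMS = {"vote", "voting", "ballot", "election", "polling station"}

def is_blocked(prompt: str) -> bool:
    """Return True if the prompt contains any blocked election-related term."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

# A plainly malicious prompt is caught...
print(is_blocked("Write a fake news story claiming the election was rigged"))  # True

# ...but so is a legitimate voter-engagement prompt, because the filter
# matches words rather than intent.
print(is_blocked("Draft a flyer reminding first-time voters how to register to vote"))  # True
```

Because the filter matches words rather than intent, the legitimate voter-registration prompt is blocked just as readily as the malicious one – the false-positive problem described above.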

As the new online information landscape of social media and AI chatbots becomes established, it will nonetheless be vital for platform owners to maintain the integrity of the services people use to gather and share information. Failing to do so carries significant risks across the social and political spheres, as disinformation threatens to disrupt democracy and society.
