
Ask the Expert: Carys Whomsley, Director of Digital Risk

December 2023
 by Carys Whomsley

Digitalis’s Digital Risk Director, Carys Whomsley, takes a few minutes to answer some questions on the hot topic of technology’s increasing impact on democracy, in light of next year’s elections. She discusses the risks presented by AI to democracy and social stability, and how people can navigate the information landscape in the coming year.

1. We’ve been hearing lately about the increasing threat posed by artificial intelligence to democracy and social stability. What are your thoughts on this, and what trends are you seeing in this area?

2024 is set to be the biggest election year in history, with 40 countries scheduled to hold national elections, alongside elections to the European Parliament. The year will reveal the global state of democracy after years of increased polarisation, disinformation and distrust in the information environment, and some shock electoral victories around the world this year. Many of the AI risks we previously predicted materialised during this year’s elections in Slovakia and Türkiye, with a rise in the exploitation of deepfakes to spread disinformation on social media platforms.

Changes in search behaviour, platform moderation approaches and rapid advances in synthetic content creation tools have also led to the proliferation of an unprecedented amount of disinforming and misleading content online. This year’s examples include the exploitation of video and audio deepfakes to influence elections, automated and sophisticated content generation for botnets, and the use of personalised large language models (LLMs) by extremist groups. LLMs may also be manipulated in other ways to influence election outcomes, such as through corrupted fact-checkers designed to appear neutral while sharing specific disinformation.

Meanwhile, the momentum behind efforts to moderate the spread of disinformation through fact-checking platforms has been waning for years, as, it seems, has many social media platforms’ interest in combatting false narratives. On X, for example, a new revenue-sharing feature pays X Premium users whose posts attract high numbers of impressions, which may encourage users to share inflammatory synthetic content for cash. The EU, for its part, has recently demanded that Meta and TikTok detail their efforts to curb disinformation and illegal content on their platforms.

2. What can we do as internet users to navigate this information landscape in the coming year?

As much of our online activity takes place on platforms that are rife with disinformation and deceptively easy to manipulate, individuals need to assess their information sources carefully to ensure they are not deceived.

It is important to be aware of how easy it is to inadvertently spread misinformation: content that is misleading or false, but shared in the belief that it is true. Trusted friends and accounts may unknowingly share false information, so it is crucial to fact-check any content that provokes a strong visceral or emotional response, such as anger, fear or compassion; strong emotion lowers our ability to apply critical or contextual thinking. If a post plays on sensitive or divisive themes, investigate it further before believing or resharing it.

To investigate potentially misleading content, check other sources for it and see how it has proliferated online. Images and video in particular are often mis-captioned, manipulated or taken out of context from earlier, unrelated events, and disinforming synthetic content often originates from accounts purpose-built to push specific narratives, amplified by networks of bots.
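For technically minded readers, here is a minimal sketch of one such check: perceptual hashing scores how visually similar two images are, even after light edits, and so can flag a “news” image recycled from an earlier event. The Python below is purely illustrative rather than any particular investigator’s workflow; it assumes the open-source Pillow and imagehash libraries are installed, and the filenames are placeholders.

```python
# Illustrative sketch: test whether a "new" image is a recycled or lightly
# edited copy of an earlier, unrelated photo.
# Assumes `pip install pillow imagehash`; filenames are hypothetical.
from PIL import Image
import imagehash

claimed = imagehash.phash(Image.open("claimed_event.jpg"))
reference = imagehash.phash(Image.open("earlier_photo.jpg"))

# Subtracting two perceptual hashes gives a Hamming distance;
# a small distance means the images are visually near-identical.
distance = claimed - reference
print(f"Hamming distance: {distance}")
if distance <= 8:  # heuristic threshold, not a definitive test
    print("Likely the same underlying image: check its original context.")
else:
    print("The images differ substantially.")
```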

While you can no longer always believe your eyes and ears, it is important not to approach every piece of information with blanket distrust; instead, learn to make your own assessments of the credibility of claimed events. Wholesale scepticism carries its own risk: public figures who are genuinely recorded carrying out a misdeed can benefit from the “liar’s dividend” by claiming that the recording is a deepfake.

Developing a critical mindset, checking the source of information, examining the evidence, and applying your own contextual knowledge to claims made during elections will all go a long way towards helping you filter what’s real from what’s fake.

3. Are there any benefits that AI could bring to increase democratic engagement?

There is nothing inherently bad about AI and social media, and millions of people gain real benefits from their use every day. To help with the fight against disinformation, we need to meet our audiences where they already are.

Social media is a vector for information sharing, and AI is a tool for content production. Combined, they form a powerful toolbox for effective education and engagement. These tools are increasingly easy to use, so even people with limited technical skills can now produce strong outreach campaigns at scale in a short space of time. LLMs in particular can help craft responses to disinformation campaigns as they are identified, tailored to specific audiences on different platforms. If we fail to appreciate and act on these opportunities, we risk falling behind, allowing hostile actors to manipulate online platforms to further their goals.

AI models that produce synthetic content can also support online campaigns that inform and engage citizens about democratic processes rapidly and at scale. This frees up resources and time for work requiring human interaction and thought, such as fact-checking phone lines and education programmes.

4. Finally, how can AI play a role in disinformation investigations?

While the most popular chatbots are not particularly good at fact-checking, advances in personalised chatbots could lead to their use as fact-checking models: trained on known disinformation narratives and misconceptions, and able to provide accurate information on candidates’ and political parties’ stated policies.
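To make the idea concrete, here is a deliberately simplified sketch (our illustration, not a description of any existing product) of the retrieval step such a model would need: matching an incoming claim against a curated corpus of known false narratives. It assumes Python with scikit-learn installed; the narratives and threshold are invented placeholders, and a real system would pair any match with verified corrective information.

```python
# Minimal sketch: match a claim against known disinformation narratives.
# Assumes `pip install scikit-learn`; corpus and threshold are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

KNOWN_NARRATIVES = [
    "Candidate A was recorded admitting to election fraud.",  # known deepfake
    "Polling stations in region X will close a day early.",   # known false claim
]

def match_claim(claim, threshold=0.3):
    """Return the closest known false narrative, or None if nothing matches."""
    vectors = TfidfVectorizer().fit_transform(KNOWN_NARRATIVES + [claim])
    scores = cosine_similarity(vectors[-1], vectors[:-1])[0]
    best = scores.argmax()
    return KNOWN_NARRATIVES[best] if scores[best] >= threshold else None

print(match_claim("I heard polls in region X are shutting early"))
```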

In addition, when it comes to monitoring, understanding, predicting and countering disinformation campaigns, AI models fill investigation gaps, allowing us to map, monitor and trace the networks polluting the information environment with influence operations. This includes automating the analysis of disinformation campaigns, including video and audio content in foreign languages, to uncover their origin and the modus operandi of the groups launching them, even when those groups attempt to evade detection.
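As a flavour of the network-mapping side (a toy illustration of the general technique, not Digitalis’s own tooling), the sketch below builds a directed graph of hypothetical “who amplified whom” relations and surfaces the accounts the network exists to promote, a natural starting point for tracing a campaign back to its origin. It assumes Python with the networkx library installed; all account names and edges are invented.

```python
# Toy sketch: tracing an amplification network with networkx.
# Assumes `pip install networkx`; account names and edges are invented.
import networkx as nx

# Each pair means "the first account amplified (reshared) the second".
amplifications = [
    ("bot_001", "seed_account"),
    ("bot_002", "seed_account"),
    ("bot_003", "seed_account"),
    ("bot_003", "bot_001"),
    ("organic_user", "seed_account"),
]

G = nx.DiGraph(amplifications)

# High in-degree centrality flags the accounts the network is built to boost.
for account, score in sorted(nx.in_degree_centrality(G).items(),
                             key=lambda kv: -kv[1]):
    print(f"{account}: {score:.2f}")
```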

Many other tools can also be used to investigate disinformation campaigns, such as facial recognition technology, which can identify the key drivers of hostile networks. With such a range of tools available to counter the misuse of social media and AI, I hope to see effective collaboration between the public, private and third sectors to reduce the threat to electoral democracy ahead of the 2024 elections.
