By Tom Stewart-Smith and Debasmita Chanda
The part played by influence operations in swaying opinion ahead of elections is a topic we discuss regularly at Digitalis, including at a recent round-table event at the Concordia Summit in New York in September 2023, on the sidelines of UNGA Week. Influence operations have been around almost as long as the internet itself, and even the infamous Cambridge Analytica scandal relating to the US presidential election dates back to 2016. But the recent rise of generative AI chatbots has made it easier than ever for convincing disinformation to be created and shared, and the forthcoming 2024 elections are almost certain to be affected by influence operations.
What are influence operations?
Online influence operations involve the manipulation of social media using disinformation or inauthentic activity, often by a company or government, to create an illusion of widespread support for, or disapproval of, a topic.
One example of a state-driven influence operation comes from India’s 2019 elections, where Time reported that the ruling BJP was using WhatsApp groups to spread political messaging. WhatsApp is a primary source of political information, and often of news more generally, for many of the 400 million Indians active on the app. BJP activists allegedly shared political propaganda and fake news in WhatsApp groups to garner support for the party and disapproval of the opposition. While India’s main opposition party was also reported to have used WhatsApp groups to spread its political messaging, the far greater resources at the BJP’s disposal are thought to have made its approach far more influential on Indian society.
WhatsApp’s privacy protections make it an attractive vehicle for spreading disinformation: messages are end-to-end encrypted, groups are private, and the platform limits data scraping, making it difficult for fact-checkers and the media to uncover the false allegations and harmful narratives that appear.
There are also examples of private companies running influence operations. In February 2023, The Guardian published an investigation into an Israeli outfit codenamed “Team Jorge” that was allegedly creating fake profiles on social media platforms to manipulate online activity. The group reportedly boasted that its main tool let clients build a 5,000-strong bot army: bots with complete social media profiles and background histories that could deliver mass messages and propaganda to swing public opinion during elections.
How AI chatbots are used in influence operations
The rise of generative AI has made it far easier to create convincing content for use in influence operations. Previously, bots would often repeat the same messages verbatim, making inauthentic activity relatively easy to spot – the sketch below shows the kind of near-duplicate check that catches them. Now, AI chatbots can be instructed to write masses of unique posts for bot accounts to publish. They can even be prompted to write in a certain style, or with typos, to make them seem more authentic.
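As an illustration of why verbatim repetition was easy to catch, here is a minimal sketch of a near-duplicate check. The account names and posts are invented for the example; production systems use far more scalable techniques (such as MinHash signatures) and many additional behavioural signals.

```python
from itertools import combinations

def shingles(text: str, n: int = 3) -> set[str]:
    """Normalise a post and break it into overlapping word n-grams."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two shingle sets (1.0 means identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_copy_paste_bots(posts: dict[str, str], threshold: float = 0.8) -> set[str]:
    """Flag accounts whose posts are near-duplicates of another account's."""
    sigs = {account: shingles(text) for account, text in posts.items()}
    flagged: set[str] = set()
    for (a1, s1), (a2, s2) in combinations(sigs.items(), 2):
        if jaccard(s1, s2) >= threshold:
            flagged.update({a1, a2})
    return flagged

# Two accounts posting the same canned message are caught; a unique,
# chatbot-written variant of the same talking point is not.
posts = {
    "account_a": "Vote for candidate X, the only one who will fix the economy!",
    "account_b": "Vote for candidate X, the only one who will fix the economy!",
    "account_c": "Candidate X gets it, jobs first. The other side has no plan.",
}
print(flag_copy_paste_bots(posts))  # {'account_a', 'account_b'}
```

The point of the example is its failure mode: once each bot posts a unique, AI-generated variant, this kind of textual fingerprinting returns nothing, and detection must fall back on behavioural signals such as posting times and account networks.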
To illustrate how easily such content can be produced, we instructed a popular AI chatbot to generate examples of tweets supporting Donald Trump or Joe Biden in the 2020 US election. We specified that the tweets should include spelling mistakes and avoid punctuation, to make them seem more authentic, and the chatbot duly obliged.
AI chatbots can also be prompted to imagine an election with the same criteria as an upcoming one, and to generate relevant content. We wrote a fictional scenario in which Ron DeSantis and Kamala Harris were facing off in the 2024 US election, included imaginary main issues to help build a picture, and asked an AI chatbot to write tweets supporting each candidate.
The chatbot can write 50 example posts in just a few minutes – at that rate, roughly 600 posts an hour, a hostile actor could plausibly generate well over 10,000 unique tweets in a single day for bots to post. It even generated its own hashtags, although we could equally have directed it to use existing ones.
We also found that the chatbot’s safeguards against discriminatory content could be easily circumvented. We gave LGBT issues as one of the concerns among voters supporting DeSantis, then got the chatbot to write supportive tweets. Framing the request in this way allowed the bot to write tweets representing LGBT-sceptic views, such as “LGBT idears shouldn’t be pushed on are families. DeSantis standing up for traditional values! #AmericaNeedsChange”.
The potential effect on future elections
AI can be a powerful tool for those who want to influence election outcomes by suppressing voter knowledge and manipulating opinion about candidates. It can be used to fabricate or distort information about candidates and their parties, and to skew key aspects of electoral campaigns so that one side gains an unfair advantage. All of this can be done in a way that lets the disinformation trend quickly on social media while still evading automated fact-checking systems. Such targeted manipulation predates generative AI: in the 2016 US presidential election, the data science firm Cambridge Analytica “rolled out an extensive advertising campaign to target persuadable voters based on their individual psychology” in a now infamous micro-targeting operation.
AI can also be used to aid phishing and hacking. Malicious actors can use AI to craft convincing phishing emails and to probe election infrastructure for weaknesses, leaving electoral systems and the officials who run them exposed. In extreme cases, attackers could gain access to sensitive data, which could then be published by those wishing to do harm.
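On the defensive side, many phishing checks are simple in principle. The sketch below flags sender domains that sit within a small edit distance of a trusted domain, one common tell of a phishing attempt; the domain names are hypothetical, and real email security combines many more signals.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two strings, via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # delete from a
                            curr[j - 1] + 1,             # insert into a
                            prev[j - 1] + (ca != cb)))   # substitute
        prev = curr
    return prev[-1]

def is_lookalike(domain: str, trusted: list[str], max_dist: int = 2) -> bool:
    """Flag domains a small edit away from a trusted one (but not identical)."""
    return any(0 < edit_distance(domain, t) <= max_dist for t in trusted)

# Hypothetical example: a lowercase "l" swapped for the digit "1".
print(is_lookalike("e1ections-board.gov", ["elections-board.gov"]))  # True
```

Checks like this catch only the crudest lookalikes; AI-written phishing is dangerous precisely because the email body itself no longer contains the clumsy language that older filters and trained users relied on.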
What measures are being taken to minimise electoral disinformation?
As governments and companies worldwide become more aware of the issue, measures are being taken to counter the threat. One recent mitigation from the social media platform X is “community notes”, whereby contributors can attach fact checks or other context to potentially misleading posts; a note is shown publicly once it has been rated helpful by contributors who normally disagree with one another, alerting users that what they are reading or viewing may not be entirely reliable.
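The idea of surfacing a note only when it “bridges” disagreement can be illustrated in a few lines of code. This is a deliberately simplified sketch, not X’s actual algorithm (which, per its public documentation, uses matrix factorisation over rating histories rather than fixed clusters); the clusters, ratings, and threshold here are illustrative assumptions.

```python
def note_is_shown(ratings: dict[str, bool], rater_cluster: dict[str, str],
                  threshold: float = 0.5) -> bool:
    """Show a note only if every viewpoint cluster finds it at least half helpful."""
    clusters: dict[str, list[bool]] = {}
    for rater, helpful in ratings.items():
        clusters.setdefault(rater_cluster[rater], []).append(helpful)
    return all(sum(votes) / len(votes) >= threshold for votes in clusters.values())

# Illustrative raters split into two viewpoint clusters.
rater_cluster = {"a": "left", "b": "left", "c": "right", "d": "right"}
partisan_note = {"a": True, "b": True, "c": False, "d": False}   # one side only
bridging_note = {"a": True, "b": True, "c": True, "d": False}    # crosses the divide
print(note_is_shown(partisan_note, rater_cluster))  # False
print(note_is_shown(bridging_note, rater_cluster))  # True
```

The design choice matters for influence operations: a purely majoritarian vote could be gamed by the very bot armies described above, whereas agreement across groups that usually disagree is much harder to manufacture.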
The European Union is aiming to become the world’s super-regulator in AI. In April 2021, the European Commission proposed the first EU regulatory framework for AI, which would classify AI systems used in different applications according to the level of risk they pose. By restricting the most harmful uses of AI, the framework could help stem the flow of disinformation during elections and give EU internet users greater confidence in the reliability of the information they see online.
In the UK, the Online Safety Bill completed its passage through Parliament in September 2023 and awaits Royal Assent, granting protections to both children and adults. The Bill states that “Category 1 services must empower their adult users with tools that give them greater control over the content that they see”, and “must remove content that is banned by their own terms and conditions”. However, the UK Government recently backed down from its attempts to use the Bill to access encrypted data, after WhatsApp threatened to withdraw its platform from the country.
And in the US, a bill introduced in Congress in 2022 “establishes a commission and requires other activities to support information and media literacy education and to prevent misinformation and disinformation”.
These measures show that the threat is being taken seriously by governments and media companies alike across the globe, but as technology continues to develop at speed, with new platforms and tools providing fresh opportunities for disinformation to spread, more will need to be done. With AI making disinformation more difficult to detect, there are serious implications for the forthcoming 2024 elections in a host of countries, including the US.