The online information landscape has long been dominated by tech giants including Google, Meta and Twitter. But in recent years, a new generation of information platform has gathered momentum. As people increasingly turn to short-form video and generative AI platforms to search for information online, the shape of the information landscape is evolving.
In response, the incumbent platforms are competing to stay relevant and user-friendly, developing at a rapid pace and incorporating generative AI tools into their own offerings, which enable content creators to push AI-generated material directly to their audiences.
AI-generated content and the capacity to mislead
Yet AI-generated content, be it written, audio or visual, has the power to mislead at scale. Synthetic voices, videos and images have become almost entirely indiscernible from the real thing – a phenomenon with alarming implications for political and social propaganda and conspiracy theories. A recent study by Stanford University found that written content generated by GPT-3 was nearly as persuasive for US audiences as content from real foreign covert influence campaigns – with human-machine teaming strategies producing even more convincing results.
AI-generated images are similarly persuasive: recent research by Northeastern University's AI Literacy Lab suggests that around half of the US public cannot tell the difference between real and AI-generated imagery.
In the past year, we have also witnessed numerous events showing just how powerful synthetic video and audio can be. Audio deepfakes are widely reported to have been a prominent feature of the most recent elections in Slovakia, Pakistan and Bangladesh. Highly sophisticated video deepfake campaigns are also becoming increasingly easy to produce – as demonstrated by a finance worker in Hong Kong, who was duped into paying out US$25 million to fraudsters who used deepfakes to pose as the company's CFO in a video conference call.
Monetising disinformation
Adding to the risks surrounding the new online information landscape is the potential for disinformation to be disseminated at scale and monetised through social media platforms’ content creator programs.
X's monetisation of posts through its Twitter Blue subscription service is reported to have fuelled disinformation on the platform. Twitter Blue allows paying subscribers with over five million tweet impressions (views) per month to earn a share of advertising revenue from their post threads. Social media disinformation experts have warned that this creates an economic incentive to amplify emotionally charged content that will generate views, even when that content is fake or misleading. The proliferation of disinformation relating to the Israel-Hamas conflict that has recently appeared on the platform adds weight to this claim.
In response, Elon Musk has announced that content creators whose posts on X are corrected by the Community Notes feature – a crowd-sourced fact-checking programme in which X's users can flag posts that may contain disinformation – will no longer be able to monetise those posts. But some question whether this is a case in which prevention would be better than cure, calling for stricter measures to ensure disinformation doesn't get published in the first place.
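Taken together, the rules described above amount to a simple eligibility check: a subscriber clears a monthly impressions threshold, and posts corrected by a Community Note are excluded from the revenue share. The following is a minimal sketch of that logic; the five-million-impressions threshold comes from the reporting above, but the data structure and function are hypothetical illustrations, not X's actual system.

```python
# Illustrative sketch of the monetisation rules described above.
# The 5M-impressions threshold is from the article; the Post
# structure and function names are hypothetical.
from dataclasses import dataclass

IMPRESSIONS_THRESHOLD = 5_000_000  # monthly views needed to qualify

@dataclass
class Post:
    impressions: int
    has_community_note: bool  # corrected by crowd-sourced fact-checking

def monetisable_impressions(posts: list[Post]) -> int:
    """Total impressions eligible for a revenue share: zero if the
    creator is below the threshold, otherwise all impressions on
    posts that have not been corrected by a Community Note."""
    total = sum(p.impressions for p in posts)
    if total < IMPRESSIONS_THRESHOLD:
        return 0  # below the eligibility bar: no revenue share
    return sum(p.impressions for p in posts if not p.has_community_note)
```

The incentive problem the experts describe is visible even in this toy version: raw impressions alone determine eligibility, so emotionally charged content that drives views pays out regardless of accuracy until a note is applied.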
Similarly, Media Matters has reported that TikTok's Creativity Program, which enables creators with high followings to be paid for 60-second videos they generate on the platform, may have led to an increase in conspiracy theory content that performs strongly in engagement-driven algorithms and can be highly profitable. According to Media Matters researchers, this may be encouraging financially motivated users to exploit the ease with which AI-generated material can be created at scale, with conspiracy-theory content produced this way reportedly reaching tens of millions of views on TikTok.
YouTube too has reportedly been exploited to allow conspiracy theorists to profit from harmful content. The Center for Countering Digital Hate reported that while YouTube has banned “old” forms of climate denial, content creators have moved on to monetise content with “new” forms of climate-related disinformation, exploiting loopholes in its policies.
Elections risk and platform restrictions
UK Home Secretary James Cleverly recently spoke of fears that malign actors working on behalf of malicious states could use deepfakes to hijack the general election – a fear echoed by political analysts worldwide, as voters in over 60 countries head to the polls in 2024. We have been closely monitoring the use of AI to disseminate electoral disinformation this year, which you can read more about on our LinkedIn page.
To mitigate this risk, 20 technology companies, including TikTok, X and Microsoft, recently signed a voluntary pledge committing to measures – including collaboration on detection tools – to help prevent deceptive AI content from disrupting voting in 2024 elections.
In a further attempt to curb the misuse of their generative AI services, OpenAI, Google, Meta, Anthropic and other key AI platform developers have been placing restrictions on the use of their platforms to create political content. Google has barred its chatbot from returning responses for certain election-related queries, while Meta has banned political advertisers from using its generative AI ad tools.
Questions remain about the efficacy of such measures. Many of the restrictions centre on the use of certain words in prompts, such as those relating to voting, which could prevent the platforms from supporting legitimate and much-needed voter engagement campaigns. In the new information order, where bad actors can already exploit open-source or cheaply available generative AI tools, such a heavy-handed response may unwittingly prevent these platforms from being used to produce and disseminate positive countermeasures.
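The bluntness of keyword-based restrictions is easy to illustrate: a naive prompt filter cannot distinguish a deceptive prompt from a legitimate voter-engagement one, because both contain the same trigger words. The following is a minimal sketch under that assumption; the word list and prompts are invented for illustration, and real platform filters are undoubtedly more sophisticated.

```python
# Naive keyword-based prompt filter of the kind such restrictions
# are reported to resemble. The blocked-term list is illustrative.
BLOCKED_TERMS = {"vote", "voting", "ballot", "election"}

def is_blocked(prompt: str) -> bool:
    """Refuse any prompt containing an election-related term."""
    words = {w.strip(".,!?").lower() for w in prompt.split()}
    return not BLOCKED_TERMS.isdisjoint(words)

# A deceptive prompt is blocked – but so is a legitimate
# voter-engagement prompt, the over-blocking problem described above.
```

Both "write a fake story about voting machines" and "draft a reminder encouraging students to register and vote" trip the same filter, which is precisely the trade-off between safety and legitimate civic use that the restrictions raise.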
As the new online information landscape of social media and AI chatbots becomes established, it will nonetheless be vital for the platform owners to maintain the integrity of the platforms that people are using to gather and share information. Failing to do so carries significant risks spanning the social and political spheres, as disinformation threatens to disrupt democracy and society.