When we use terms such as disinformation and misinformation today, they tend, and deservedly so, to carry a particularly digital association. Media reporting and social media channels are so saturated with reports and discussion of manipulation of the online information space that it is easy to forget that its offline counterpart has long been the subject of deliberate and accidental influence too. By reflecting on a pre-digital disinformation campaign, we can draw significant parallels between the tactics deployed then and those we see today in the modern, digital landscape. Moreover, we may gain an insight into how disinformation methods will develop in the age of AI.
In July 1943, the Allies began a covert military deception (MILDEC) operation codenamed Bodyguard. The goal of Bodyguard? To deceive and mislead the Nazi high command as to the time and place of the Allied invasion of mainland Europe. One of the key narratives of Bodyguard was Operation Fortitude South, a disinformation campaign which would seek to convince the Nazis that the invasion of North-West Europe would land in the Pas-de-Calais region.
Operation Fortitude South was a highly effective campaign. Historians argue that it played a significant role in the success of the Normandy landings, causing the Nazis to concentrate key defences in the Pas-de-Calais region rather than Normandy and preventing the Nazi high command from committing reinforcements against the initial landings. There are, however, three aspects of Operation Fortitude South which, despite belonging to the pre-digital age, are echoed, or may soon be echoed, in online disinformation campaigns today.
The art of the unreal
Operation Fortitude South saw the fabrication of the First United States Army Group (FUSAG), an entirely fictional formation created by the Allies to give the impression that an invasion force was being assembled in south-east England, opposite the Pas-de-Calais region. While FUSAG’s existence was reported to the Nazi high command through traditional MILDEC channels such as spies and double agents, the Allies also sought to create artificial media to bolster the reports received in Berlin.
Exploiting the skills of the early film industry, the Allies called up set designers and prop technicians from the Shepperton film studios to create cheap dummy military equipment that would look believable on camera. FUSAG came to life in the form of dummy tanks, aircraft, and landing craft made of inflatable rubber and of canvas stretched over wooden frames. It even had its own fake fuel depot. In an era before AI generation, the Allies had created their own synthetic media, which to Nazi reconnaissance planes appeared authentic.
The use of synthetic media in today’s disinformation campaigns is rife. AI tools allow users to generate synthetic images in a matter of seconds from a short prompt, while existing media can be manipulated through photo, video, or audio editing software. In 2024, the Institute for Strategic Dialogue reported that the pro-CCP disinformation network ‘Spamouflage’ was sharing a significant volume of original AI-generated images on X. These images pushed narratives relating to urban decay, police brutality, gun violence, and the fentanyl crisis in the United States and, the report argued, sought to unsettle Americans before the 2024 election and create a sense of division.
While the specific purposes of these manipulated media may differ slightly, the parallel is clear: in disinformation campaigns, creating synthetic media to accompany a narrative is an effective way to enhance its credibility and apparent authenticity.
Open-source misinformation
Thanks to the Shepperton team, FUSAG had military equipment (albeit inflatable); however, it also needed a commander. For this the Allies chose US General George Patton. Patton was well known to the Nazis, having recently led the successful Allied campaign in North Africa, and as such was a believable figurehead for the invasion of Europe. The Allies then sought to generate open-source information which would further convince the Nazis that he was overseeing an invasion force. Patton, accompanied by photographers and news teams, inspected FUSAG’s inflatable equipment and even gave speeches before imaginary infantry units. As this content made its way into media reporting and other open sources, it was picked up by the Nazi high command, further fuelling the belief that FUSAG was the true invasion force.
Since at least 2016, a network named ‘Endless Mayfly’, believed to originate in Iran, has sought to create and amplify divisive and inaccurate content online. According to a 2019 report by Citizen Lab, the network’s tactics involved creating inauthentic personas to amplify and promote inaccurate content. These personas then sought to engage journalists directly through social media channels, building relationships in the hope of prompting further publication of false and misleading information. Citizen Lab suggests that Endless Mayfly’s activity led to incorrect media reporting from legitimate outlets. Endless Mayfly’s tactics recognise the importance of open-source material in fuelling and disseminating disinformation narratives.
Signals intelligence and AI
To further convince the Nazis of FUSAG’s credibility, the Allies brought in a full US Army Signals unit who worked around the clock to send false radio messages, reporting training exercises and mock beach landings through Allied communication channels. These false signals were analysed by the Nazis and gave them an incorrect idea as to the size and location of the invasion force. It was the sheer volume of these signals, with operators working long shifts, as well as their location in South East England, which contributed to the deception. It is at this point that we may be able to glimpse a key role AI will play in the future of disinformation campaigns.
A 2023 study in the Harvard Kennedy School Misinformation Review argues that AI’s capability to increase the quantity of mis/disinformation will not produce a meaningful change in the diffusion and consumption of misinformation. The authors argue along the lines of supply and demand, claiming that increases in the supply of misinformation will only translate into greater consumption if there is an unmet demand, a demand which they do not believe exists. Instead, they argue that there is already an over-supply of misinformation which is simply not being consumed by online users.
However, there is a scenario in which this conclusion does not apply. The authors assume that online users know in advance what they are looking for; that is, that “what makes misinformation consumers special is not that they have privileged access to misinformation but traits that make them more likely to seek out misinformation”. This does not square with another aspect of our experience of social media platforms: users are regularly confronted with content they have not actively sought out. Dramatically increasing the volume of disinformation content, as AI will undoubtedly allow, increases the likelihood of an individual encountering a given narrative, and in turn the probability that demand for that narrative spikes, potentially at the expense of genuine information. Moreover, as Operation Fortitude South demonstrated, sometimes the sheer volume of noise around a narrative is enough to attract attention and mislead. AI’s capacity to dramatically increase the volume of a given disinformation narrative should therefore not be ignored.
This glance at the past shows that, while the channels through which disinformation spreads may have changed over time, the tactics and techniques used are fundamentally similar. History repeats itself, and reflecting on it may allow us to anticipate our own future threats.