
Disinformation then and now: what can historic campaigns tell us about future risks?

June 2025
By Augustine McMahon

When we use terms such as disinformation and misinformation today, they tend, deservedly, to carry a particularly digital association. Media reporting and social media channels are so saturated with discussion of manipulation of the online information space that it is easy to forget that its offline counterpart has long been subject to deliberate and accidental influence too. By reflecting on a pre-digital disinformation campaign, we can draw striking parallels between the tactics deployed then and those we see in today's digital landscape. We may also gain an insight into how disinformation methods will develop in the age of AI.

In July 1943, the Allies began planning a covert military deception (MILDEC) operation codenamed Bodyguard. The goal of Bodyguard? To mislead the Nazi high command as to the time and place of the Allied invasion of mainland Europe. One of Bodyguard's key components was Operation Fortitude South, a disinformation campaign that sought to convince the Nazis that the invasion of north-west Europe would land in the Pas-de-Calais region.

Operation Fortitude South was highly effective. Historians argue that it played a significant part in the success of the Normandy landings, both by leading the Nazis to concentrate key defences in the Pas-de-Calais region, away from Normandy, and by deterring the Nazi high command from committing reinforcements against the initial landings. Three aspects of Operation Fortitude South, however, despite belonging to the pre-digital age, are tactics we can see, or may soon see, echoed in online disinformation campaigns today.

The art of the unreal

Operation Fortitude South saw the fabrication of the First United States Army Group (FUSAG), an entirely fictional formation created by the Allies to give the impression that an invasion force was assembling in south-east England, opposite the Pas-de-Calais region. FUSAG's existence was reported to the Nazi high command through traditional MILDEC channels such as spies and double agents, but the Allies also sought to create artificial media that would bolster the reports received in Berlin.

Exploiting the skills of the early film industry, the Allies called up set designers and prop technicians from the Shepperton film studios to create cheap dummy military equipment that would look believable on camera. FUSAG came to life in the form of dummy tanks, aircraft, and landing craft made of inflatable rubber and canvas stretched over wooden frames. It even had its own fake fuel depot. In an era before AI generation, the Allies had created their own synthetic media, which to Nazi reconnaissance planes appeared authentic.

The use of synthetic media in today's disinformation campaigns is rife. AI tools allow users to generate synthetic images in a matter of seconds from a short prompt, while existing media can be manipulated through photo, video, or audio editing software. In 2024, the Institute for Strategic Dialogue (ISD) reported that the pro-CCP disinformation network ‘Spamouflage’ was sharing a significant volume of original AI-generated images on X. These images pushed narratives about urban decay, police brutality, gun violence, and the fentanyl crisis in the United States, and, the ISD argued, sought to unsettle Americans before the 2024 election and foster a sense of division.

While the specific purposes of these manipulated media may differ, the parallel is clear: in disinformation campaigns, creating synthetic media to accompany a narrative is an effective way to enhance that narrative's credibility and apparent authenticity.

Open-source misinformation

Thanks to the Shepperton team, FUSAG had military equipment (albeit inflatable); however, it also needed a commander. For this the Allies chose US General George Patton. Patton was well known to the Nazis, having recently led the successful Allied campaign in North Africa, and as such made a believable figurehead for the invasion of Europe. The Allies then set out to generate open-source information that would lead the Nazis to believe he commanded an invasion force. Patton, accompanied by photographers and news teams, inspected FUSAG's inflatable equipment and even gave speeches before imaginary infantry units. As this content made its way into media reporting and other open sources, it was picked up by the Nazi high command, further fuelling the belief that FUSAG was the true invasion force.

Since at least 2016, a network named ‘Endless Mayfly’, believed to originate in Iran, has sought to create and amplify divisive and inaccurate content online. According to a 2019 report by Citizen Lab, the network's tactics involved creating inauthentic personas to amplify and promote inaccurate content. These personas then sought to engage journalists directly through social media channels, building relationships in the hope of prompting further publication of false and misleading information. Citizen Lab suggests that Endless Mayfly's activity led to incorrect reporting by legitimate media outlets. The network's tactics recognise the importance of open-source material in fuelling and disseminating disinformation narratives.

Signals intelligence and AI

To further convince the Nazis of FUSAG's credibility, the Allies brought in a full US Army signals unit, which worked around the clock sending false radio messages reporting training exercises and mock beach landings through Allied communication channels. The Nazis analysed these false signals and drew an incorrect picture of the size and location of the invasion force. It was the sheer volume of these signals, with operators working long shifts, as well as their origin in south-east England, that contributed to the deception. It is here that we can glimpse a key role AI may play in the future of disinformation campaigns.

A 2023 study published in the Harvard Kennedy School Misinformation Review argues that AI's capacity to increase the quantity of mis- and disinformation will not produce a meaningful change in how misinformation is diffused and consumed. The authors reason in terms of supply and demand, claiming that increases in the supply of misinformation will only matter if there is unmet demand, a demand which they do not believe exists. Instead, they argue, there is already an over-supply of misinformation that is simply not being consumed by online users.

There is, however, a scenario in which this conclusion does not hold. The authors assume that online users know in advance what they are looking for; that is, that “what makes misinformation consumers special is not that they have privileged access to misinformation but traits that make them more likely to seek out misinformation”. This does not match another aspect of our experience on social media platforms: users are regularly confronted with content they did not actively seek out. Dramatically increasing the volume of disinformation content, as AI will undoubtedly make possible, increases the likelihood of an individual coming into contact with a given narrative, and with it the probability that demand for that narrative spikes, potentially at the expense of genuine information. Moreover, as Operation Fortitude South demonstrated, sometimes the sheer volume of noise around a narrative is enough to attract attention and mislead. AI's capacity to dramatically amplify the volume of a given disinformation narrative should therefore not be ignored.
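To make the volume argument concrete, here is a deliberately simple toy model. The per-item probability and the independence assumption below are purely hypothetical illustrations, not figures from the study above: if each individual piece of content has only a tiny chance of surfacing in a given user's feed, the chance of at least one exposure still climbs steeply as the number of pieces grows.

# A minimal sketch of the exposure argument. Assumes (hypothetically)
# that each of n pieces of content independently has probability p of
# appearing in a given user's feed; neither number comes from the
# research cited above.

def exposure_probability(n_items: int, p_per_item: float) -> float:
    """P(at least one exposure) = 1 - (1 - p)^n under independence."""
    return 1 - (1 - p_per_item) ** n_items

# Even with a one-in-100,000 chance per item, scaling the volume of
# content ten-thousandfold moves exposure from negligible to near-certain.
for n in (100, 10_000, 1_000_000):
    print(f"{n:>9,} items -> P(exposure) = {exposure_probability(n, 1e-5):.4f}")

Running this prints exposure probabilities of roughly 0.001, 0.095, and close to 1.0 respectively. Whether exposure then translates into belief or demand is a separate question; the sketch only illustrates why volume alone changes the odds of contact.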

This brief look back shows that, while the channels through which disinformation spreads may have changed over time, the tactics and techniques used are fundamentally similar. History repeats itself, and reflecting on it may allow us to anticipate our own future threats.
