
Crisis without comment: how AI fills the silence

December 2025
By Rudi Moghaddam

As the speed of digital information sharing threatens to outpace traditional crisis communications, large language models (LLMs) are intensifying the problem.

LLMs underpin the response systems of today’s most widely used AI chatbots – including ChatGPT, Gemini, and Perplexity – shaping how these tools answer fundamental questions such as “Who is…?” or “What is…?” about an individual or company. To answer such queries, these chatbots integrate real-time retrieval systems that reference current web content. If search results or public sources are incomplete, outdated, or unbalanced, the generated response will reflect and reinforce those gaps. In moments of heightened attention or crisis, when online content changes rapidly and query volumes surge, this dynamic accelerates and becomes particularly dangerous.
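At a high level, that retrieval step can be pictured as a two-stage pipeline: fetch whatever the live web currently says about the subject, then let the model compose an answer from it. The sketch below is purely illustrative; the function names (search_web, generate_answer) are hypothetical placeholders, not the internals of any named product.

```python
from dataclasses import dataclass


@dataclass
class Snippet:
    source: str  # e.g. a news article, a forum post, or the subject's own site
    text: str    # extracted passage passed to the model as context


def search_web(query: str) -> list[Snippet]:
    """Hypothetical placeholder for the real-time retrieval step."""
    raise NotImplementedError


def generate_answer(query: str, context: list[Snippet]) -> str:
    """Hypothetical placeholder for the LLM call that writes the answer."""
    raise NotImplementedError


def answer(query: str) -> str:
    snippets = search_web(query)
    # Whatever retrieval returns becomes the model's evidence base: if
    # authoritative, up-to-date sources are missing, the context is dominated
    # by third-party commentary or stale coverage, and the generated answer
    # inherits that skew.
    return generate_answer(query, snippets)
```

The point of the sketch is simply that the model does not verify the gap it is filling: the quality of its answer is bounded by the quality of whatever the retrieval step happens to find.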

Filling the void

The absence of authoritative digital content in the online record creates an information vacuum. LLMs, driven by their fundamental directive to respond to user queries, are prone to filling these vacuums with whatever information helps them answer the question at hand. That information can range from sensationalist news content and historical negative narratives to outright erroneous conflations, contextual errors, and the well-publicised issue of ‘hallucinations’. When crisis hits a digital profile that lacks factual, up-to-date information from the source itself, users asking LLMs about the story are presented with either what other people are saying or what the chatbots are erratically conflating.

Information vacuums become even more damaging when perceived gaps in the public record are filled with AI-generated falsehoods. The Coldplay concert ‘kiss cam’ scandal in July 2025 offers a recent example: false AI-generated statements circulated widely online, supposedly issued on behalf of various parties involved, including the executive caught on camera. The executive’s company later confirmed the statements were false and set the record straight, but not before they had been shared and engaged with as though genuine across multiple social media platforms, and picked up as article topics by tier-one media outlets, cementing them in the online record.

A further example came a few months later, after political activist Charlie Kirk was killed, when AI-enhanced images of a supposed suspect were disseminated online by civilians and even government officials, distorting public understanding of events and complicating law enforcement efforts.

Creating the crisis

Given these escalations, it is plausible that realistic AI-made statements, images, and reasoning could be presented in chatbot responses as verified fact. The idea of ‘model collapse’ describes the possible ‘Ouroboros’ effect of AI feeding on AI, muddying which chatbot responses can be relied upon and which cannot. The beginnings of this dynamic are already visible in the political sphere, where AI-generated content has created a party-political crisis rather than merely reacting to one. In October, the UK Member of Parliament (MP) for Mid Norfolk was the subject of an AI-generated deepfake video in which he appeared to criticise his own political party and announce his defection to another. The video had already been widely shared before it was debunked by the Local Democracy Reporting Service (LDRS) – the BBC-funded public service news agency – and reported to the police. However, the reputational risk remains: AI can now convincingly speak for individuals, even when they have not spoken at all.

Compounding these challenges is the erosion of institutional fact-checking infrastructure. Full Fact, one of the UK’s leading fact-checking organisations, recently announced the loss of support from one of its biggest funders, Google. Meta began reducing its content moderation workforce in January 2025 and continued in April 2025 by cancelling its content moderation contract with Canadian tech company TELUS, which affected its moderation centre in Spain. This August, social media platform TikTok announced plans to cut hundreds of content moderation positions at its London site, signalling a shift toward LLM-based moderation. As the volume of AI-generated content grows exponentially, the need for rigorous, independent verification becomes more urgent, not less. Without it, the digital record becomes increasingly vulnerable to distortion, and reputational damage becomes harder to contain.

The cost of silence

The proliferation of generative AI has not changed the fundamentals of crisis communications, but it has raised the stakes. Control over your digital footprint is no longer just about search rankings or social media sentiment; it is a more complex challenge: to shape the data that shapes the narrative. In a world where AI chatbots are increasingly the first point of contact, the cost of silence is reputational erosion by algorithmic drift. The classic line that “effective crisis management begins long before a crisis” holds true now more than ever. It starts with engaging with the growing complexity of a digital presence and strengthening credibility and visibility in the online spaces that matter, ensuring that when crisis hits you are not just reacting – you are driving the narrative.
