As digital information sharing threatens to outpace traditional crisis communications, large language models (LLMs) are intensifying the problem.
LLMs underpin the response systems of today’s most widely used AI chatbots – including ChatGPT, Gemini, and Perplexity – shaping how these tools answer fundamental questions such as “Who is…?” or “What is…?” about an individual or company. To answer such queries, these chatbots integrate real-time retrieval systems that reference current web content. If search results or public sources are incomplete, outdated, or unbalanced, the generated response will reflect and reinforce those gaps. In moments of heightened attention or crisis, when online content changes rapidly and query volumes surge, this dynamic accelerates and becomes particularly dangerous.
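To make that dynamic concrete, the sketch below shows a minimal, hypothetical retrieval-augmented answering loop in Python. The function names (search_web, generate) are illustrative placeholders rather than the actual APIs behind ChatGPT, Gemini, or Perplexity; the point is simply that whatever the retrieval step returns becomes the raw material from which the answer is generated.

```python
# Minimal sketch of retrieval-augmented answering (illustrative only).
# search_web and generate are hypothetical stand-ins for a real search API
# and a real LLM completion API.

def search_web(query: str, k: int = 5) -> list[str]:
    """Stand-in for a live web-search call returning the top-k result snippets."""
    # A real system would query a search index here; this toy version returns
    # placeholder snippets so the control flow can be read end to end.
    return [f"Snippet {i + 1} currently published about: {query}" for i in range(k)]

def generate(prompt: str) -> str:
    """Stand-in for an LLM completion call."""
    return f"[completion conditioned on {len(prompt)} characters of retrieved context]"

def answer(query: str) -> str:
    # 1. Retrieve whatever the live web currently says about the subject.
    snippets = search_web(query)

    # 2. The retrieved snippets become the model's working context. If they are
    #    incomplete, outdated, or one-sided, the answer inherits that bias,
    #    because the model has nothing more authoritative to weigh against them.
    context = "\n\n".join(snippets)
    prompt = (
        "Answer the question using the sources below.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

    # 3. Generate the response shown to the user asking "Who is...?" or "What is...?"
    return generate(prompt)

print(answer("Who is the CEO of Example Ltd?"))
```

The sketch also shows where intervention is possible: only the content surfaced at the retrieval step is available to the model when it generates its answer.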
Filling the void
The absence of authoritative digital content in an online record creates an information vacuum. LLMs, driven by their fundamental directive to answer user queries, are prone to filling these vacuums with whatever information helps them respond to the query at hand. That information can range from sensationalist news content and historical negative narratives to outright erroneous conflations, contextual errors, and the well-publicised issue of ‘hallucinations’. When a crisis hits an individual or organisation whose digital profile lacks factual, up-to-date information from the source itself, users searching for the story via LLMs are presented with either what other people are saying or what the chatbots are erratically conflating.
Information vacuums become even more damaging when perceived gaps in the public record are filled with AI-generated falsehoods. The Coldplay concert ‘kiss cam’ scandal in July 2025 offers a recent example: false AI-generated statements, supposedly issued on behalf of various parties involved, circulated widely online. The executive’s company later confirmed the statements were false and set the record straight, but not before they had been shared and engaged with as though genuine across multiple social media platforms, and picked up as article topics by tier-one media outlets, cementing them in the online record.
A further example came a few months later, after political activist Charlie Kirk was killed, when AI-enhanced images of a supposed suspect were disseminated online by civilians and even government officials, distorting public understanding of the case and complicating law enforcement efforts.
Creating the crisis
Based on these escalations, it is plausible that realistic AI-generated statements, images, and reasoning could be presented in chatbot responses as verified fact. The idea of ‘model collapse’ describes the possible ‘Ouroboros’ effect of AI feeding AI, muddying which chatbot responses can be relied upon and which cannot. We are already seeing the beginnings of this dynamic in the political sphere, where AI-generated content has created a party-political crisis rather than merely reacting to one. In October, the UK Member of Parliament (MP) for Mid Norfolk was the subject of an AI-generated deepfake video in which he appeared to criticise his own political party and announce his defection to another. The video had been widely shared before it was debunked by the Local Democracy Reporting Service (LDRS) – the BBC-funded public service news agency – and reported to the police. The reputational risk remains, however: AI can now convincingly speak for individuals, even when they have not spoken at all.
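The compounding effect the ‘Ouroboros’ metaphor points to can be illustrated with a deliberately simple statistical toy, sketched below. A ‘model’ – here just a fitted normal distribution – is repeatedly re-fitted to small samples drawn from its own previous output, and over enough generations the spread of the distribution drifts toward zero. This is a caricature under strong simplifying assumptions, not a simulation of any real LLM training pipeline.

```python
import random
import statistics

# Toy caricature of 'model collapse': a fitted normal distribution is
# repeatedly re-fitted to small samples drawn from its own previous output.
# Estimation noise compounds generation after generation and the spread
# drifts toward zero: diversity is lost as AI-generated data feeds the
# next model. Illustrative only; not a simulation of real LLM training.

random.seed(42)
mean, stdev = 0.0, 1.0                    # generation 0: the "real" distribution
for generation in range(1, 51):
    synthetic = [random.gauss(mean, stdev) for _ in range(5)]   # AI-made data
    mean = statistics.fmean(synthetic)    # the next model is fitted only to it
    stdev = statistics.stdev(synthetic)
    if generation % 10 == 0:
        print(f"generation {generation:2d}: standard deviation = {stdev:.4f}")
```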
Compounding these challenges is the erosion of institutional fact-checking infrastructure. Full Fact, one of the UK’s leading fact-checking organisations, recently announced the loss of support from one of its biggest funders, Google. Meta, most notably, began reducing its content moderation workforce in January 2025 and continued in April 2025 by cancelling its content moderation contract with Canadian tech company TELUS, which affected its moderation centre in Spain. This August, social media platform TikTok announced plans to cut hundreds of content moderation positions at its London site, signalling a shift toward LLM-based moderation. As the volume of AI-generated content grows exponentially, the need for rigorous, independent verification becomes more urgent, not less. Without it, the digital record becomes increasingly vulnerable to distortion, and reputational damage becomes harder to contain.
The cost of silence
The proliferation of generative AI has not changed the fundamentals of crisis communications, but it has raised the stakes. Control over your digital footprint is no longer just about search rankings or social media sentiment; it is a more complex challenge: to shape the data that shapes the narrative. In a world where AI chatbots are increasingly the first point of contact, the cost of silence is reputational erosion by algorithmic drift. The classic line that “effective crisis management begins long before a crisis” holds true now more than ever. It starts with engaging with the growing complexity of a digital presence and strengthening credibility and visibility in the online spaces that matter, ensuring that when crisis hits you are not just reacting – you are driving the narrative.