Beyond the feed: when online conversations become real‑world threats

December 2025
 by Sheza Raheel

In an age where a casual chat can reach millions, online spaces are no longer self-contained. What begins as a message, thread, or post can quickly evolve into offline consequences, ranging from privacy violations to threats to personal safety.

The challenge is made harder because the profile of risk is shifting. Security risks, including violence and extremism, are no longer always rooted in a structured ideology, clear group membership, or well-defined doctrine. Instead, they increasingly take fragmented, personalised, and even nihilistic forms of violence and mobilisation, where the trigger may be grievance and resentment rather than a traditional ideological manifesto.

These evolving threats increasingly play out across digital platforms, where rapid communication and viral reach can turn isolated online discussions into coordinated real-world actions. In today's shifting threat landscape, online platforms serve as both amplifiers and incubators of violence, from X and Reddit, where messages can reach thousands instantly, to semi-private spaces such as Discord.

Dangers in the digital playground

Discord, as an example, has grown far beyond its origins as a gamer chat platform, and now hosts millions across public servers, community hubs, and interest-based groups. This has led to the platform’s misuse in recent years, transforming it into a space where discussions about extremism and violence occur with relative ease. Research shows that the platform has repeatedly been used by extremist groups to coordinate protests, share propaganda, and circulate ideological materials.

For instance, in the US and UK, researchers have found that extremists are increasingly targeting teenagers through gaming and livestreaming platforms such as Discord. Vulnerable youths are funnelled from mainstream social media to these platforms, where unmoderated chats and livestreams create "digital playgrounds" for extremist activity. In the US, since August 2023, at least three plots involving juveniles communicating on Discord or other gaming platforms have been disrupted. More broadly, Western countries disrupted over 20 juvenile-driven plots between January and November 2024. Extremist groups that have been deplatformed, meaning removed from major online platforms, have become more sophisticated, focusing on building rapport rather than overtly preaching ideology.

Following major incidents, Discord expanded its safety efforts in recent years, building out its Trust and Safety team and introducing new tools to detect harmful content, but the platform still faces significant challenges. With millions of private and community servers, much of the activity on Discord takes place in spaces that are not continuously monitored by Discord staff. While automated tools and user reports help enforce rules, much of the day‑to‑day moderation in these servers falls to volunteer moderators, which can make comprehensive oversight challenging.

The risks associated with such platform misuse are intensified by a broader societal trend. In the US, political violence has risen sharply, with more than 520 plots or attacks in the first half of 2025 alone, resulting in 96 deaths and 329 injuries, a nearly 40% increase from 2024. Many of these incidents appear to have been preceded or reinforced by online coordination, underscoring how digital interactions can translate into real-world harm.

A multi-platform challenge

Discord is not the only platform facing these challenges. As moderation tightens in one space, harmful networks often migrate to others. Telegram is another such example: a messaging platform offering encrypted chats, large group chats, and public channels, making it a space for rapid, wide-reaching communication. Telegram's structure allows communities to form around shared interests, but the combination of encryption, limited oversight, and large networks means content can spread quickly without being seen by moderators.

While the platform provides legitimate spaces for discussion, it has also been misused to coordinate harmful activity. In the UK, networks on Telegram have been used to organise real-world events, often leveraging automated accounts and AI-generated content to amplify their reach. Some of these networks have also been linked to foreign influence campaigns, including attempts from abroad to encourage activity. Although Telegram removes channels that breach its terms of service, the platform's encrypted nature makes proactive monitoring challenging, leaving authorities and communities exposed.

A recent study by researchers at George Washington University found that “online hate thrives and survives on smaller social media platforms”, with coordinated campaigns in the US often originating on sites such as Telegram and Discord before spreading to larger, mainstream networks. Major social platforms such as X can enable the rapid spread of content and amplify messages first shared in more niche corners of the internet to larger audiences, where they can be seen by millions within minutes.

Offline echoes

This interconnected digital landscape underscores a critical point: the divide between online interactions and offline harm has never been more porous. Harmful content can spread rapidly, whether amplified by viral platforms such as X or shared in encrypted spaces such as Telegram and Discord, making it harder than ever to separate digital activity from its offline consequences. From influencing public sentiment to driving mobilisation, what starts as a conversation can quickly escalate into action. In this environment, identifying patterns early, especially before they spill across the digital–physical boundary, is crucial.

Building strong monitoring and intelligence capabilities is no longer a reactive measure; it is an essential safeguard. For organisations and high-profile individuals, understanding how narratives move across platforms helps anticipate emerging risks, protect people, and prevent harm before it happens. As online and offline realities continue to converge, proactive monitoring becomes not just a tool for awareness, but a pillar of resilience in the digital age.


© Digitalis Media Ltd.