AI: a cybersecurity paradox

December 2025
 by Christopher Butler

2025 saw British institutions from M&S to Harrods to the Legal Aid Agency face significant and disruptive cyberattacks, which not only required costly fixes but also dominated news cycles for extended periods and eroded trust. In its Annual Review, the UK’s National Cyber Security Centre reported a 50% increase in “highly significant” cyberattacks this year, with CEO Richard Horne offering a “wake-up call” to businesses of all sizes and in all sectors: “every organisation must understand their exposure, build their defences, and have a plan for how they would continue to operate without their IT (and rebuild that IT at pace) were an attack to get through.”

For those of us outside the cybersecurity industry, understanding our own IT posture is more essential than ever if we are to safeguard our digital footprints before, during, and after a crisis.

As AI reshapes how most industries operate, cybersecurity is no exception: the landscape is evolving at an unprecedented pace for defenders and attackers alike, transforming how businesses safeguard themselves and how attacks are carried out. Conventional rule-based security tools are quickly becoming obsolete, and manual processes can’t match AI’s speed and efficiency, leaving organisations facing a far graver threat than ever before. At the same time, in the State of AI Cybersecurity 2025 report published by AI cybersecurity specialist Darktrace, 95% of cybersecurity professionals surveyed said they believe AI can improve the speed and efficiency of cyber defence. The very AI fuelling the rise in advanced cyber threats is also our best hope and greatest weapon in defending against attackers. This is the cybersecurity paradox of the AI revolution.

The guardian

From the protector’s side, AI is replacing traditional policies – which rely on manually defined rules to respond to threats – and amplifying defenders with automated machine learning investigations that constantly watch for hidden anomalies.

These machine learning processes are helping systems to detect trends in data and adapt to new threats faster and more accurately than ever before. By constantly probing data across endpoints, servers, network traffic, and broader infrastructure, machine learning models can spot early indicators of intrusion long before a threat manifests.

Traditional processes look for the same kinds of anomalies: unusual login attempts, unexpected privilege escalation (where a user or attacker gains unintended access through vulnerabilities or badly configured systems), abnormal traffic spikes, and data exfiltration patterns that signal the unauthorised transfer or theft of data. AI monitors for these deviations continuously and in a far more efficient and timely fashion, and it can detect unknown threats without prior labelling, making it highly effective against zero-day threats (flaws that hackers exploit before a fix exists).
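To make the idea concrete, here is a minimal sketch of label-free anomaly detection using scikit-learn’s IsolationForest. The features (hour of day, megabytes transferred, failed login attempts) and all of the numbers are hypothetical, and real systems model far richer telemetry, but the principle is the one described above: the model learns what normal activity looks like and flags deviations without ever seeing a labelled attack.

```python
# A minimal sketch of unsupervised anomaly detection on login telemetry.
# Features and values are illustrative, not a production design.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical "normal" events: [hour_of_day, megabytes_transferred, failed_logins]
normal_events = np.column_stack([
    rng.normal(10, 2, 500),   # logins cluster around mid-morning
    rng.normal(5, 1.5, 500),  # modest data transfer per session
    rng.poisson(0.2, 500),    # failed attempts are rare
])

# Train only on observed traffic; no attack labels are required.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_events)

# A 3 a.m. login that moves 400 MB after six failed attempts.
suspicious = np.array([[3, 400, 6]])
print(model.predict(suspicious))            # -1 means flagged as anomalous
print(model.decision_function(suspicious))  # more negative = more anomalous
```

The same pattern scales from toy features like these to endpoint, server, and network telemetry; what changes is the feature engineering and the volume of data, not the underlying idea.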

Machine learning models can also predict likely attack sequences. By enabling earlier threat detection, AI reduces potential damage, data loss, and downtime during attacks, minimising the overall impact of a breach.
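As a toy illustration of what predicting attack sequences can mean in practice, the sketch below builds a tiny Markov chain over hypothetical attack stages; the stage names, incident histories, and resulting probabilities are all invented for the example. Real systems learn these transition statistics from large volumes of incident telemetry rather than four hand-written sequences.

```python
# A toy Markov-chain predictor over attack stages.
# Stage names and incident sequences are hypothetical, for illustration only.
from collections import Counter, defaultdict

# Stage sequences observed in (imaginary) past incidents.
incidents = [
    ["phishing", "credential-theft", "lateral-movement", "exfiltration"],
    ["phishing", "malware-drop", "lateral-movement", "exfiltration"],
    ["scan", "exploit", "privilege-escalation", "exfiltration"],
    ["phishing", "credential-theft", "privilege-escalation", "exfiltration"],
]

# Count transitions between consecutive stages.
transitions = defaultdict(Counter)
for seq in incidents:
    for current, nxt in zip(seq, seq[1:]):
        transitions[current][nxt] += 1

def predict_next(stage: str) -> tuple[str, float]:
    """Return the most likely next stage and its estimated probability."""
    counts = transitions[stage]
    nxt, n = counts.most_common(1)[0]
    return nxt, n / sum(counts.values())

# Having seen credential theft, what should a defender brace for next?
print(predict_next("credential-theft"))  # ('lateral-movement', 0.5)
```

A defender who can score the likely next step can pre-position controls, for example tightening lateral-movement monitoring the moment credential theft is detected, rather than reacting after the fact.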

The intruder

On the other side of this paradox sit the intruders. The same AI capabilities that defenders benefit from are being weaponised by attackers to a degree never before seen in the digital age, and not only for breaching systems. The days of spotting phishing emails by their grammar errors and poorly edited logos are coming to an end. Attackers can now run highly personalised phishing and social engineering campaigns at scale, using large language models to produce relevant, fluent emails that slip past email security far more easily than traditional methods did.

To bypass human scepticism, intruders are also using generative models to create convincing audio and video deepfakes. The technique has existed for years, but with most people now having an online presence to harvest from, and with new AI-enabled generation methods, deepfakes are becoming increasingly realistic. They are being used for fraud, political manipulation, extortion, and impersonation, to name just a few applications.

AI is also accelerating automated exploit discovery. Manual vulnerability research used to be a tedious, time-consuming challenge even for the most tech-savvy hackers; generative coding techniques lower this barrier to entry, making malware far easier to develop. It is not just the machine learning methods themselves but their growing accessibility that matters: attackers who rely less on their own technical abilities can now mount sophisticated attacks, opening the door to a far larger cohort of potential adversaries.

Modern cybersecurity practices are up against an onslaught of faster attack cycles, more convincing deception, and an influx of attackers capable of using advanced technologies.

The arms race

Much like the pre-AI arms race, defenders and attackers remain locked in a continuous feedback loop in which one gets a step ahead and the other implements countermeasures. The main difference now is the speed and automation with which this happens. Manual monitoring remains critical, but on its own it is no longer sufficient. Organisations cannot stand still and defend against modern methods without adjusting their own; doing so will soon be much like trying to quench a forest fire with a garden hose.

Businesses are at a disadvantage in this battle when it comes to ethical and regulatory considerations. Attackers operate without restrictions, but cyber teams must ensure their AI systems don’t breach privacy and data protection policies. Companies that handle sensitive client information face the additional challenge of carrying out proper due diligence on the AI models they interact with, and on the third parties that provide them. Attackers don’t have to worry about the likes of GDPR, whereas defenders must balance AI usage with responsible governance, transparency, and privacy.

Can we defend without AI?

Human expertise is, for now, irreplaceable. Paired with automated models and resilient design, it puts organisations in a good position and provides a solid foundation to build on. Traditional tools and processes remain essential, but relying on them alone, without integrating robust AI tools, leaves gaps in defence – gaps that will only widen as time goes on.

As AI continues to evolve, it is important to plan, ethically and intelligently, how best to embed it within your organisation. AI-based tools and processes will define the next era of cybersecurity resilience, and the protectors who embrace AI have the opportunity to build a stronger, faster, more intelligent line of defence than ever before.
