How Social Engineering is Evolving with the Use of AI-Driven Tools

September 2024
By Sky Ojo

Social engineering is a tactic that exploits individuals through deceit or coercion to gain access to personal or sensitive information. Traditional methods include phishing and smishing. With the advent of AI tools and technology, however, it is becoming ever easier to conduct sophisticated, less obvious attacks. These advancements give scammers multiple new avenues to exploit, and the ease with which such scams can now be run increases the risk to individuals and organisations alike, evidencing growing vulnerabilities in the online world.

Social engineering techniques play on human interaction, using some form of communication to lure and deceive people into inadvertently exposing sensitive information or granting scammers access to secure areas, thus compromising the safety of an individual or an organisation. Bad actors rely on human emotion: they may exploit your trust by posing as your bank, or manipulate your desires to entice you into following a link. Most commonly, though, they play on fear, presenting a scenario in which you feel at risk, the very moment you are most likely to make decisions in haste without properly assessing them.

The effectiveness of social engineering lies in its simplicity and ease of execution. Weaknesses can be identified simply by examining an individual's or an organisation's digital footprint, extracting information that can later be used to deceive the target whilst the perpetrator poses as a legitimate entity.

Phishing, smishing and vishing

There are numerous types of social engineering; the most commonly observed is phishing, the umbrella term for a cyber-attack in which bad actors use communication in any form to conduct fraudulent activity. This could be an email in which the perpetrators mimic legitimate correspondence to dupe individuals into sharing details such as banking information or passwords, or one that includes a link to a fraudulent website or a malicious download designed to infiltrate devices. Another common method sees the bad actor use text messages to support their activities, a practice coined smishing. Smishing attackers mimic messages from a bank or a postal delivery service in an attempt to deceive victims into sharing sensitive information. Similarly, a bad actor could attempt the same via a phone call or voicemail, otherwise known as vishing.
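To make the mechanics concrete, the short Python sketch below shows one naive way a mail filter might flag the lookalike domains that phishing links rely on, such as 'barc1ays.co.uk' standing in for 'barclays.co.uk'. It is an illustration only; the trusted-domain list and similarity threshold are hypothetical examples, not a production-grade filter.

```python
# Illustrative sketch only: a naive lookalike-domain check of the kind a mail
# filter might apply to links in an incoming email. The trusted-domain list
# and similarity threshold below are hypothetical examples.
from difflib import SequenceMatcher
from urllib.parse import urlparse

TRUSTED_DOMAINS = ["barclays.co.uk", "royalmail.com"]  # example entries only

def looks_like_spoof(url: str, threshold: float = 0.8) -> bool:
    """Flag a URL whose domain closely resembles, but does not exactly
    match, a trusted domain (e.g. 'barc1ays.co.uk', 'royal-mail.com')."""
    domain = urlparse(url).netloc.lower().split(":")[0]
    for trusted in TRUSTED_DOMAINS:
        if domain == trusted or domain.endswith("." + trusted):
            return False  # exact match or a legitimate subdomain
        if SequenceMatcher(None, domain, trusted).ratio() >= threshold:
            return True   # near-miss: likely a lookalike domain
    return False

print(looks_like_spoof("https://barc1ays.co.uk/login"))  # True
print(looks_like_spoof("https://barclays.co.uk/login"))  # False
```

Real filters combine far richer signals (homoglyph tables, domain age, reputation feeds), but even this crude string comparison shows how mechanically a convincing fake address can be constructed, and, conversely, detected.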

Generative AI is assisting bad actors in making such scams more commonplace, with social engineering activities becoming more efficient and sophisticated. Emerging AI technologies allow criminals to enhance their scams: phishing can be automated, with AI mimicking tone and language to write convincing messages, and cloning technology can copy someone's voice, producing realistic audio for phone calls or voicemails. Reports of such scams have ballooned recently; parents, for example, have received phone calls in which a family member is allegedly being held to ransom, when in fact the audio has been artificially created. In more sophisticated cases, AI can meld several techniques to execute a realistic scenario. Such was the case in February 2024, when it was reported that an advanced scam saw an employee at a professional services firm transfer HK$200 million (£20 million) to scammers. The victim was initially sceptical on receiving an invitation to join a video conference, but upon joining the meeting, which appeared to include 15 senior employees of the firm, the employee fell victim to the scam. In reality, the only real person in the meeting was the victim: the 15 other employees had had their identities cloned, with AI-generated deepfake technology used to replicate the audio and video of each of them.

Data collection and analysis

Generative AI isn't just used to create content. It can also collect, store and process large amounts of data, which is especially useful to bad actors who infiltrate systems to gather information on an organisation. Sifting through copious amounts of data is time-consuming for a human, but AI can analyse it quickly and efficiently to extract whatever is most valuable to the attacker. Additionally, scammers can exploit AI to create complex malware potentially capable of evading detection, or manipulate the models themselves to bypass their built-in safeguards.

In August 2024, the media outlet Wired reported on weaknesses in Microsoft's Copilot AI system. Researcher Michael Bargury found that it could be manipulated to gain access to, and extract information from, emails, Teams chats and files, and even to generate phishing emails automatically in a user's own writing style. The technique uses Copilot to identify the people you communicate with most frequently, then drafts a convincing email copying the tone and language of your messages, which can be distributed across your network. Microsoft has since acknowledged the weaknesses and emphasised the need for robust security measures to mitigate opportunities to abuse AI technology.

Minimising the risks

Whilst generative AI tools such as Copilot and ChatGPT have implemented safeguards to mitigate malicious use, a combination of know-how and trial and error can often circumvent them. Furthermore, tools such as FraudGPT and WormGPT have surfaced specifically to support criminal activity. These uncensored 'dark LLMs' are optimised for social engineering: creating sophisticated phishing campaigns, malware and scam content.

While traditional methods of social engineering, like phishing scams, remain common, AI-driven tools let scammers create ever more convincing and complex scams, aided by the ease of producing authentic-looking emails, audio and video alongside malware. These developments highlight the growing vulnerabilities we face as individuals and organisations as the use and capability of AI tools rapidly expand. As AI technology continues to evolve, it has never been more pressing to maintain robust digital hygiene and to prioritise and implement the security measures needed to minimise the risks posed by AI-driven social engineering.
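As one small, concrete example of such a measure, the Python sketch below (a minimal illustration, assuming the third-party dnspython package is installed) checks whether a sender's domain publishes SPF and DMARC records, two standard email-authentication controls; domains without them are easier to spoof convincingly.

```python
# A minimal sketch, assuming the third-party 'dnspython' package
# (pip install dnspython). It checks whether a sender's domain publishes
# SPF and DMARC records -- one small, automatable piece of email hygiene.
import dns.resolver

def has_txt_record(name: str, marker: str) -> bool:
    """Return True if any TXT record at 'name' starts with 'marker'."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return False
    return any(
        b"".join(r.strings).decode(errors="replace").startswith(marker)
        for r in answers
    )

def check_sender_domain(domain: str) -> dict:
    """Report whether SPF and DMARC records exist for the given domain."""
    return {
        "spf": has_txt_record(domain, "v=spf1"),
        "dmarc": has_txt_record(f"_dmarc.{domain}", "v=DMARC1"),
    }

print(check_sender_domain("example.com"))  # e.g. {'spf': True, 'dmarc': False}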
