Social Engineering Attack In-Depth: AI Offense vs. AI Defense

Hamza Sayah

It’s always been challenging for cybersecurity teams to keep up with the evolving threat landscape. But artificial intelligence (AI) has introduced new problems.

Attacks are now faster and easier than ever to execute at scale, with hackers using AI to reduce their overhead and simplify various processes. For example, it can be used as a shortcut to find weaknesses in systems and applications, create phishing and malware attacks, and gather and analyze massive amounts of data.

But it’s not all bad news.

AI can just as effectively be used for defense by blue teams as it can by hackers to launch offensive attacks, as Hamza Sayah demonstrated during a recent talk at the Hacking Lab.

Keep reading to find out how…

How are hackers using AI?

To understand how AI can be used to combat attacks, we first have to examine how it’s being used by bad actors. According to a recent report from the National Cyber Security Centre (NCSC):

  • Over the next two years, hackers will use AI to evolve and enhance existing tactics, techniques and procedures (TTPs), ultimately increasing the volume and heightening the impact of cyber attacks
  • All types of cyber threat actors – state and non-state, skilled and less skilled – are already using AI, to varying degrees
  • AI lowers the barrier for novice cyber criminals, hackers-for-hire, and hacktivists to carry out effective access and information gathering operations. In fact, it’s the least skilled, most opportunistic hackers who will benefit the most.

But which TTPs will be enhanced the most by AI? According to the NCSC: phishing, social engineering, and exfiltration.

This is further confirmed by Microsoft and OpenAI, who recently published research into the ways in which nation-backed groups are using large language models (LLMs) to research targets, improve scripts, and help build social engineering techniques.

During his talk at the Hacking Lab, Hamza detailed exactly how these adversaries might use AI.

In-Depth: Using LLMs to automate social engineering attacks

Let’s first imagine how a hacker might carry out a social engineering attack without the help of AI.

They’d start by conducting in-depth manual research – via social media, for example – or scraping databases on the Dark Web to identify a group of targets. They’d then draft an email rife with spelling and grammar errors (yes, this is done on purpose…), send it out, and wait for a few to “bite”.

If and when any of the recipients respond, the hacker would then have to manually respond, building trust with the target over a series of emails to – eventually – get them to transfer money, click a malicious link, or share sensitive information.

This last part is especially time-consuming, because the success of the social engineering attack relies on some level of personalization. Personalization, naturally, doesn’t scale…at least it didn’t before LLMs were introduced.

Now, a hacker could:

  1. Combine AI with LinkedIn scraping and enrichment tools to create a database of potential targets and extract additional information.
  2. Prompt a tool like ChatGPT to draft a phishing email to hook targets. Thanks to the database created in the first step, they can automatically generate personalized emails based on each target’s name, title, company, etc.

ChatGPT (and other LLMs) can also be used to launch multi-lingual campaigns. This enables hackers to increase their reach, without any additional skills or expertise.

NOTE: While ChatGPT is programmed to "avoid engaging in activities that may harm individuals or cause harm to the public", there are shortcuts. A writer from CNET tried it as a test and was both surprised and worried by how well it worked.

  3. Again leverage ChatGPT to automatically generate and send personalized responses to targets. There are dozens of articles out there outlining how to make the most of LLMs’ email reply writing capabilities. It’s easy for hackers to adapt these prompts to serve their nefarious purposes.

Once they’ve identified “qualified” targets, they can execute the final step of the attack. In this case, sending a malicious link.

How can blue teams use AI for defense?

While – yes – AI introduces new risks and vulnerabilities, it also represents an incredible tool to help enhance protection against a range of threats. In fact, organizations need AI to help them detect threats, especially as they continue to increase in sophistication and volume.

Manual and playbook-based solutions just won’t work, especially given ongoing talent shortages and resource constraints. Teams must fight fire with fire.

In the last 12 months, legacy cybersecurity solutions have introduced new AI-powered features and co-pilots, and a slew of new solutions (like Qevlar AI) have been built around AI from the ground up.

Here are some of the top use cases, according to Hackernoon:

  1. Advanced threat detection
  2. Autonomous alert investigations
  3. User authentication and access control
  4. Minimizing false positives
  5. Endpoint protection

Importantly, AI doesn’t replace the humans in the loop. It just enhances our capabilities.
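Use case 4 (minimizing false positives) can be illustrated with a deliberately simple sketch. The signal names and weights below are assumptions chosen purely for illustration; production systems learn this kind of scoring from data rather than hard-coding it:

```python
# Toy alert-scoring sketch: weight the detection signals that fired on an
# alert and surface only alerts above a risk threshold. Signal names and
# weights are illustrative assumptions, not any vendor's real model.

SIGNAL_WEIGHTS = {
    "known_bad_hash": 0.5,
    "sandbox_network_callout": 0.25,
    "sender_domain_lookalike": 0.2,
    "user_reported": 0.1,
}

def score_alert(signals: set[str]) -> float:
    """Sum the weights of the signals that fired, capped at 1.0."""
    return min(1.0, sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals))

def triage(alerts: dict[str, set[str]], threshold: float = 0.4) -> list[str]:
    """Return the IDs of alerts at or above the threshold, riskiest first."""
    scored = sorted(alerts, key=lambda aid: score_alert(alerts[aid]), reverse=True)
    return [aid for aid in scored if score_alert(alerts[aid]) >= threshold]
```

Everything below the threshold stays out of the analyst queue, which is exactly where the false-positive reduction comes from; the hard part in practice is setting weights and thresholds that don't also suppress true positives.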

In-Depth: Autonomous SOC investigation of social engineering attack

Let’s imagine the social engineering attack we outlined in the previous section was real, and it was picked up by an organization’s SIEM tool. Without AI, a SOC analyst would have to manually investigate the alert, determine whether or not it’s a genuine threat, and identify the best remedial actions.

While, in isolation, this might seem like an easy enough task, we have to remember that on average, SOC teams receive 4,484 alerts every day…
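To put that number in perspective, some back-of-the-envelope arithmetic (the minutes-per-alert figure is an assumption purely for illustration):

```python
# Rough arithmetic on the alert volume cited above. The triage time per
# alert is an assumed figure for illustration; real investigations vary widely.
ALERTS_PER_DAY = 4_484
ASSUMED_MINUTES_PER_ALERT = 5

analyst_hours_per_day = ALERTS_PER_DAY * ASSUMED_MINUTES_PER_ALERT / 60
print(round(analyst_hours_per_day))  # ~374 analyst-hours, every single day
```

Even at an optimistic five minutes per alert, that workload is far beyond what any human team can clear manually.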

That’s where a tool like Qevlar AI comes in:

  1. Qevlar AI integrates with whatever SIEM, EDR, and CTI tools you’re using, and autonomously pulls and enriches data from your environment and external sources.
  2. Qevlar’s autonomous agents conduct exhaustive investigations of every alert as soon as they’re triggered. Unlike SOARs, Qevlar AI takes a non-deterministic approach and doesn’t need to be parametrized. Actions are selected and run 100% independently.
  3. Detailed reports are generated after every investigation, conclusively stating whether or not the threat is genuine. The report includes an incident overview, a breakdown of the factors that were considered malicious and non-malicious, and suggested next steps.

Reports (see below) make it easy for analysts to review and validate end-to-end investigations, and dig deeper into all of the sources and insights behind the conclusive result that the alert is, in fact, a TRUE POSITIVE.

Here’s a full list of the actions that were taken in this hypothetical investigation. Remember, each action is chosen 100% independently, based on the findings from the previous action:

  1. The email from the hacker, “Patrick”, was scanned
  2. The attachment was unzipped and analyzed
  3. All files were hashed
  4. The reputation of each file was checked against antivirus engines
  5. The Excel file was sandboxed to observe its behavior in a controlled environment
  6. The reputation of each external IP address observed in the sandboxing results was checked against antivirus engines
  7. The terminal and process history of each internal IP address was analyzed for suspicious activity

Based on the investigation, Qevlar AI also suggests personalized next steps. In this case, analysts were prompted to:

  1. Isolate the internal IP address from the network to prevent any further malicious activity
  2. Remove the malicious email from the targets’ inboxes
  3. Update the antivirus software on all target machines and perform a full system scan
  4. Review and update email filtering rules to improve phishing detection
  5. Deliver targeted phishing training to all affected employees

Learn more about Qevlar AI

Qevlar AI acts as an invaluable extension of your SOC team, leveraging the power of LLMs to process large and variable security data streams to perform autonomous and detailed investigations. Our advanced AI models are trained on proprietary and public data, and are fine-tuned and re-trained for continuous improvement.

Want to see the platform in action? Book a demo now.
