
The Role of AI in Phishing Attacks and Email Security

15 January | Business IT Support

AI has certainly changed the world and is now a cornerstone of most industries. From streamlining workflows and analysing massive data sets to innovations in healthcare, finance, and customer service, AI plays a crucial role in technological progress. But with that comes a darker reality: the same AI systems designed to help businesses and improve lives are being used by cybercriminals. In particular, phishing attacks – already the most common form of cybercrime – have become even more sophisticated and effective with the addition of AI.

Phishing has evolved, and AI is a big part of that evolution: it is a powerful tool for strengthening email defences, yet it is also being used by cybercriminals to create more convincing and sophisticated attacks.

How Cybercriminals Use AI in Phishing Attacks

Phishing emails used to be easy to spot. They were often riddled with poor grammar, awkward sentence structure and other obvious giveaways. Today, with the rise of generative AI tools such as large language models (LLMs), cybercriminals can create polished and convincing emails that closely mimic legitimate communications.

Using AI, attackers can use the same language, tone and formatting found in legitimate emails to create phishing emails that are almost indistinguishable from the real thing. For example, generative AI can create emails with precise wording, accurate logos and personal details, making them even more believable. This means that even the least technically savvy attackers can launch sophisticated campaigns.

AI is also being used in spear phishing, where attacks are tailored to specific individuals or organisations. Cybercriminals are using AI to mine data from publicly available sources such as social media and professional profiles to personalise their attacks. This targeted approach makes the emails more compelling and increases the likelihood that the recipient will fall victim to the scam.

Statistics

A 2023 Statista survey of information technology and cyber security professionals in global organisations found that 80% of companies are concerned that AI could be used in cyberattacks. Furthermore, approximately 70% of respondents believe that AI-driven attacks are not just a possibility but an inevitability in the near future.

As large language models that can generate human-like content become more prevalent, these concerns are becoming more pressing. The increasing sophistication of AI technology means that phishing emails that were once easy to spot because of their obvious mistakes are becoming more difficult to identify. This is a worrying future for organisations, who will need to be prepared to defend against highly convincing, AI-powered attacks that are increasingly difficult to distinguish from genuine communication.

According to the latest statistics, 40% of all phishing emails targeting businesses are generated using AI, enabling attackers to craft messages that are polished, convincing, and tailored to their victims. Notably, 60% of recipients fall victim to AI-generated phishing emails, matching the success rate of traditional, human-written attempts. This highlights a critical challenge: while AI enhances the sophistication of phishing campaigns, human vulnerability remains a constant. Businesses must prioritise robust email security measures and employee awareness training to combat these increasingly intelligent threats effectively.

Example

Let’s take a closer look at the email below, which showcases the increasing sophistication of phishing attempts—likely crafted with the help of AI. At first glance, it appears professional, leveraging branding, a well-known sender domain, and formal language to manipulate the recipient into taking action. However, as we dissect it further, we’ll uncover subtle yet crucial red flags that expose its true intent.

This phishing email appears to be quite sophisticated at first glance, but there are noticeable red flags upon closer inspection.

Strengths of the Email:

  • Professional Formatting: The email uses Facebook’s logo and a formal structure, which lends it an air of legitimacy.
  • Regulatory Language: References to “UK regulations” and “guidelines governing content and advertising practices” add credibility.
  • Call-to-Action: The “Submit an Appeal” button provides a clear path for action, which could tempt recipients to click without caution.

Red Flags:

  • Sender Address: The sender’s email address is [email protected], which does not match an official Facebook domain.
  • Urgent Tone: The subject line and content create urgency by stating “Take Action to Avoid Permanent Deactivation,” pushing the recipient to act without thinking critically.
  • Vague Details: The email doesn’t specify the exact page or provide clear, personalised information—something a legitimate organisation would include.
  • Threatening Deadline: The January 10, 2024, deadline is likely designed to rush recipients, preventing them from investigating the claim.
  • Suspicious Link: The ‘Submit an Appeal’ button likely leads to a malicious phishing website. Hovering over it would reveal a mismatched, non-Facebook URL. If hovering is not possible, you can right-click, copy the link, and paste it into a safe location to inspect it; the URL is likely to be a completely different domain from Facebook or Salesforce.
  • Unnecessary Details: The footer includes a US address (San Francisco), which feels disconnected from the UK regulations mentioned earlier.
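Two of the red flags above, a sender address on the wrong domain and a link pointing somewhere other than the claimed brand, can be checked programmatically. The sketch below is purely illustrative: the allow-listed domains, addresses, and URLs are hypothetical examples, and real email security tools use far more robust parsing than this simple regex.

```python
# Illustrative checks for two red flags: sender-domain mismatch and
# links whose destination does not belong to the expected brand.
from email.utils import parseaddr
import re

LEGITIMATE_DOMAINS = {"facebook.com", "facebookmail.com"}  # assumed allow-list

def sender_domain_mismatch(from_header: str) -> bool:
    """True if the From: address is not on the expected domain list."""
    _, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    return domain not in LEGITIMATE_DOMAINS

def mismatched_links(html_body: str) -> list[str]:
    """Return href targets whose host is outside the expected domains."""
    hrefs = re.findall(r'href="(https?://[^"]+)"', html_body)
    suspicious = []
    for url in hrefs:
        host = re.sub(r"^https?://", "", url).split("/")[0].lower()
        if not any(host == d or host.endswith("." + d) for d in LEGITIMATE_DOMAINS):
            suspicious.append(url)
    return suspicious

print(sender_domain_mismatch("Facebook Support <support@salesforce-mail.example>"))
print(mismatched_links('<a href="https://fb-appeal.example/submit">Submit an Appeal</a>'))
```

This mirrors the manual advice above (check the sender domain, inspect the link before clicking), but note that attackers can defeat naive allow-lists with look-alike domains, which is one reason AI-driven detection goes beyond simple rules.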

AI’s Role in Advanced Attack Techniques

AI is also being used to bypass traditional security measures. Attackers are leveraging phishing kits and tools enhanced with AI to evade secure email gateways (SEGs) and other detection systems. These kits often incorporate features such as:

  • Anti-bot protection, which ensures that only human users interact with fake phishing pages, making detection by automated tools more difficult.
  • Real-time adaptability, where the phishing site dynamically changes based on the user’s actions, making it harder for victims to recognise the deception.
  • Multi-factor authentication (MFA) fatigue attacks, which exploit users’ reliance on MFA by bombarding them with repeated authentication requests until they relent, granting the attacker access.

These AI-powered methods make phishing campaigns faster, larger in scale, and more effective, leaving traditional defences struggling to keep up.

AI as a Defence Tool in Email Security

While AI poses significant challenges, it also plays a vital role in bolstering email security. Businesses are increasingly adopting AI-powered solutions to counteract the sophistication of modern phishing attacks. AI excels in areas such as real-time threat detection, behavioural analysis, and anomaly identification.

AI-driven email security tools can:

  • Detect anomalies in email content by comparing them to typical communication patterns. For instance, if an email appears to come from a trusted sender but exhibits subtle deviations in language or metadata, AI can flag it as suspicious.
  • Analyse user behaviour to identify unusual activity, such as a login attempt from an unfamiliar location or a sudden spike in outgoing emails, which may indicate a compromised account.
  • Quarantine phishing emails before they reach the inbox, reducing the risk of human error.

These AI systems are particularly effective at identifying zero-day threats—new and emerging attacks that traditional rule-based systems may miss. By continuously learning and adapting to new attack patterns, AI strengthens organisations’ ability to stay ahead of cybercriminals.
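The behavioural-analysis idea described above can be illustrated with a minimal statistical sketch: flag an account whose hourly outgoing-email count deviates sharply from its own historical baseline. The baseline figures and threshold here are invented for illustration; commercial tools model many more signals (login locations, language patterns, metadata) with learned models rather than a single z-score.

```python
# Minimal behavioural-anomaly sketch: flag a sudden spike in outgoing
# email volume relative to an account's historical baseline.
from statistics import mean, stdev

def is_anomalous(history: list[int], current: int, threshold: float = 3.0) -> bool:
    """Flag `current` if it lies more than `threshold` standard
    deviations above the historical mean of emails sent per hour."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current > mu  # flat history: any increase is unusual
    return (current - mu) / sigma > threshold

baseline = [4, 6, 5, 7, 5, 6, 4, 5]   # typical emails sent per hour
print(is_anomalous(baseline, 6))      # normal activity -> False
print(is_anomalous(baseline, 60))     # possible compromised account -> True
```

A spike like the second case would prompt a real system to quarantine outgoing mail or require re-authentication rather than simply print a flag.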

Balancing Risks and Opportunities

The dual role of AI in email security calls for a balanced approach. While AI helps attackers create more sophisticated phishing attacks, it also gives defenders more advanced tools to detect and mitigate those threats. Businesses need to weigh the risks and opportunities of AI and tailor their defences to their specific needs.

Key to this is layered security. No single tool can fix everything, so businesses should combine AI-powered email security tools with traditional defences such as multi-factor authentication, encryption, and regular software updates. Staff training is just as important, so that employees can spot phishing attempts and respond correctly.

Join the Fight Against Phishing: Free Webinar for SMBs and Beyond

At Labyrinth Technology, we place cybersecurity and best practices at the heart of everything we do—not just for our clients, but for businesses of all sizes. We believe that informed decisions start with knowledge, which is why we’re hosting a free webinar on 28 January 2025 at 10:30 AM.

This webinar will explore phishing trends, the growing role of AI in cyberattacks, and practical strategies to strengthen your organisation’s defences. It is designed to equip businesses with the insights needed to stay secure in 2025. As a bonus, attendees will receive a FREE email security health check.

Don’t miss this opportunity—register today!

About the author: Szilvia Gagyi

Empowering London Businesses with Efficient IT Solutions to Save Time and Stay Ahead of the Competition.
