
Generative AI is making phishing attacks more dangerous

Cybercriminals are using AI chatbots such as ChatGPT to launch sophisticated business email compromise attacks. Cybersecurity practitioners must fight fire with fire.

As generative AI grows more popular and more capable with each successive model release, it is also becoming more deeply embedded in the threat actor's arsenal.

To mitigate increasingly sophisticated AI phishing attacks, cybersecurity practitioners must both understand how cybercriminals are using the technology and embrace AI and machine learning for defensive purposes.

AI phishing attacks

On the attack side, generative AI increases the effectiveness and impact of a variety of cyberthreats and phishing scams. Consider the following.

General phishing attacks

Generative AI can make traditional phishing attacks -- via emails, direct messages and spurious websites -- more realistic by eliminating spelling errors and grammatical mistakes and adopting convincingly professional writing styles.

Large language models (LLMs) can also absorb real-time information from news outlets, corporate websites and other sources. Incorporating of-the-moment details into phishing emails could both make the messages more believable and generate a sense of urgency that compels targets to act.

Finally, AI chatbots can create and spread business email compromise and other phishing campaigns at a much faster rate than humans could on their own, dramatically expanding the scale and reach of such attacks.

Spear phishing

Spear phishing attacks use social engineering to target specific individuals with information gleaned from social media sites, data breaches and other sources. AI-generated spear phishing emails are often very convincing and likely to trick recipients.

At Black Hat USA 2021, for example, Singapore's Government Technology Agency presented the results of an experiment in which the security team sent simulated spear phishing emails to internal users. Some were human-crafted and others were generated by OpenAI's GPT-3 technology. More people clicked the links in the AI-generated phishing emails than in the human-written ones, by a significant margin.

Fast-forward to today, when LLM technology is more widely available and increasingly sophisticated. Generative AI can -- in a matter of seconds -- collect and curate sensitive information about an organization or individual and use it to craft highly targeted and convincing messages, and even deepfake phone calls and videos.

Vishing

Vishing, or voice phishing, uses phone calls, voice messages and voicemails to trick people into sharing sensitive information. Like other types of phishing, vishing attacks typically try to create a sense of urgency, perhaps by referencing a major deadline or a critical customer issue.

In a traditional vishing scam, the cybercriminal collects information on a target and makes a call or leaves a message pretending to be a trusted contact. For example, a massive ransomware attack on MGM Resorts reportedly began when an attacker called the IT service desk and impersonated an MGM employee. The malicious hacker was able to trick the IT team into resetting the employee's password, giving the attacker network access.

Generative AI is changing vishing attacks in the following two ways:

  1. As previously discussed, AI technology can make the research stage more efficient and effective for attackers. An LLM such as GPT-3 can collect information for social engineering purposes from across the web, nearly instantly.
  2. Attackers can also use generative AI to clone the voice of a trusted contact and create deepfake audio. Imagine, for example, an employee receives a voice message from someone who sounds exactly like the CFO, requesting an urgent bank transfer.
[Figure: screenshot of a phishing email in which the attacker impersonates 'The Google Team' and invites the user to confirm account ownership by clicking a link.]
Phishing emails such as this one are likely to become increasingly sophisticated and believable as generative AI becomes smarter and more accessible to attackers.

How to defend against AI phishing attacks

Generative AI will clearly make life more difficult for cybersecurity practitioners and end users alike. But AI tools can also bolster defenses in the following ways.

How to detect AI phishing attacks

They say it takes one to know one, and, unsurprisingly, AI tools are uniquely suited to detecting AI-powered phishing attempts. For this reason, security leaders should consider deploying generative AI for email security purposes.
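To make the idea concrete, the following minimal sketch scores inbound email text with an off-the-shelf zero-shot classifier from the Hugging Face Transformers library. The model choice, candidate labels and quarantine threshold are illustrative assumptions, not a vetted email security product.

```python
# Minimal sketch: scoring inbound email text for phishing likelihood.
# Assumes the Hugging Face transformers library; the model, labels and
# threshold below are illustrative choices, not a production mail filter.
from transformers import pipeline

# Zero-shot classification scores text against arbitrary labels without
# first training a dedicated phishing detection model.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

LABELS = ["phishing or social engineering", "legitimate business email"]

def phishing_score(email_text: str) -> float:
    """Return the model's confidence (0-1) that the email is phishing."""
    result = classifier(email_text, candidate_labels=LABELS)
    # result["scores"] is aligned with result["labels"], sorted by confidence.
    scores = dict(zip(result["labels"], result["scores"]))
    return scores["phishing or social engineering"]

if __name__ == "__main__":
    sample = (
        "URGENT: Your account will be suspended today. "
        "Wire the outstanding invoice immediately and confirm via this link."
    )
    score = phishing_score(sample)
    if score > 0.8:  # quarantine threshold is a tunable assumption
        print(f"Quarantine for review (score={score:.2f})")
    else:
        print(f"Deliver (score={score:.2f})")
```

In practice, a purpose-built detection model integrated with the mail gateway would replace this general-purpose classifier, but the score-and-quarantine flow would look much the same.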

That said, CISOs must also keep operational expenses in mind. While using an AI model to monitor all incoming messages could go a long way toward preventing AI phishing attacks, for example, the cost of doing so could still prove prohibitively high.

In the future, however, models will likely become more efficient and cost-effective, as they become increasingly curated and customized -- built on smaller data sets that focus on specific industries, demographics, locations, etc.

End-user training

Generative AI models can make security awareness training much more customized, efficient and effective.

For instance, an AI chatbot could automatically adapt a training curriculum on a user-by-user basis to address each individual's weak spots, based on historical or real-time performance data.

Additionally, the technology could identify the learning modality that best serves each employee -- in-person, audio, interactive, video, etc. -- and present the content accordingly. By maximizing security awareness training's effectiveness at a granular level, generative AI could significantly reduce overall cyber-risk.
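As a simple illustration of the first point, the sketch below prioritizes training modules based on the phishing-simulation categories each user fails most often. The failure records, categories and module names are invented for illustration; a real training platform would supply far richer performance data.

```python
# Minimal sketch: adapting a security awareness curriculum per user.
# The simulation categories, modules and failure data are hypothetical.
from collections import Counter

# Hypothetical record of which simulated-phishing categories each user
# failed (clicked a link, entered credentials, etc.).
FAILURES = {
    "alice": ["vishing", "vishing", "spear_phishing"],
    "bob": ["general_phishing"],
}

MODULES = {
    "vishing": "Spotting voice deepfakes and urgent phone requests",
    "spear_phishing": "Recognizing personalized lures",
    "general_phishing": "Inspecting links and sender addresses",
}

def build_curriculum(user: str, max_modules: int = 2) -> list[str]:
    """Prioritize modules for the categories the user fails most often."""
    counts = Counter(FAILURES.get(user, []))
    weakest = [category for category, _ in counts.most_common(max_modules)]
    return [MODULES[category] for category in weakest]

for user in FAILURES:
    print(user, "->", build_curriculum(user))
```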

Context-based defenses

AI and machine learning tools can quickly collect and process a vast array of threat intelligence to predict and prevent future attacks and detect active threats. For example, AI might analyze historical and ongoing incidents across a variety of organizations, based on the following characteristics:

  • Types of cyber attacks.
  • Geographic regions targeted.
  • Organizational sectors targeted.
  • Departments targeted.
  • Types of employees targeted.

Using this information, generative AI could identify which types of attacks a given organization is most likely to experience and then automatically train security tools accordingly. For instance, AI might flag particular malware signatures for an antivirus engine, update a firewall's blocklist or trigger mandatory secondary or tertiary authentication methods for high-risk access attempts.
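The sketch below illustrates the last of those ideas: combining contextual signals into a risk score that decides whether an access attempt should trigger step-up authentication. The signals, weights and threshold are illustrative assumptions, not a real identity provider's policy engine.

```python
# Minimal sketch: context-based risk scoring for an access attempt.
# Signal names, weights and the step-up threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class AccessAttempt:
    department: str             # e.g., "finance" -- frequently targeted
    geo_anomaly: bool           # login from an unusual region
    new_device: bool            # device not previously seen for this user
    sector_threat_level: float  # 0-1, from threat intelligence feeds

def risk_score(attempt: AccessAttempt) -> float:
    """Combine contextual signals into a 0-1 risk score."""
    score = 0.3 * attempt.sector_threat_level
    if attempt.department in {"finance", "it_helpdesk"}:
        score += 0.2  # departments most often targeted in BEC campaigns
    if attempt.geo_anomaly:
        score += 0.3
    if attempt.new_device:
        score += 0.2
    return min(score, 1.0)

attempt = AccessAttempt("finance", geo_anomaly=True,
                        new_device=False, sector_threat_level=0.7)
score = risk_score(attempt)
# High-risk attempts trigger secondary authentication, as described above.
if score >= 0.6:
    print(f"Step-up auth required (risk={score:.2f})")
else:
    print(f"Allow with standard auth (risk={score:.2f})")
```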

Generative AI is undeniably changing our connected world at a dramatic pace. Security leaders must be aware of how malicious hackers are using AI and machine learning technology and fight fire with fire -- using the same technology to strengthen the defense model.

Ashwin Krishnan is a technical writer based in California. He hosts StandOutin90Sec, where he interviews cybersecurity newcomers, employees and executives in short, high-impact conversations.
