The common criminal tactics employed by cybercrime gangs will probably be familiar to you. Methods such as social engineering and malware are mainstays in cybersecurity. This familiarity might lead organizations to mistakenly conclude that generative AI hasn’t changed much for attackers. However, this couldn’t be further from the truth.

Generative AI is fundamentally changing how cybercrime gangs operate, amplifying their ability to scale and target attacks. It is fast becoming a powerful tool in their quest for financial gain.

In this blog post, we’ll look at how common cybercrime tactics are being up-leveled by AI and examine why these attacks are increasing in both quantity and quality.

How generative AI impacts common cybercrime tactics

Let’s look at four common cybercrime tactics and the ways that generative AI makes them more efficient, scalable, and effective. 

Reconnaissance

Gathering information on a target has become easier than ever before. With commercially available models trained on vast amounts of internet data, finding relevant information about a target has become incredibly efficient and scalable. Facial recognition, data scraping, and parsing of gathered data to find exploitable weaknesses can all be used against people and organizations alike.

For example, tools like OpenAI’s GPT-based models or Google’s Bard can be combined with web scraping frameworks like Beautiful Soup or Selenium to synthesize and summarize large datasets on targets. LinkedIn profiles, corporate press releases, and social media activity can all be useful in targeting. Understanding how a business operates, or spotting unusual patterns of life, is only a prompt away. Cybercriminals have reportedly leveraged such technologies to streamline OSINT collection and processing.
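
To put this in perspective, collecting and preparing public data for LLM summarization takes only a few lines of code. Here is a minimal sketch of that workflow; the URL, prompt wording, and character cap are illustrative placeholders, not references to any real campaign or tool.

```python
# Minimal OSINT-collection sketch: fetch a public page, strip it to
# readable text, and prepare an LLM summarization prompt.
# The URL and prompt wording are placeholders.
import requests
from bs4 import BeautifulSoup

def collect_public_text(url: str) -> str:
    """Download a public web page and reduce it to visible text."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    for tag in soup(["script", "style"]):  # drop non-visible noise
        tag.decompose()
    return " ".join(soup.get_text(separator=" ").split())

if __name__ == "__main__":
    text = collect_public_text("https://example.com/press-releases")  # placeholder
    # The collected text would then be handed to an LLM with a prompt such as:
    prompt = (
        "Summarize the key people, roles, vendors, and upcoming events "
        "mentioned in this text:\n\n" + text[:4000]  # arbitrary size cap
    )
    print(prompt[:500])
```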

Social engineering

By combining detailed reconnaissance with AI-generated content, even smaller groups can now mount large, highly personalized campaigns. Campaigns like these would have been difficult or resource-intensive in the past, but generative AI now makes them approachable and scalable.

Beyond this increased efficiency, generative AI has also brought new sophistication to social engineering tactics. This includes the use of deepfakes in extortion and blackmail attempts, where realistic AI-generated images or videos are used to manipulate or coerce victims. For instance, many open-source projects on GitHub can now produce live deepfakes, while ElevenLabs and other tools can clone a target’s voice from minimal input data.

Combined with the social engineering and spoofing that Europol already identified as commonplace in its 2023 IOCTA (Internet Organized Crime Threat Assessment), AI-enabled impersonation schemes take this to a new level. Already this year there have been numerous news reports of fraud against businesses and individuals, with a significant rise in impersonation schemes that have tricked employees into transferring funds or disclosing sensitive information. The problem is serious enough that the FTC launched a challenge to assess whether voice cloning can be detected or monitored.

An emerging concern is the rise of more believable pretexts built on information gathered by LLMs. Phishing chatbots have also been observed, producing email responses that are dynamic, context-aware, and automated. Traditional red flags, such as poor grammar or a lack of personalization, are no longer reliable indicators of fraud. The Anti-Phishing Working Group (APWG) recorded nearly five million phishing incidents in 2023, making it the worst year for phishing on record. While the APWG reports do not explicitly attribute this growth to AI, academic papers published over the past year describe such correlations.
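
Since content-level cues are fading, defenders are leaning more heavily on message authentication signals than on wording. As a rough illustration (the header parsing here is deliberately naive, and real gateways format RFC 8601 Authentication-Results headers in varied ways), a sketch that flags messages failing SPF, DKIM, or DMARC:

```python
# Sketch: flag emails whose Authentication-Results headers report
# SPF/DKIM/DMARC failures, since grammar-based red flags no longer work.
# Header formats vary by provider; this string matching is simplistic.
import email
from email import policy

def auth_failures(raw_message: bytes) -> list[str]:
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    headers = msg.get_all("Authentication-Results") or []
    failures = []
    for header in headers:
        for mech in ("spf", "dkim", "dmarc"):
            if f"{mech}=fail" in header.lower():
                failures.append(mech)
    return failures

if __name__ == "__main__":
    with open("suspect.eml", "rb") as f:  # placeholder file name
        failed = auth_failures(f.read())
    print("Failed checks:", failed or "none")
```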

Malware

Generative AI, particularly large language models (LLMs), has introduced powerful tools accessible to both seasoned cybercriminals and amateurs. With minimal computer science knowledge, attackers can generate code that would previously have been beyond their capability or required significant investment. For those with technical backgrounds, commercial AI has increased the speed and accessibility of code development and deployment, enhancing the capabilities of novice and expert criminals alike.

For example, in 2023, researchers from CyberArk demonstrated how generative AI could be used to create polymorphic malware: code that dynamically changes its structure to evade detection. Additionally, AI-assisted malware obfuscation, where code is rewritten or disguised to avoid detection, has been added to the MITRE ATT&CK framework, marking it as a technique in common use among attackers.

Moreover, researchers have observed generative AI being used in malware-as-a-service platforms. These platforms provide pre-built malware templates that less skilled actors can customize, effectively lowering the barrier to entry for cybercrime. 
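
On the defensive side, polymorphism is exactly why exact-match file hashes fall short and why fuzzy or behavioral signals matter more. Here is a minimal sketch of the contrast, assuming the python-ssdeep bindings are installed; the "payload" bytes and single-character tweak are purely illustrative.

```python
# Sketch: a one-byte change breaks an exact hash (SHA-256) entirely,
# which is how polymorphic variants evade signature matching, while a
# fuzzy hash (ssdeep) still scores the variant as a near-duplicate.
# Assumes the python-ssdeep bindings are installed.
import hashlib
import ssdeep

original = b"example payload " * 400  # stand-in for a file's contents
variant = original.replace(b"payload", b"pay1oad", 1)  # tiny polymorphic-style tweak

print("SHA-256 match:",
      hashlib.sha256(original).hexdigest() == hashlib.sha256(variant).hexdigest())
print("ssdeep similarity (0-100):",
      ssdeep.compare(ssdeep.hash(original), ssdeep.hash(variant)))
```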

Exploits

While amateurs may not be able to discover new vulnerabilities using generative AI alone, those with existing knowledge can use it to accelerate exploit development. AI doesn’t replace technical expertise (for now), but it acts as a force multiplier for experienced adversaries, enabling faster and more efficient development of exploits.

For example, AI can assist in reverse engineering software by analyzing disassembled code, identifying decompiled functions, and automating repetitive tasks. RevEng.AI is one company that has built a plugin for Ghidra to assist with analysis and the repetitive work of reversing. However, as previously stated, this still requires in-depth knowledge of the reverse engineering process; it simply makes people with preexisting skills more efficient.
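
The underlying "disassemble, then ask the model" workflow is easy to sketch. The RevEng.AI plugin itself runs inside Ghidra; this standalone illustration just pairs the Capstone disassembler with a hypothetical LLM prompt to show the shape of the assistance.

```python
# Sketch of the "disassemble, then ask an LLM" workflow that plugins
# like RevEng.AI's automate inside Ghidra. Uses the Capstone library;
# the byte string and prompt wording are illustrative only.
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

# A tiny x86-64 stub: push rbp; mov rbp, rsp; mov eax, 0; pop rbp; ret
CODE = b"\x55\x48\x89\xe5\xb8\x00\x00\x00\x00\x5d\xc3"

md = Cs(CS_ARCH_X86, CS_MODE_64)
listing = "\n".join(
    f"0x{insn.address:x}: {insn.mnemonic} {insn.op_str}"
    for insn in md.disasm(CODE, 0x1000)
)

# In practice the listing would be sent to an LLM with a prompt like:
prompt = (
    "Given this disassembly, suggest a descriptive function name and "
    "summarize the function's behavior:\n" + listing
)
print(prompt)
```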

Once a vulnerability has been identified, whether by fuzzing, reverse engineering, or other methods, exploiting it requires a distinct set of skills. The exploitation process varies significantly depending on the nature of the vulnerability (e.g., web-based or binary). Unless the attacker already understands techniques like heap feng shui, return-oriented programming (ROP), ASLR bypasses, and modern binary protections, it is unlikely the average cybercriminal could turn a vulnerability into a full-chain exploit with the generative AI tools currently available.

An increase in attack quality and quantity

Generative AI is causing an increase in both attack quantity and quality, and phishing is a clear example. Phishing attacks have diversified, with variants like vishing (voice phishing), smishing (SMS phishing), video call phishing, and even postal phishing becoming more common as email filters and corporate defenses improve. We’re also seeing phishing combined with other new technologies, such as QR code phishing (aka “quishing”). Generative AI allows attackers to create highly tailored phishing pretexts based on extensive reconnaissance, making these attacks far more credible and personalized.
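
Quishing is one of the few variants that can be screened mechanically before anyone scans the code with a phone. Here is a minimal defensive sketch, assuming the pyzbar and Pillow libraries and a hypothetical host allowlist:

```python
# Sketch: decode a QR code from an image and check where the embedded
# URL actually points before trusting it. Assumes pyzbar and Pillow;
# the allowlist and file name are illustrative.
from urllib.parse import urlparse

from PIL import Image
from pyzbar.pyzbar import decode

TRUSTED_HOSTS = {"example.com", "intranet.example.com"}  # hypothetical allowlist

def check_qr(image_path: str) -> None:
    for symbol in decode(Image.open(image_path)):
        url = symbol.data.decode("utf-8", errors="replace")
        host = urlparse(url).hostname or ""
        verdict = "trusted" if host in TRUSTED_HOSTS else "SUSPICIOUS"
        print(f"{url} -> {verdict}")

if __name__ == "__main__":
    check_qr("scanned_flyer.png")  # placeholder file name
```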

With generative AI making targeted phishing content easy to create, mass phishing attacks are becoming less desirable in the criminal underworld. Attackers are turning to spear phishing and other highly targeted approaches that yield higher success rates. Phishing is used all too often for coercion, fraud, credential and session theft, and malware delivery. Additionally, methods like vishing and video call-based phishing have surged, further diversifying the tactics available to cybercriminals. Cybercriminals choose the path of least resistance, so less sophisticated attacks, like vishing, are favored when they work.

We’re also seeing new kinds of attacks and new approaches to cybercrime. While traditional attacks grow more sophisticated, new attack vectors are also emerging. With open-source and deepfake-based technologies, we’re seeing tactics such as voice cloning, video call impersonations, facial deepfakes, and extortion via AI-generated content. These approaches pose unique threats, creating additional challenges for people and enterprises and necessitating new defensive measures.

Defending against generative AI attacks

Defending against these sophisticated attacks is increasingly challenging for enterprises. A layered defensive approach is critical, encompassing multiple controls and verification methods across technological, procedural, and human factors. Verification of personhood, in particular, has emerged as a major issue given AI’s ability to convincingly impersonate voices, faces, and communication styles.
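
One pragmatic control for verifying personhood is to back every high-risk verbal request, such as a wire transfer asked for over a video call, with a time-based one-time code from a device the real person controls, so a cloned voice or face alone cannot authorize anything. Here is a minimal sketch using the pyotp library; the enrollment flow and secret handling shown are assumptions, not a prescribed process.

```python
# Sketch: gate high-risk requests behind a time-based one-time code
# (TOTP) read back from the requester's own enrolled device, so a
# deepfaked voice or face alone is not enough. Uses pyotp; secret
# provisioning and storage are out of scope for this sketch.
import pyotp

# Provisioned once per employee, out of band (e.g., an authenticator app).
EMPLOYEE_SECRET = pyotp.random_base32()  # placeholder; store securely in practice
totp = pyotp.TOTP(EMPLOYEE_SECRET)

def authorize_high_risk_request(spoken_code: str) -> bool:
    """The requester must read the current code from their own device."""
    return totp.verify(spoken_code, valid_window=1)  # allow slight clock skew

if __name__ == "__main__":
    print("Code on requester's device:", totp.now())
    print("Authorized:", authorize_high_risk_request(totp.now()))
```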

At a minimum, enterprises should adopt the five functions of the NIST Cybersecurity Framework: Identify, Protect, Detect, Respond, and Recover.

A layered defensive strategy helps address the constant threat posed by these attacks, reinforcing both preventive and responsive measures. Incorporating crowdsourced security solutions offers an additional game-changing advantage in staying ahead of sophisticated threats. By tapping into a hand-picked network of top-tier security professionals, ranging from experts with experience at the world’s largest consultancies and agencies to highly skilled freelancers, organizations gain access to diverse, battle-tested expertise across industries, managed and coordinated by industry experts within Bugcrowd. That means exposure to some of the brightest minds in security, ensuring vulnerabilities and possible attack chains are uncovered faster and defenses are improved sooner.