A group of researchers utilized the GPT-3 deep learning language model to create effective and realistic spear phishing email campaigns. In fact, more people clicked on the links in phishing messages generated by the AI than in those written by the researchers themselves.
Recent data from Atlas VPN highlights the continuing popularity of phishing as a means of cybercrime. The data shows that the French financial group Crédit Agricole was the most imitated brand in the first half of 2021, with the brand linked to nearly 18,000 unique phishing URLs.
The brand beat tech giants Facebook and Microsoft to the top spot, a sign that financial services firms were the most common targets for phishing attacks during 2021; the sector accounted for nearly 40% of all phishing attacks recorded by the company.
“Imitating well-known and trusted brands in attempts to steal people’s personal information is a common tactic among cybercriminals,” says Ruth Cizynski, a cybersecurity researcher at Atlas VPN. “Due to the rise in digital payments and growing reliance on online banking during the pandemic, financial service brands were particularly popular in phishing attacks.”
Giveaway signs
As phishing attacks become more commonplace, organizations are doing their best to help people avoid falling foul of them. One of the most common pieces of advice is to watch for poor spelling. Many phishing attacks originate from non-English-speaking countries, so a telltale sign that things are not what they should be is imprecise English in the email.
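This kind of spelling check can be automated. The sketch below is purely illustrative and not taken from the article: it flags a message when the fraction of words falling outside a known vocabulary exceeds a threshold. The tiny word list and the 30% cutoff are hypothetical; a real filter would use a full dictionary and combine many other signals (sender reputation, link targets, headers).

```python
# Hypothetical heuristic: flag emails with an unusually high misspelling rate.
# KNOWN_WORDS and the threshold are illustrative stand-ins for a real dictionary.

KNOWN_WORDS = {
    "your", "account", "has", "been", "suspended", "please", "verify",
    "the", "details", "below", "to", "restore", "access", "bank",
}

def misspelling_rate(text: str) -> float:
    """Fraction of alphabetic tokens not found in the known-word set."""
    tokens = [t.lower().strip(".,!?") for t in text.split()]
    words = [t for t in tokens if t.isalpha()]
    if not words:
        return 0.0
    unknown = sum(1 for w in words if w not in KNOWN_WORDS)
    return unknown / len(words)

def looks_suspicious(text: str, threshold: float = 0.3) -> bool:
    """True when the misspelling rate exceeds the threshold."""
    return misspelling_rate(text) > threshold

print(looks_suspicious("Your acount has been suspnded, plese verify"))  # True
```

The catch, as the research below shows, is that language-model-generated phishing produces fluent text, so a spelling heuristic alone offers little protection against it.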
At the Blackhat USA 2021 conference, a team from Singapore’s Government Technology Agency outlined their work on using AI to produce more realistic phishing messages. The researchers utilized the GPT-3 deep learning language model to make creating effective and realistic spear phishing email campaigns much easier.
Despite the progress made by natural language processing in recent years, it has seldom been worth the effort to use it to generate phishing messages: the simple, largely formulaic kind that can be pumped out en masse has proven so effective, and is considerably easier and cheaper to produce. The equation is notably different for spear phishing, however, where specific individuals are targeted and the construction of the messages is therefore more labor-intensive.
The emails generated by the system were tested alongside emails crafted by the researchers themselves, with 200 unsuspecting guinea pigs receiving the messages, all of which contained links that, while not malicious, allowed the researchers to track clickthrough rates.
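A methodology like this hinges on being able to attribute each click to a recipient and a message variant. The sketch below is a hypothetical illustration of how such tracking links might work, not the researchers' actual tooling: each recipient receives a unique token in their link, and clicks on those tokens are recorded so that clickthrough rates can be compared per variant. The class, URL, and variant names are all invented for the example.

```python
import secrets

class ClickTracker:
    """Illustrative per-recipient tracking links for a phishing study."""

    def __init__(self, base_url: str):
        self.base_url = base_url
        self.tokens = {}      # token -> (recipient, variant)
        self.clicks = set()   # tokens that have been clicked

    def make_link(self, recipient: str, variant: str) -> str:
        # Each recipient gets a unique, unguessable token in their link.
        token = secrets.token_urlsafe(8)
        self.tokens[token] = (recipient, variant)
        return f"{self.base_url}/t/{token}"

    def record_click(self, token: str) -> None:
        # Called when the tracking endpoint sees a request for this token.
        if token in self.tokens:
            self.clicks.add(token)

    def clickthrough_rate(self, variant: str) -> float:
        # Clicked links divided by links sent, for one message variant.
        sent = [t for t, (_, v) in self.tokens.items() if v == variant]
        clicked = [t for t in sent if t in self.clicks]
        return len(clicked) / len(sent) if sent else 0.0

# Usage: one AI-generated message sent and clicked.
tracker = ClickTracker("https://study.example.org")
link = tracker.make_link("alice@example.org", "ai-generated")
tracker.record_click(link.rsplit("/", 1)[-1])
print(tracker.clickthrough_rate("ai-generated"))  # 1.0
```

In a real deployment the tokens would be served by a web endpoint that logs the request before redirecting, but the bookkeeping is the same.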
The results show that more people clicked on the links in the messages generated by the AI system than in those written by the researchers themselves. What's more, the contest was not even close: the AI won by a significant margin.
Obviously, the creation of such a model requires a good amount of time and expertise, but the operational costs once it has been established are marginal: hackers simply give the system a prompt and wait for the email to come out the other end. That significantly lowers both the costs involved and the barriers to entry, opening the technique up to a much wider pool of possible cybercriminals.
Such "as-a-service" models have become commonplace in numerous other industries and are increasingly common in cybercrime too, as hackers provide access to expertly developed tools, platforms, target lists, and even their own services to whoever wants to pay for them.
The use of AI in developing phishing emails is particularly interesting, however, as it allows attackers to move on from the mass mailings that are common today towards the more personalized approach of spear phishing, applied at mass scale. The system was used in conjunction with other AI-as-a-service tools focused on personality analysis to generate messages specifically tailored to the traits and backgrounds of each recipient.
For instance, the researchers were able to produce emails that were based upon the mentality and proclivities of each individual, with the system able to learn from the results it was producing and constantly improve the messages generated.
The researchers explained that the results were more human-seeming than many phishing emails, yet also more customized than messages usually written by human beings, which was reflected in the high clickthrough rates achieved.
While the results were impressive, the researchers are at pains to keep expectations in check: not only was their sample quite small, but the targets were also fairly homogeneous in terms of their location, job profile, and so on. The messages were also generated by people with a degree of inside knowledge of the targets, so outsiders would have quite a bit more work to do in order to replicate the results.
Nonetheless, the results are an impressive indication of the progress being made in raising the quality of phishing attacks well above what is typical today, which has implications not only for spear and whale phishing campaigns, but also for the mass-market mailings that make up the bulk of phishing attacks.