This AI Wrote Better Phishing Emails than Humans in a Recent Test

At the Black Hat and Defcon security conferences in Las Vegas this week, a team from Singapore’s state technology agency presented a recent experiment in which an AI service generated phishing emails that were sent to 200 of their colleagues. The links in the messages weren’t actually malicious; they simply reported click-through rates back to the researchers. Strikingly, considerably more people clicked the links in the AI-generated messages than in the human-written ones.

In the study, the researchers found that they could use the deep learning language model GPT-3, along with other AI-as-a-service platforms, to significantly lower the barrier to entry for creating large-scale spearphishing campaigns.
Eugene Lim, a cybersecurity specialist with the Government Technology Agency, said:

“Researchers have indicated that AI requires a certain amount of expertise. It takes millions of dollars to train a really good model. But once you put it on AI-as-a-service, it costs a few cents and it’s really easy to use: just sign up and write. You don’t even have to run any code, you just give it a prompt and you get output. This lowers the barrier to entry for a much larger audience and increases the pool of potential spearphishing targets. Suddenly every single email can be personalized en masse for each recipient.”
The researchers used OpenAI’s GPT-3 platform in combination with other AI-as-a-service products that focus on personality analysis to generate the phishing emails. These services use machine learning to predict a person’s tendencies and mindset from behavioural inputs. By running the outputs through multiple services, the researchers were able to build a pipeline that prepared and refined the emails before they were sent. They say the results sounded “strangely human” and that the platforms automatically supplied surprisingly specific details.
The results were quite impressive, but it should be kept in mind that the test was run on a limited number of users, and the target pool was fairly homogeneous in terms of employment and geographic region. Moreover, both the human-written messages and those generated by the AI-as-a-service pipeline were created by office insiders rather than outside attackers trying to strike the right tone remotely.
Tan Kee Hock, a cybersecurity specialist with the Government Technology Agency, said:

“There are many variables that need to be considered.”
In a statement to WIRED, OpenAI said:

“The abuse of language models is an industry-wide problem that we take very seriously as part of our commitment to the safe and responsible use of AI. We grant access to GPT-3 through our API and review every production use of GPT-3 before it goes live. We impose technical measures, such as rate caps, to reduce the likelihood and impact of malicious use by API users. Our active monitoring systems and audits are designed to detect potential indications of abuse as early as possible, and we are continuously working to improve the accuracy and effectiveness of our security tools.”
If there is any good news, it is that the results encouraged the researchers to think more deeply about how AI-as-a-service could figure in future phishing campaigns, and how to defend against it.