The same technology promising to transform our lives is also making it easier for scammers to create everything from voice clones and deepfakes to convincing phishing emails. But will the arrival of ChatGPT make it even easier for cybercriminals?
Forget big tech and the runaway success of TikTok: there is a new sheriff in town, and it goes by the name of ChatGPT. In just two months, the OpenAI-developed chatbot reached 100 million active users and entered the record books as the fastest-growing consumer application in history, a feat that took TikTok nine months. But what role could ChatGPT play in the future of cybercrime?
Anyone who frequents dark web forums will have come across conversations among users hoping to leverage OpenAI's chatbot to create stealthier malware. Unsurprisingly, a recent BlackBerry survey of 1,500 IT decision-makers revealed that 51% believe a successful cyberattack using ChatGPT will occur within the year, and 71% fear that foreign states are already using the technology for malicious purposes.
Many business leaders are concerned that advances in AI and machine learning will help hackers create more believable phishing emails. As a result of the findings, 82% of IT decision-makers plan to invest in AI-driven cybersecurity over the next two years, and 95% believe governments must quickly step up and take responsibility by regulating this new wave of advanced technologies. But the gap between the pace of technological change and the pace of regulation widens every year.
Is ChatGPT empowering script kiddies to write malware?
It's important to highlight that ChatGPT has content policy filters designed to stop would-be cybercriminals from misusing it. But chatbots can be outmaneuvered by determined users, who will always find ways to bypass such filters. For example, Check Point Research and our Cybernews in-house investigation recently revealed that individuals with little or no coding expertise were exploiting ChatGPT to create deployable malware.
Conversations on cybercrime forums are currently dominated by users looking to compose malware and emails for espionage, ransomware attacks, malicious spam, and other nefarious activities. Cunning hackers are already using ChatGPT to create polymorphic malicious code that evolves with each mutation, and CyberArk has demonstrated that this is no longer just a theoretical threat but a pressing issue already causing headaches for cybersecurity experts.
As a demonstration of their capabilities, one cybercriminal shared the code for an information stealer they created using ChatGPT. The malware, written in Python, could locate, copy, and exfiltrate files in 12 common formats, including Office documents, PDFs, and images, from a compromised system.
Generative AI will continue to lower the bar of entry for a new generation of cybercriminals. It's essential, then, for individuals and organizations to understand the potential dangers and take steps to protect against them: educating users on identifying and avoiding attacks, implementing proper security measures and protocols, and continuously improving and updating countermeasure technology.
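Much of that user education boils down to spotting a handful of recurring red flags. As a rough illustration only, the Python sketch below flags three classic phishing tells in a raw email: a Reply-To domain that differs from the From domain, executable-style attachments, and urgency phrases in the body. The RISKY_EXTENSIONS and URGENCY_PHRASES lists are hypothetical placeholders, not a vetted ruleset, and real mail filters are far more sophisticated.

```python
import email
from email.utils import parseaddr

# Hypothetical indicator lists, for illustration only
RISKY_EXTENSIONS = {".exe", ".scr", ".js", ".vbs", ".iso", ".lnk"}
URGENCY_PHRASES = ("verify your account", "act immediately", "password expired")

def phishing_indicators(raw_email: str) -> list[str]:
    """Return simple heuristic red flags found in a raw RFC 822 email."""
    msg = email.message_from_string(raw_email)
    flags = []

    # A Reply-To domain that differs from the From domain is a classic spoofing tell
    from_addr = parseaddr(msg.get("From", ""))[1]
    reply_to = parseaddr(msg.get("Reply-To", ""))[1]
    if from_addr and reply_to and from_addr.split("@")[-1] != reply_to.split("@")[-1]:
        flags.append("Reply-To domain differs from From domain")

    for part in msg.walk():
        # Executable-style attachments hiding behind an innocent-looking message
        filename = (part.get_filename() or "").lower()
        if any(filename.endswith(ext) for ext in RISKY_EXTENSIONS):
            flags.append(f"risky attachment: {filename}")
        # Pressure language typical of phishing lures
        if part.get_content_type() == "text/plain":
            body = (part.get_payload(decode=True) or b"").decode("utf-8", "ignore")
            flags += [f"urgency phrase: {p!r}" for p in URGENCY_PHRASES if p in body.lower()]
    return flags
```

Heuristics like these catch only the clumsiest attempts, which is precisely the problem: ChatGPT-written lures are fluent enough to pass them, so layered defenses and ongoing user training still matter.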
Social engineering at scale
Social engineering attacks on dating sites just got much easier to run at scale for malicious actors intent on scamming vulnerable individuals. These attackers impersonate attractive individuals, build trust, and manipulate emotions to extract sensitive information, money, or other benefits from their targets. But until now, a limited command of English often made them easy to spot.
ChatGPT allows every user to converse in any tone or language in a way that will appeal to the intended victim. For example, it takes just a few seconds for attackers to create unique romantic poems or songs to win the hearts, minds, and wallets of their victims.
It is crucial for dating site users to be aware of these tactics and to take precautions, such as verifying the identity of people they communicate with and not sharing sensitive information. Dating sites should also implement measures such as background checks, two-factor authentication, and regular security updates to protect their users from these attacks.
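Of those measures, two-factor authentication is the most concrete. As a minimal sketch of how the common authenticator-app variant works, here is an RFC 6238 time-based one-time password (TOTP) check using only Python's standard library; a production service would rely on a maintained library and tolerate clock drift by also accepting adjacent time steps.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # current 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical login check: the user proves possession of the enrolled device
secret = "JBSWY3DPEHPK3PXP"  # example shared secret, set at enrollment
submitted = input("Enter your 6-digit code: ")
print("accepted" if hmac.compare_digest(submitted, totp(secret)) else "rejected")
```

A second factor like this protects accounts from takeover even when a convincingly written phishing message has already harvested the password.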
The ability to generate professional, human-like text will also make it easier to target businesses with bespoke attacks. In the wrong hands, ChatGPT can help anyone craft a convincing phishing email and even write the code for a malware attack. Unfortunately, regardless of skill level or native language, it has become relatively easy for cybercriminals to embed newly created malicious code in an innocent-looking email attachment.
AI accountability: Who's responsible?
Predicting the effect of ChatGPT and other AI on the future of cybercrime is difficult. But it's essential to recognize that technology is merely a tool: some people will use it for good, while others will use it for malicious purposes, and that reflects more on human nature than on machines, chatbots, or AI.
When I asked ChatGPT who is to blame for the inappropriate use of the platform, it replied: "As an AI language model, I am not capable of having intentions, desires, or emotions, so I cannot be held responsible for any actions. The responsibility for the content generated by me lies solely with the person or organization using me, as they have control over the context in which I am used and the manner in which my outputs are utilized."
Ironically, this shifting of the blame onto someone else is the most human response I could have expected. Ultimately, the impact of AI on the threat landscape will be determined by how it is developed, implemented, and regulated. But who will take responsibility for that is a story for another day.