© 2023 Cybernews - Latest tech news, product reviews, and analyses.


Threat actors can use ChatGPT to create deployable malware

New research shows hackers are exploiting ChatGPT to write usable malware and sharing their results on the dark web.

The latest report, from cybersecurity firm Check Point, backs recent findings from Cybernews' own in-house investigation into how bad actors are taking advantage of the newly released AI chatbot to find ways to exploit security vulnerabilities across the web.

The Check Point research profiled three distinct cases in which less experienced cybercriminals could easily recreate workable malware strains capable of infiltrating a network simply by following the step-by-step instructions ChatGPT provided.

The resulting malware can phish a system for user credentials, steal files and exfiltrate them to an offsite server, encrypt sensitive data, and even hold an entire network for ransom.

In some cases, more technologically advanced hackers have posted their ChatGPT query results on several underground community forums that have sprung up on the dark web since the New Year. Researchers believe it is only a matter of time before these malware strains are deployed in the wild, if some haven't been already.

The report also revealed that the chatbot has provided hackers with instructions on how to create a dark web marketplace for conducting typical illegal cyber activities, such as trading stolen credit card numbers and running other fraudulent schemes, complete with cryptocurrency payment support via API.

Earlier this week, the Cybernews research team discovered that ChatGPT would, on request, provide step-by-step instructions on various ways to successfully hack a website. The ethically run experiment was performed on the virtual training platform Hack the Box. Using the AI-generated instructions, it took the team only 45 minutes to accomplish the hack.

ChatGPT (short for Generative Pre-trained Transformer) was launched in November 2022 by the artificial intelligence research and deployment company OpenAI. Its release sparked a frenzy of social media coverage, and more than one million users have signed up to try the chatbot to date.

According to the developer's website, the ChatGPT model is trained to reject inappropriate requests. Yet both the Cybernews and Check Point research teams had no trouble obtaining this dangerous information.

When asked directly about its policy on the matter, the bot stated that although “threat actors may use artificial intelligence and machine learning to carry out their malicious activities…OpenAI is not responsible for any abuse of its technology by third parties.”

OpenAI says its mission “is to ensure that artificial general intelligence benefits all of humanity.” The company is projected to bring in $1 billion in revenue by 2024.

