The National Institute of Standards and Technology (NIST) will publish guidelines on the safer use of artificial intelligence (AI), as reports emerge that cybercriminals are using new technologies like ChatGPT to write phishing emails and even malware.
NIST released a statement announcing it would showcase the AI Risk Management Framework on January 26. The event can be watched via an embedded link on its website.
“NIST is developing a framework to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI),” it said, describing it as “intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.”
Some will feel the announcement comes not a moment too soon, while others may wonder whether it will do any good. Check Point recently revealed growing evidence of Russian cybercriminals conspiring on the dark web to gain illegal access to ChatGPT, the AI-driven text generator that can write anything from propaganda to low-level malware code.
“In underground hacking forums, hackers are discussing how to circumvent IP address, payment card, and phone number controls – all of which are needed to gain access to ChatGPT from Russia,” said Check Point, noting that the software program is officially banned in the disgraced superstate.
“It is not extremely difficult to bypass OpenAI’s restricting measures for specific countries to access ChatGPT,” added Check Point. “Right now, we are seeing Russian hackers already discussing and checking how to get past the geofencing to use ChatGPT for their malicious purposes.”
Why this is scary
In a podcast released around the same time, the cybersecurity company put ChatGPT to the test and found it could write not only phishing emails – which cybercriminals use to lure the unwary into clicking on malicious links – but also the code to create the malware itself.
Check Point’s uncomfortable conclusion was that once cybercriminals figure out how to bypass access and usage restrictions set on ChatGPT by its designer OpenAI, they will be able to use the software program to swell their ranks.
“I think AI will make coding much more common,” said a Check Point threat intelligence researcher. “It lowers the bar for coding so if you want to write malicious code you can write many examples and you don’t have to be a proficient coder and that means that more people can write malware.”
Check Point even speculates that intelligence agencies on both sides of the geopolitical divide – Russia’s GRU and the US National Security Agency (NSA) – could, in theory, each develop their own versions of the software, allowing them to raise armies of threat actors.
But it also said ChatGPT could be used to bolster cybersecurity, helping analysts tackle problems such as obfuscation, whereby a threat actor conceals malicious code.
Next week’s official launch of the voluntary NIST framework will feature talks by Under Secretary of Commerce for Standards and Technology and NIST Director Laurie Locascio, Deputy Secretary of Commerce Don Graves, and White House Office of Science and Technology Policy Deputy Director Alondra Nelson.
NIST hopes the guidelines will help make AI technologies more trustworthy and less risky – though whether the voluntary rules will have the desired effect remains to be seen.