OpenAI’s ChatGPT API, designed to incorporate artificial intelligence (AI) functionality into existing apps and software, comes with considerable cybersecurity risk, warns software supply chain security firm Endor Labs.
Its research team found that while more than 900 software packages now use OpenAI’s technology to enhance their functionality, existing large language models (LLMs) correctly identify malware in only 5% of cases, or one instance in 20.
While acknowledging that AI has made impressive advances since ChatGPT went mainstream last November, Endor Labs urges organizations of all sizes to exercise due diligence when selecting software packages.
It cites the combination of AI’s surging popularity and the lack of historical data on its programs as fertile ground for potential cyberattacks.
Commenting on his team’s findings, Henrik Plate, lead security researcher at Endor Labs, said: “The fact that there’s been such a rapid expansion of new technologies related to artificial intelligence, and that these capabilities are being integrated into so many other applications, is truly remarkable — but it’s equally important to monitor the risks they bring with them.”
He added: “These advances can cause considerable harm if the packages selected introduce malware and other risks to the software supply chain.”
Software and cybersecurity specialists can access full details of Endor Labs’ findings here.