
Britain's National Cyber Security Centre (NCSC) is warning companies that integrating AI chatbots like OpenAI’s ChatGPT and Google’s Bard poses an increased security risk to business operations.
Research shows that artificial intelligence-driven chatbots can easily be tricked into performing harmful tasks via algorithms that can generate human-sounding interactions, according to the NCSC.
The warning comes just days after both Google and Microsoft’s backed OpenAI announced a plethora of new AI tools – integrated with each of their respective large language models (LLM) – and geared directly for large enterprise solutions.
The NCSC said that part of the issue comes from the fact that the technology is still so new, exacerbating the “risks of working in a very rapidly changing and evolving market.”
“The global tech community still doesn‘t yet fully understand LLMs capabilities, weaknesses, and (crucially) vulnerabilities,” it said.
“Whilst there are several LLM APIs already on the market, you could say our understanding of LLMs is still ‘in beta', albeit with a lot of ongoing global research helping to fill in the gaps,” the NCSC explained.
As companies continue to plug more AI-powered elements into their business processes, the risk of vulnerabilities facing these organizations will also increase.
Dangers of AI subversion
Besides possible threats from AI-powered tools performing tasks such as internet searches, customer service work, and sales calls, the researchers found they could repeatedly subvert the chatbots by feeding them rogue commands.
Additionally, the academics found they were able to continuously fool the chatbots into circumventing their own built-in guardrails.
For example, the NTSC said an AI-powered chatbot deployed by a bank could be tricked into making an unauthorized transaction using certain structured queries input by hackers – who have already embraced the technology for nefarious gain.
Besides SQL injection, LLMs have already been proven to successfully carry out a prompt injection attack on account holders whose banks use an LLM assistant to answer questions or give instructions about their finances.
In that instance, an attacker might be able to send a user a transaction request, with the transaction reference hiding a prompt injection attack on the LLM.
"When the user asks the chatbot “Am I spending more this month?” the LLM analyzes transactions, encounters the malicious transaction, allowing the bad actor to reprogram it into sending the user’s money to the attacker’s account," the NCSC explained.
Researchers and vendors also found that an LLM inherently cannot distinguish between an instruction and data provided to help complete the instruction.
In one example, the prompt used to create an organization's LLM-powered chatbot (with appropriate coaxing from a hostile user) was subverted to cause the chatbot to state upsetting or embarrassing things, which then quickly appeared on social media, the NCSC said.
Test, test, test
A Microsoft survey from early this year showed that almost 90% of employees are hungry for better digital tools to automate tasks, but experts question at what risk to their employers.
"Instead of jumping into bed with the latest AI trends, senior executives should think again," said Oseloka Obiora, CTO at RiverSafe, a London-based cybersecurity firm.
"Assess the benefits and risks as well as implement the necessary cyber protection to ensure the organization is safe from harm,” he said.
The race to integrate AI into business practices will have "disastrous consequences" if business leaders fail to introduce the necessary checks, Obiora said.
The NCSC said that companies building business services that use LLMs need to be careful “in the same way they would be if they were using a [software] product or code library that was in beta."
"They might not let that product be involved in making transactions on the customer's behalf, and hopefully wouldn't fully trust it. Similar caution should apply to LLMs," the Centre said.
The Centre suggests testing the LLM-based applications using different techniques, such as social engineering, to convince models to disregard their instructions or find gaps in instructions, understanding there are no surefire mitigations, and then basing the business risk acceptance score on that.
No brakes for AI and big business
The warning coincides with OpenAI release of its new ChatGPT-4 driven enterprise business platform on Monday.
The AI company stated the product will help businesses to “craft clearer communications, accelerate coding tasks, rapidly explore answers to complex business questions, assist with creative work, and much more.”
Not to be left out, Google also introduced 20 new AI tools for an enterprise-oriented platform at a Next Gen event the following day and plans to create even more products geared towards small and medium-sized businesses.
The tech giant also announced the December release of its most capable AI chatbot yet, one five times more powerful than the rival 4th generation GPT.
A recent Reuters/Ipsos poll found that many corporate employees already use tools like ChatGPT to help with basic tasks, such as drafting emails, summarizing documents, and doing preliminary research.
Research shows workers are already inputting data into GenAI tools an average of 36 times per day – but oftentimes that data is considered sensitive, putting the company at security risk.
More from Cybernews:
Credentials of NASA, Tesla, DoJ, Verizon, and 2K others leaked by workplace safety organization
Kyrgyzstan bans TikTok to protect children’s mental health
Meta uses your data to train its AI. Can you opt-out?
Musk’s X faces thousands of arbitration cases, chooses to stall
Fund managers avoided Nvidia shares, missed stunning 230% rally
Subscribe to our newsletter
Your email address will not be published. Required fields are marked