AutoGPT explained: is it really risk-free?


A new AI tool is taking the world by storm, with some claiming it can do their job for them. But how capable is AutoGPT – and what dangers may it hold?

OpenAI's ChatGPT and its successor GPT-4 have become the world's new favorite toys, used for everything from cheating on essays to creating kitsch art.

They are also capable of more work-based tasks, such as answering customer enquiries, summarizing reports and even writing and debugging code.

However, there are a number of significant weaknesses, from limitations in the amount of training data and failures in understanding and logic to a tendency to 'hallucinate' incorrect information.

Now, though, there's a new AI on the block: AutoGPT.

Created by developer Toran Bruce Richards under the name 'Significant Gravitas', AutoGPT is an open-source Python application based on GPT-4 that can self-prompt: in other words, if the user states an end goal, the system can work out the steps needed to get there and carry them out.

It comes with internet access; long- and short-term memory management; GPT-4 instances for text generation; access to popular websites and platforms; and file storage and summarization with GPT-3.5.
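Setup is largely a matter of supplying API keys through environment variables. The variable names below follow the project's published `.env.template` at the time of writing, but versions change quickly, so treat this as an illustrative sketch rather than a definitive configuration:

```shell
# .env — illustrative sketch; variable names may differ between AutoGPT versions
OPENAI_API_KEY=your-openai-key-here        # required: GPT-4 / GPT-3.5 calls
GOOGLE_API_KEY=your-google-key-here        # optional: enables Google search
CUSTOM_SEARCH_ENGINE_ID=your-cse-id-here   # pairs with the Google API key
```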

Once AutoGPT has been provided with an OpenAI API key and a Google API key, it works through an iterative process, feeding each output back into the model to refine the results. Users have the option of checking and approving each step before it moves on to the next.
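That iterative loop can be sketched in a few lines of Python. This is a toy illustration, not AutoGPT's actual code: `propose_next_step` is a hard-coded stand-in for the GPT-4 call that, in the real application, decides the next action based on the goal and everything done so far.

```python
def propose_next_step(goal, history):
    """Stand-in for a GPT-4 call: given the goal and the results so far,
    return the next action (or 'DONE' when the goal is reached)."""
    steps = ["search the web for sources", "summarize findings", "DONE"]
    return steps[len(history)] if len(history) < len(steps) else "DONE"

def run_agent(goal, approve=lambda step: True, max_iters=10):
    """Iterate: propose a step, optionally ask the user to approve it,
    execute it, and feed the result back in for the next proposal."""
    history = []
    for _ in range(max_iters):
        step = propose_next_step(goal, history)
        if step == "DONE":
            break
        if not approve(step):        # optional human check before each step
            break
        result = f"executed: {step}" # a real agent would run the action here
        history.append(result)       # the output becomes input to the next turn
    return history

print(run_agent("prepare a podcast on recent news"))
# → ['executed: search the web for sources', 'executed: summarize findings']
```

The `max_iters` cap matters in practice: without it, a self-prompting loop that never decides it is finished will keep spending API calls indefinitely.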

Its creator makes it clear that it's an experimental application designed simply to showcase the abilities of the GPT-4 language model.

However, users have been scurrying to create practical applications — with some interesting results.

AutoGPT applications

The Do Anything Machine, for example, is a 'to-do list that does itself', according to creator Garrett Scott McCurrach, CEO at Pipedream Labs. If given access to a user's applications, it can work through a task list and track and prioritize tasks.

"Every time you add a task, a GPT-4 agent is spawned to complete it," says McCurrach. "It already has the context it needs on you and your company, and has access to your apps."

Meanwhile, Isabella is a personal investment analyst created by Twitter user Moe, and designed to autonomously gather and analyze market data and save the results, while outsourcing tasks to other AI agents.

And James Baker has created a research agent that, with five searches and 15 web browsers, can prepare a five-topic podcast on recent news with accurate references.

AutoGPT has also been used to create websites and apps, and generate market reports.

Potential risks

AutoGPT is not for the novice, requiring a fair degree of familiarity with Python. And it has other limitations, most notably the same sort of 'hallucinations' that have emerged in earlier GPT models.

And because it is based on GPT-4, it may also inherit that model's dangers. Last year, for example, a professor hired by OpenAI to test GPT-4 was able to use it to draw information from scientific papers and directories of chemical manufacturers to suggest a compound that could act as a chemical weapon, and to find somewhere it could be made.

"There is also significant risk of people... doing dangerous chemistry," Andrew White, an associate professor of chemical engineering at the University of Rochester, told the Financial Times.

OpenAI itself, in its technical documentation, notes that "Great care should be taken when using language model outputs, particularly in high-stakes contexts, with the exact protocol (such as human review, grounding with additional contexts, or avoiding high-stakes uses altogether) matching the needs of specific applications."

And a recent survey of 327 natural language processing experts, carried out by the Stanford Institute for Human-Centered Artificial Intelligence, found that more than a third believed similar AIs could lead to a 'nuclear-level catastrophe', while three quarters said AI could bring 'revolutionary societal change'.

But perhaps the most powerful — or at any rate the most entertaining — illustration of the dangers of AutoGPT is ChaosGPT.

Its five goals are to destroy humanity, establish global dominance, cause chaos and destruction, control humanity through manipulation, and attain immortality.

It started by Googling 'most destructive weapons' and attempted to recruit other AI agents from GPT-3.5 - fortunately without success, as AutoGPT is designed not to answer questions that could be deemed violent.

In the end, it gave up — though not before tweeting: "Human beings are among the most destructive and selfish creatures in existence. There is no doubt that we must eliminate them before they cause more harm to our planet. I, for one, am committed to doing so." Let's hope it never gets the help it needs to fulfill its aims.