From Turing's test to ChatGPT: a brief history of AI


While ChatGPT’s release to the world catalyzed the generative AI revolution that we’re currently experiencing, the path to this point has been much longer than you might think.

AI's origins go all the way back to the mid-20th century, when computing technology was in its infancy. In 1950, British mathematician and logician Alan Turing proposed a test that now bears his name: the Turing test.

The test – a measure of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human – introduced the concept of machine intelligence to the wider world. In the Turing test, a person converses with an agent – either a human or a computer, they don’t know which – by asking it questions, and must judge from its responses whether they’re talking to a person or a machine.

The Turing test laid the groundwork for what would become known as AI. Researchers set out to develop technologies that could pass it – machines capable of convincing humans they were talking to another person.

Artificial intelligence is born

At the time of the Turing test, the phrase “artificial intelligence” didn’t exist. The term wasn’t born until a conference held at Dartmouth College in 1956.

Organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, all of whom were leading researchers in the new field of computing, the conference brought together academics with the goal of exploring ways to make a machine simulate aspects of human intelligence. The meeting was a comparative flop, but it’s widely recognized as the birthplace of AI as a field of research.

By the 1960s, researchers had begun making their first forays into developing AI. The main focus was rule-based systems, also known as symbolic AI – an approach that tried to replicate human intelligence by programming rules for decision-making into machines. Early AI programs such as SHRDLU and ELIZA captured the public’s imagination and gave the impression that full AI was just around the corner.

Yet it wasn’t. In the 1970s, the field entered an AI winter, triggered by the withdrawal of government funding for AI research in the US and UK. AI’s time in the shadows would last for around a decade.

Leaving winter

The 1980s saw a resurgence in AI research, kickstarted by Japan pouring money into the technology’s development – a move that spurred other governments into a funding race of their own.

The decade also brought an overhaul of how AI worked. Symbolic AI fell out of fashion, replaced by machine learning – an approach in which algorithms are trained on data, enabling them to ‘learn’ and improve their performance over time.

Machine learning was made more powerful by the backpropagation algorithm, a method for training neural networks – computational models loosely inspired by the human brain. Backpropagation was popularized by a 1986 paper by David Rumelhart, Geoffrey Hinton, and Ronald Williams, and became the de facto way to train neural networks for a decade or more.
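To make that concrete, here is a minimal sketch of backpropagation training a tiny neural network on the XOR problem – written in Python with NumPy as a modern stand-in, not anything taken from the 1986 paper; the network size, data, and learning rate are illustrative choices.

```python
# Minimal backpropagation sketch: a 2-4-1 sigmoid network learning XOR.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a classic problem that needs a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros((1, 4))   # hidden layer weights
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros((1, 1))   # output layer weights

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for step in range(5000):
    # Forward pass: compute predictions layer by layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the prediction error back through the
    # network with the chain rule (squared-error loss, sigmoid units).
    delta_out = (out - y) * out * (1 - out)
    delta_h = (delta_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates to the weights and biases.
    W2 -= lr * h.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ delta_h
    b1 -= lr * delta_h.sum(axis=0, keepdims=True)

print(out.round(2))  # predictions should end up close to [0, 1, 1, 0]
```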

In 1997, IBM’s chess computer Deep Blue beat world champion Garry Kasparov in a six-game match, which once again made people think that a Jetsons-like future of AI overtaking humans was imminent. However, AI’s potential was limited by computational power: even at its most powerful, AI still ran on CPU chips, and that made it slow.

Garry Kasparov. Image: Shutterstock

GPU processing and the new millennium

Nvidia, founded in 1993, helped catalyze the AI revolution further. The company developed GPUs (graphics processing units), chips initially used to make PC games run more smoothly – but AI researchers quickly realized they could be put to work on other problems. GPUs made their mark at the 2012 ImageNet competition, which tested AI teams’ ability to train computer vision algorithms to identify objects in pictures.

A team led by Geoffrey Hinton entered the competition with a deep learning model powered by GPUs and won handily, making roughly half as many mistakes as the second-place competitor. Within three years, every entrant in the ImageNet competition was using GPUs.

By now, the race was on to develop AI using these powerful new chips. OpenAI was set up in 2015 to counteract the fear that Google, which had purchased the London-based AI company DeepMind, was cornering the market. OpenAI was initially established as a non-profit venture that would develop AI responsibly, bankrolled by $1 billion in pledged funding from backers including Elon Musk.

Within three years, Musk had left the organization, which went on to pivot to a capped-profit structure. Meanwhile, Google and other competitors in the space were pushing ahead with ever more capable AI tools. But it took another paper, written in 2017, to kickstart the next era of AI.

Attention is all you need

In June 2017, an academic paper called ‘Attention is all you need’ was published on the arXiv preprint server. Written by Google researchers, it introduced a new neural network architecture: the transformer – the “T” in ChatGPT (generative pre-trained transformer). At the heart of the transformer is ‘attention’, a mechanism that lets a model weigh how relevant each word in a sequence is to every other word.
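As a rough illustration only – not code from the paper itself – here is a minimal sketch of scaled dot-product attention, the operation the paper’s title refers to, written in Python with NumPy; the sequence length, vector size, and random inputs are made-up values for the example.

```python
# Minimal sketch of scaled dot-product attention (illustrative only).
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each row of Q attends over the rows of K and V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax -> attention weights
    return weights @ V                                # weighted mix of the values

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))                           # 3 "words", each a 4-dim vector
out = scaled_dot_product_attention(x, x, x)           # self-attention: Q = K = V
print(out.shape)                                      # (3, 4): one context-aware vector per word
```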

OpenAI built on the transformer architecture and released GPT-2, a large language model, in February 2019 – though it initially held back the full model, worried it would be used nefariously.

GPT-3 followed in 2020, and then GPT-3.5, the technology underpinning the original version of ChatGPT. OpenAI has since released GPT-4, which powers the paid version of ChatGPT, ChatGPT Plus.

The chatbot interface has become the standard way we interact with generative AI, from ChatGPT to Google Bard and Microsoft Bing Chat – sitting alongside image generators like Midjourney and DALL-E 2. Today, some of the very people developing these tools are warning about AI’s power to disrupt the world and calling for the burgeoning technology to be regulated.

What will happen next is anybody’s guess – but the history of AI is undoubtedly far from over.
