Artificial intelligence (AI) already matches the top 1% of human thinkers on a standard test of creativity, according to new research.
The study, conducted by the University of Montana (UM) and its partners, found that ChatGPT, powered by the GPT-4 large language model, outscored the majority of human test-takers and can rival the top 1% of human thinkers.
The study used the Torrance Tests of Creative Thinking (TTCT), a well-known tool for assessing creativity. Researchers generated eight responses with ChatGPT and submitted them alongside answers from a control group of 24 UM students drawn from entrepreneurship and personal finance classes. While completing the test, ChatGPT couldn’t “cheat” by accessing information online or in public databases.
All submissions were scored by Scholastic Testing Service, which didn’t know that AI was involved. The scores were then compared with those of 2,700 college students nationwide who took the TTCT in 2016.
The results placed ChatGPT in the top percentile for fluency – the ability to generate a large volume of ideas – and for originality. However, the AI slipped to the 97th percentile for flexibility – the ability to generate different types and categories of ideas.
“For ChatGPT and GPT-4, we showed for the first time that it performs in the top 1% for originality,” said Dr. Erik Guzik, an assistant clinical professor at the UM’s College of Business who directed the study. “That was new.”
A number of the 24 students managed to reach the top 1% as well. However, ChatGPT outperformed the vast majority of college students nationally.
The researchers were careful not to overstate the results.
“But we shared strong evidence that AI seems to be developing creative ability on par with, or even exceeding, human ability,” Guzik said.
However, ChatGPT itself disagrees. While presenting the work at the Southern Oregon University Creativity Conference, Guzik shared ChatGPT’s generated answer to the question: What does good performance on the TTCT mean?
“ChatGPT told us that we may not fully understand human creativity, which I believe is correct,” he said.
Is it time for better tests? ChatGPT thinks so, too.
“It also suggested we may need more sophisticated assessment tools that can differentiate between human and AI-generated ideas,” Guzik said.
This is not the first time researchers have found AI outperforming humans, though many had long considered human creativity beyond its reach.
How does the creativity test work?
The TTCT test uses prompts that mimic real-life creative tasks. For instance, can you think of new uses for a product? Or improve it?
“Let’s say it’s a basketball,” Guzik said. “Think of as many uses of a basketball as you can. You can shoot it in a hoop and use it in a display. If you force yourself to think of new uses, maybe you cut it up and use it as a planter. Or with a brick you can build things, or it can be used as a paperweight. But maybe you grind it up and reform it into something completely new.”
Guzik expected that ChatGPT would be good at creating ideas for a fluency evaluation because that’s what generative AI does. And it indeed excelled at responding to prompts with relevant, useful, and valuable ideas in the eyes of the evaluators.
More surprising was how well it did at generating original ideas. Evaluators have lists of common or expected responses for specific prompts, yet the AI consistently came up with fresh and unexpected answers, earning it a place in the top percentile.
“For me, creativity is about doing things differently,” Guzik said. “One of the definitions of entrepreneurship I love is that to be an entrepreneur is to think differently. So AI may help us apply the world of creative thinking to business and the process of innovation, and that’s just fascinating to me.”
The professor now thinks that AI is a game changer in entrepreneurship and regional innovation. The UM College of Business is considering teaching about AI and incorporating it into coursework.
“I think we know the future is going to include AI in some fashion,” Guzik said.
Guzik himself has long been interested in creativity, dating back to his involvement in a program for talented and gifted students in seventh grade. He became acquainted with the Future Problem Solving process developed by Ellis Paul Torrance, the pioneering psychologist who also created the TTCT. Guzik remains active in the organization.
The idea to test ChatGPT’s creativity came after Guzik and his colleagues spent the past year experimenting with it.
“We had all been exploring with ChatGPT, and we noticed it had been doing some interesting things that we didn’t expect,” he said. “Some of the responses were novel and surprising. That’s when we decided to put it to the test to see how creative it really is.”
Guzik was assisted in the work by Christian Gilde of UM Western and Christian Byrge of Vilnius University.