Research suggests that students' extensive use of ChatGPT may be linked to weaker cognitive skills, which are crucial for success in the academic world.
Generative AI tools like ChatGPT and Gemini have recently taken the world by storm, but the productivity gains they promise haven't been equally welcomed in every sector.
In education, AI has sparked a debate over whether its use encourages cheating, even as discussions continue about its potential to enhance learning and promote equality in the classroom.
Researchers at the Department of Psychology at Lund University in Sweden conducted a study looking into how students' use of AI tools correlates with their performance.
Lower cognitive skills make students turn to AI
So-called executive functioning (EF) refers to the cognitive processes essential for planning, focus, and attention, including working memory, inhibition, and cognitive flexibility. These processes are vital for completing tasks that require significant mental effort, such as writing papers in an educational setting.
Lower EF levels are associated with poorer academic performance, difficulties with planning, problem-solving, and social interaction, and with completing schoolwork. In the long term, strong EF skills contribute to academic success, stable employment, and resistance to substance abuse later in life.
According to the Nordic Youth Barometer report in 2023, Swedish adolescents used generative AI tools as study aids for assignments that challenged their EF.
“Students with more EF challenges found these tools particularly useful, especially for completing assignments,” said Johan Klarin, a school psychologist and research assistant at the Department of Psychology at Lund University.
“This highlights these tools’ role as a potential support for students struggling with cognitive processes crucial for academic success.”
While AI tools generally boost productivity for professionals with lower skill levels, their use in learning environments can have a downside, as increased efficiency in completing assignments may come at the cost of genuine learning.
According to researchers, overreliance on AI tools could hinder or delay the development of EFs and students' learning. “This should be carefully considered when implementing AI support in schools, and the effects should be studied longitudinally,” said project leader Dr. Daiva Daukantaitė, an associate professor at Lund University.
When assignments challenge students, they use AI
The researchers conducted two studies: one with 385 adolescents aged 12 to 16 from four primary schools in southern Sweden and another with 359 students aged 15 to 19 from a single high school.
The results showed that about 15% of younger teens and 53% of older students used AI chatbots. This may be because older students have more complex assignments, leading to more frequent AI use.
The studies, conducted nearly a year apart, also suggest that AI usage became more popular over time. Notably, students with weaker EFs found AI significantly more helpful for schoolwork, likely because they benefited more from the productivity boost.
Is it cheating?
The use of generative AI tools in higher education is widespread. A German study in 2023 showed that two-thirds of university students use these tools for tasks like text analysis, creation, problem-solving, and decision-making.
A similar survey in Sweden found that 75% of adolescents and young adults aged 15–24 use generative AI for educational purposes, such as structuring presentations and papers, writing texts, studying, and social support.
In an educational setting, there is a fine line between what counts as cheating and what does not. Using ChatGPT to complete entire assignments or solve problems and then submitting the results as one's own is considered cheating.
However, when students critically engage with the AI-generated content and add their own understanding and effort, it can be viewed as a legitimate aid. Responsible use of ChatGPT, particularly for students struggling with executive functioning, could include using it for research, idea generation, and grasping complex concepts.
“Educators should provide guidelines and frameworks for appropriate use. Teaching digital literacy and ethical considerations is also crucial,” Klarin said.
“The line between cheating and using AI tools as an aid should be drawn based on the intent and extent of use,” Klarin added.
Cybernews has previously reported on numerous research papers being written using ChatGPT and circulated in academic circles, threatening the future of academic writing.
Authors of research papers have been known to leave telltale fragments in their texts, such as “As an AI language model…” This is a typical response ChatGPT generates when it cannot precisely answer a request.
The plagiarism and similarity detection service Turnitin has also reported that of the more than 200 million papers it analyzed, over 22 million, or approximately 11%, contained at least 20% AI-generated writing.