A stupid mistake exposes numerous members of academia using ChatGPT in their research papers, threatening the future of academic writing.
Using AI-based tools to help with writing tasks has become the new norm. However, beyond correcting grammar and style mistakes, many members of academia are apparently using ChatGPT, a chatbot based on a large language model, to write up their scientific findings in their entirety.
Paradoxically, in many cases you don’t even need special software to detect academic cheating with ChatGPT. A simple search on Google Scholar reveals numerous research papers containing the sentence fragment “As an AI language model…” This is a typical response that ChatGPT generates when it cannot answer a request directly.
For example: “As an AI language model, I don't have access to specific experimental results,” “As an AI language model, I do not have personal beliefs or emotions, but I can provide evidence-based information,” or “As an AI language model, I cannot predict the future.”
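For readers who want to reproduce the search themselves, the sketch below shows one way to automate it in Python. It is only illustrative: it assumes the third-party scholarly package as a Google Scholar client, the phrase constant and helper name are our own, and Google Scholar may throttle or block automated queries.

```python
# A minimal sketch of the Google Scholar check described above, assuming the
# third-party "scholarly" package (pip install scholarly). The phrase, the
# helper name, and the result cap are illustrative choices, not part of the
# original reporting.
from scholarly import scholarly

# Exact-phrase query for the telltale ChatGPT fragment.
TELLTALE_PHRASE = '"As an AI language model"'

def find_suspect_papers(limit: int = 10) -> None:
    """Print the first few Scholar hits containing the telltale phrase."""
    for i, pub in enumerate(scholarly.search_pubs(TELLTALE_PHRASE)):
        if i >= limit:
            break
        bib = pub.get("bib", {})
        print(f'{bib.get("pub_year", "n.d.")}: {bib.get("title", "untitled")}')

if __name__ == "__main__":
    find_suspect_papers()
```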
Finding these sentences in research papers is extremely worrying because it means that neither the reviewers nor the authors themselves bothered to read what they were supposed to have written after simply copy-pasting the text from ChatGPT.
After a quick investigation, it becomes clear that the papers written with ChatGPT span fields from the social sciences and literature to the natural sciences and even health science. The authors’ qualifications likewise range from students pursuing their degrees to established professors.
Cybernews contacted the editor of one of the academic journals hosting such a publication, the International Journal for Multidisciplinary Research, but had not received a comment at the time of writing.
The full extent of ChatGPT use in academic writing is hard to measure, and the real figures may be far higher than these revealed cases suggest.
A threat to academic integrity
Universities have long used software to check for plagiarism. However, the threat posed by ChatGPT is relatively new. AI-generated text can bypass plagiarism checkers because the AI isn’t, in fact, plagiarizing – it’s generating new text.
Chris Hathaway, a teacher, Yale graduate, and founder of Advantage Ivy Tutoring, told Cybernews that AI is not merely a potential risk to academic integrity but “a very real, ongoing threat presenting a spate of challenges.”
"In March of this year, three British academics published a research paper in a well-respected journal about the risks and rewards associated with the influence of AI on academic work. The paper was picked up and published. It was written entirely by AI. The editors of the journal were aware this was the case, but readers and reporters were not, many of whom were fooled briefly or entirely by the AI ghostwriter,”
said Hathaway.
Hathaway agrees that AI has a lot to offer in its ability to rapidly consolidate information and present it in easily digestible formats. However, he believes this functionality should be approached with caution.
AI is known to hallucinate information, so whatever it generates should be diligently fact-checked against multiple sources to avoid unfortunate outcomes caused by imprecise or outright fabricated information.
“The bottom line is, AI-related threats are real and here to stay. It’s incumbent on engineers and decision-makers in academia as well as the corporate world to do what they can to mitigate the risks by investing heavily in high-quality AI detection systems and crafting anti-AI policies that are front and center, and clear as day," said Hathaway.
AI is just a tool, but the peer-review process needs to change
Sergio Tenreiro de Magalhaes, Chief Learning Officer at Champlain College Online and an Associate Professor of Cybersecurity and Digital Forensics, is more optimistic about the future of AI and academia.
He points to all the technology scientists have created to measure reality and analyze data, such as microscopes, spectrophotometers, X-ray machines, and computer-based environments that simulate the physical behavior of complex systems.
According to him, AI is yet another such tool – one that has already been used in research for more than twenty years. Generative AI in general, and ChatGPT in particular, can write the research paper, but that is not the most significant part of the scientific process.
“Academic research papers are just the way researchers use to communicate their results to the community. What really matters is the underlying research, which is supposed to be obtained using methodologies that guarantee a high level of confidence in the results. Over time, this process has changed significantly, as our understanding of research methodologies changed, and the tools available to researchers also changed,” Tenreiro de Magalhaes told Cybernews.
The scholar agrees that publishing research was never a perfect system: there can be, and often are, errors in the research process, and there have always been unscrupulous researchers. That’s why research papers are subject to peer review.
“Unfortunately, the current peer review process is often not a serious effort to guarantee a certain degree of confidence in the quality of the paper, but rather a rubber stamp that exists only for the benefit of the publication's reputation, which can claim, to the public and to indexers, that it is peer-reviewed,”
explained Tenreiro de Magalhaes.
The number of papers submitted daily to thousands of academic publishers makes carefully reviewing them all nearly impossible. Moreover, reviewing research papers is not a task that is significantly appreciated or rewarded by either research institutions or scientific publishers.
“This is what needs to change. The peer-review process must change, to reward reviewers for their peer-reviews, and to associate accountability with that reward," he concludes.