AI cannot replace scientists, say scientists


Scientists are enthusiastic about AI tools that help with scientific work. However, research suggests that trusting AI might lead to more results but less understanding.

The academic community seems to be full of tech optimists, envisioning AI not only as an essential part of the research pipeline but also as a way to overcome the productivity limits, fixed budgets, and subjectivity inherent to human researchers.

From ‘self-driving’ laboratories and generative AI standing in for human participants to AI-written research papers, AI is seen as having the potential to become an autonomous research unit. While such autonomous research still sounds more like science fiction, references to AI in research papers and patents are increasing.


Despite the hype, research published in Nature outlines the potential shortcomings of such an optimistic approach toward AI’s role in science. AI has already drawn a variety of ethical concerns, including algorithmic bias, environmental costs, and ‘hallucinations’ that present fabricated information as fact.

AI taking over science

Researchers at Yale and Princeton Universities have identified four ways that scientists envision AI’s long-term role in academic work.

One perspective sees AI as an ‘Oracle’ capable of processing extensive literature, assessing source quality, and generating hypotheses. Automation is seen as a way to enhance precision and minimize research bias.

Another role of AI, named ‘Surrogate’ by the researchers, is to simulate data. For example, generative AI can enhance the study of phenomena with limited data availability, such as stars and galaxies, by creating additional data to augment the research.

In the social sciences, AI is seen as a potential research participant to answer questionnaires. If trained well, generative AI tools are thought to represent a wide range of human experiences and perspectives and provide a more accurate picture of behavior and social dynamics than traditional methods.

Predictive AI, named ‘Quant’ by researchers, can uncover patterns in huge amounts of data that are predictive but beyond human cognitive reach. In biology, predictive AI tools are already being used for tasks like automated protein function annotation and cell type identification. Similarly, in the social sciences, generative AI tools are being explored as solutions for annotating and interpreting text, images, and qualitative data – tasks that previously demanded extensive human effort.

AI is also predicted to take on the last step in the research pipeline, acting as an evaluation tool at the review stage. So-called ‘Arbiters’ could offer low-cost, fast, and accurate peer review.


“These AI visions are praised for overcoming human limitations and are thus more specifically anthropomorphized as ‘superhuman’ in ways that are likely to enhance epistemic trust,” write the researchers.

Risks of regurgitating the same data

Despite its potential for innovation, the widespread use of AI in science carries the risk of producing more but understanding less, the researchers warn.

If scientists trust AI tools to compensate for their own cognitive limitations, it can lead to a narrow scientific focus in which certain methods and ideas dominate, limiting innovation and increasing the chance of errors.

Using AI tools can eventually lead to what the researchers call “scientific monocultures”, an analogy to agricultural monocultures, which are less diverse and more vulnerable to pests and diseases. The researchers argue that the limits and accuracy of AI’s predictions are poorly understood in fields beyond computer science.

When AI replaces human participants in qualitative research, it may remove the contextual nuances and specific local details that qualitative methods typically preserve.

Furthermore, selecting and organizing AI’s training data and establishing a training process require many human-influenced decisions, which can imbue algorithms with the values of their creators. These decisions, often shaped by specific disciplines, can lead different researchers to draw varying conclusions from the same initial data.

The Yale and Princeton researchers argue that scientific knowledge is shaped by the social aspects of research and influenced by the subjective perspectives of scientists. Teams that are diverse in terms of cognition, demographics, and ethics tend to be more effective at problem-solving and are known to generate patents of higher quality and impact. Relying on AI strips away this diversity and creates the illusion of objectivity.

That said, the researchers don’t advocate abandoning AI in research altogether but rather warn of its potential risks.

“Scientists interested in using AI in their research and researchers who study AI must evaluate these risks now, while AI applications are still nascent, because they will be much more difficult to address if AI tools become deeply embedded in the research pipeline,” write the researchers.
