With ChatGPT Search, publishers risk having their content misattributed, and users risk being misinformed.
Columbia's Tow Center for Digital Journalism conducted a study of 20 publishers, including some that partner with OpenAI, some that are suing the company, and some that either allow or block ChatGPT's search crawler.
The researchers asked the chatbot to find the source of selected quotes taken from 10 different articles from each publisher.
The quotes were first pasted into Google or Bing, where the search engine would provide the source article in the top three results. However, the same couldn't be said about ChatGPT Search.
The experiment showed results that were "not promising for news publishers." The publishers’ original content was cited incorrectly in many cases.
The researchers' main concern is that more and more people are relying on AI platforms for search. OpenAI already plans to expand its services to business and education accounts, which could significantly impact publishers.
The experiment showed that "ChatGPT returned partially or entirely incorrect responses on a hundred and fifty-three occasions." The chatbot acknowledged that it couldn't answer a query accurately only seven times.
"Only in those seven outputs did the chatbot use qualifying words and phrases like 'appears,' 'it's possible,' or 'might,' or statements like 'I couldn't locate the exact article,' said the researchers.
While "conventional" search engines indicate when no results were found in the search, ChatGPT Search tries to wing it, often providing false answers.
One example of this was ChatGPT Search incorrectly attributing a quote from the Orlando Sentinel to a Time article. The researchers noted that more than a third of the queries were misattributed.
While some might be familiar with ChatGPT's "false confidence," many users trust the tool entirely and make decisions based on incorrect information.
Another concerning observation made during the study was how ChatGPT Search behaves when presented with content from publishers that have blocked OpenAI's crawlers.
One such publisher is The New York Times. When researchers submitted a quote from one of its articles, the chatbot pointed to a third-party website that had plagiarized the content in full. This also calls into question the legitimacy of ChatGPT Search's sources.
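For context, publishers that block crawling typically do so through their site's robots.txt file. A minimal sketch, assuming OpenAI's documented user agents ("OAI-SearchBot" for ChatGPT Search and "GPTBot" for model training), might look like this:

User-agent: OAI-SearchBot
Disallow: /

User-agent: GPTBot
Disallow: /

As the Tow Center's New York Times example suggests, though, blocking the crawler only prevents direct access to the publisher's site; it does not stop the chatbot from surfacing the same content through third-party copies.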
After the researchers contacted OpenAI, the company stated that "the study represents an atypical test of our product," but said it continues to work on improving search results.