People want journalists to be transparent about their use of AI. However, trust in the media quickly drops after they disclose that they use AI applications. And yet, AI seems to be the future of newsrooms.
Together with 50 other media professionals gathered in Atlanta for the annual Online News Association conference, I was asked to create a vision of how a newsroom might look in 2030.
My group developed Verbinden (German for "connect") – an interactive avatar news assistant that would tailor news feeds to individual interests and preferences. We designed it to help newsrooms, too: by talking with its human users, the avatar could forward their questions and concerns to reporters, resulting in more community-focused journalism.
While all six projects were quite different, they had a few things in common. We believe the future of news is hyper-personalized and curated by AI algorithms. Every news consumer will get a different feed based on their behavior and perhaps even their moods. In exchange for that experience, users will be asked to trade their data – including letting newsrooms read their moods or even brain waves.
It pains me to say there won't be many jobs available in these future newsrooms. Hacks/Hackers, a non-profit that brings journalists and technologists together, also tasked us with devising a revenue model for such media startups and drawing an organizational chart of 30 people. Journalists and other editorial staff would make up no more than 30% of the workforce – in most cases, even fewer.
But before we build that future newsroom, there’s a significant obstacle to overcome.
People don’t trust AI
Audiences already have low trust in journalists, and reliance on AI to cut costs only exacerbates this problem.
Trusting News, a research organization that aims to help journalists rebuild trust, says that 94% of people want newsrooms to be transparent about whether they use AI applications.
Your instinct might be to say, "Okay, let's tell them about it." But once you do, their trust in your work drops even further.
They might be somewhat OK with journalists using AI tools to transcribe interviews and check grammar, but God forbid you let AI do data analysis or introduce an AI newscaster. Publishing AI stories without any human review is a no-go.
AI labels hurt content
On social media, you can now label content that was made using AI. However, brands and newsrooms should be wary of AI content in general. First, Google and other platforms claim (and appear) to prioritize human-made content in their ranking algorithms.
And, again, there's trust. Researchers from the University of Zurich concluded that news consumers are skeptical of AI-written headlines, which they suspect may be inaccurate.
Their article, published in the peer-reviewed journal PNAS Nexus, describes two online experiments with 4,976 US and UK participants investigating the effect of labeling headlines as AI-generated.
Readers are less likely to trust headlines labeled as AI-generated and to share those articles – regardless of whether the headlines are accurate.
The researchers also argue that the more AI labels people see, the less likely they are to go online for trustworthy information, as they come to believe the internet is saturated with AI-generated content.
"Past work on 'automated journalism' and 'robot journalism' has shown that people rate human-written news as much more readable, of slightly higher quality, and as equally credible as computer-written news," the researchers explain.
But who cares?
Journalists are using AI tools, whether you know it or not. Hopefully, they’ll never publish any AI-assisted material without a human review.
However, as with many previous waves of "great" cost-cutting discoveries, we (journalists and news consumers) might not have a choice.
Industry rumors suggest that around 10% of US journalists were laid off in the last year. Rumors are rumors, but we know layoffs are hitting the news industry hard: Business Insider, Forbes, Time, NBC News, MSNBC, and Vice Media, among other outlets, have been letting people go.
The temptation of AI is simply too great. AI is not yet more intelligent than we are, and certainly not more creative – but it works fast and doesn't complain.
By the way, have you met our AI Joe – an AI newscaster? We promise he’s just reading the news out loud – the scripts are written and fact-checked by the humans at Cybernews.