Mental health company Koko's AI chatbot experiment on patients sparks public outcry

As AI becomes more integrated into our lives, concerns grow over the ethical and legal regulations that should accompany its use. These concerns became very real for mental health company Koko, which provided AI-assisted counseling to roughly 4,000 people without informing them.

Koko is a peer-to-peer nonprofit mental health service that connects people in need of counseling with volunteers through platforms like Telegram and Discord. Typically, users chat with a Koko bot, which forwards their messages to an anonymous volunteer, who then responds.

But not this time. The experiment, which spanned about 30,000 messages, employed a ‘co-pilot’ approach: once a person in need typed a message, it was forwarded to a volunteer, who could then use OpenAI's GPT-3 large language model to help compose an answer. The model is capable of writing anything from poems to code and producing articulate responses on a wide variety of topics.
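For readers curious what such a ‘co-pilot’ workflow might look like in practice, below is a minimal illustrative sketch. It is not Koko's actual code: it assumes the pre-1.0 OpenAI Python SDK and the text-davinci-003 GPT-3 model, and the prompt wording and human-review step are assumptions made for illustration.

```python
# Minimal sketch of a "co-pilot" flow: the model drafts a reply and a
# human volunteer reviews, edits, or discards it before anything is sent.
# NOT Koko's implementation; model name and prompt are assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder


def draft_reply(user_message: str) -> str:
    """Ask GPT-3 for a draft response to a user's message."""
    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3 model available in early 2023
        prompt=(
            "Help a peer-support volunteer draft a compassionate reply.\n"
            f"User's message: {user_message}\n"
            "Draft reply:"
        ),
        max_tokens=200,
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()


def copilot_flow(user_message: str) -> str:
    """Generate a draft, then let a human volunteer approve or rewrite it."""
    draft = draft_reply(user_message)
    # The defining step of the co-pilot approach: a person stays in the loop
    # and decides what, if anything, actually reaches the user.
    edited = input(f"Draft:\n{draft}\n\nEdit, or press Enter to send as-is: ")
    return edited or draft
```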

Robert Morris, co-founder of Koko, said the experiment allowed the service to provide help to about 4,000 people.

Morris initially posted a few tweets that strongly implied a lack of informed consent among users, such as: “Once people learned the messages were co-created by a machine, it didn’t work. Simulated empathy feels weird, empty.”

He added that although messages composed with AI were rated significantly higher than those written by humans alone, users were uncomfortable with the lack of genuine compassion and empathy coming from a machine.

“It’s also possible that genuine empathy is one thing we humans can prize as uniquely our own. Maybe it’s the one thing we do that AI can’t ever replace,” Morris tweeted.

Twitter users replied to the thread criticizing the experiment as unethical. They argued that it inherently breaks the social contract between a therapist and a patient, violating trust and making those seeking help feel “dehumanized.”

Morris later clarified that his initial tweet referred to himself and his team, not to users. He also argued that the feature was opt-in and that users knew about it during the few days it was live.

However, it remains unclear what information users were given before participating and whether they were fully informed of the potential harms and benefits. The lack of such consent would render the experiment illegal in a medical context, although online mental health services operate in a legal grey area outside formal medical settings. The experiment did not receive approval from an Institutional Review Board (IRB), meaning it ran without formal oversight.

Morris also told Vice's Motherboard that the experiment was exempt from informed consent requirements (which would involve signing a document) because Koko didn't use any personal information.

