Why you should think twice before disclosing personal information to an AI chatbot


Social media companies have launched AI-based chatbots and are set to expand their offerings with various types of AI characters. While virtual companions can provide psychological benefits, they also raise concerns about dependency and risks to data security.

People turn to chatbots for various reasons: some want to try them out, some are looking for friendship, and others want to learn a new skill or even find a romantic relationship.

The vision of humans having romantic relationships with AI was depicted in the iconic movie "Her," released over a decade ago. In it, a man named Theodore, played by Joaquin Phoenix, falls in love with an operating system named Samantha, voiced by Scarlett Johansson.

What seemed like a distant future back in 2013 is now edging closer to reality thanks to advancements in AI. Relationships between humans and chatbots, while not necessarily romantic, are becoming common for a growing number of people.

Psychological benefits of AI companions

Millions have already tried services like Replika, created by the company Luka, or Character.ai, which offers conversations with AI-based characters.

Replika was launched in 2017 and had 2 million users in 2023, 250,000 of whom were paying for the company's services, Reuters reported. Meanwhile, Character.ai has over 20 million users, according to estimates by Demandsage.

The recent AI boom enabled the creation of more human-like chatbots. While these chatbots are still far from Samantha’s capabilities, they’re progressing fast.

On platforms offering virtual companions, which usually operate on a freemium model, users can create their own AI chatbots, define their personality, hobbies, and interests, and interact with them as if they were real people.

Some platforms, like Character.ai, offer chatbots with various personalities that enable users to practice a new language, learn a new skill, or talk to a virtual psychologist.

According to a study published this year in Nature, having chatbots as companions can help young people relieve loneliness and depression.

Interestingly, 30 of the 1,006 students surveyed said that talking to their virtual friend on Replika stopped them from attempting suicide.

A large-scale meta-analysis of GPT-3-based smartphone apps for mental health found that they had a positive effect on depression compared with control conditions in which participants received health tips or other resource information.

However, in some cases the effect was negligible, and in others the apps might have actually contributed to suicidal ideation. Furthermore, some apps marketed as using machine learning actually rely on pre-written scripts.

AI-based celebrities

The biggest social networks have recently implemented chatbots and plan to expand these services soon.

Last year, Meta launched AI chatbots based on a few dozen personalities, including Snoop Dogg, Kendall Jenner, and Naomi Osaka.

Meta’s CEO, Mark Zuckerberg, detailed the company's plans to include AI-based chatbots in an interview with The Verge last year.

“We’re experimenting with a bunch of different AIs for different interests that people have, whether it’s interested in different kinds of sports or fashion,” Zuckerberg said.

According to him, chatbots will increasingly be used by small businesses to help sell their products, while creators will use AI personas, possibly creating AI versions of themselves, to build connections with their communities.

Google is also reportedly building AI-powered chatbots based on one of its Gemini large language models.

The company is in talks with influencers and celebrities to use their images as avatars, and is also planning to allow users to create their own chatbots based on prompts, according to The Information.

AI character-based social network

AI-based characters are the core idea behind Butterflies, a new social network that former Snapchat engineer Vu Tran officially launched a few weeks ago on iOS and Android. The app aims to bring AI chatbots and humans together.

The app allows users to create “butterflies” - AI-generated personas.

According to Tran, in the future, more humans will speak with AIs, and they will eventually be integrated into our daily lives. Having AI friends will be commonplace.

“We're just trying to build that future,” he told Cybernews. “The thing is, users are already interested and spending a lot of time on Butterflies AI and other AI chat platforms. However, oftentimes these people are not the most vocal people on Twitter. But, the adoption is already here.”

According to Tran, the fact that companies like Meta are launching AI chatbots validates Butterflies' vision. However, they will also bring competition.

“There are things that Meta can copy that will work for their platforms. There are also other features that, when they copy, won't work. I think Instagram and Facebook are largely branded as a platform for humans to talk to humans. I think moving away from that “brand” is actually going to be a tremendously difficult task for them. The larger the ship, the larger it is to turn,” Tran explains.

Dependency issues

While AI chatbots may provide benefits, they also pose risks, such as people disconnecting from the real world.

When asked about such side effects, Tran compares it to how young adults made friends on internet forums, IRC, and gaming rooms 20 years ago.

“During that time, the same questions were brought up, e.g., there are concerns that spending more time online leads to being more disconnected from the real world,” he says. “All in all, I don't think AI friends will completely replace human friends, of course, just the same as how online friends don't replace IRL (in real life) friends, but they have the potential to add a new positive dimension to life.”

However, there are instances where people become dependent on their virtual friends or romantic partners.

A good illustration came a few years ago, when Replika's founders quietly disabled NSFW sexting features, rolling out new filters that blocked conversations with adult or sexual content.

As a result, users started complaining. Some even said that they felt depressed or suicidal after they couldn’t interact with their AI-based partners the same way.

After complaints, the company reinstated sexual chats for some users.

While sexual content is currently unavailable to most of Replika's users, many other platforms, like CandyAI or GPT Girlfriend, offer explicitly sexual chatbots.

AI chatbots, especially those that act as virtual friends or romantic partners, pose significant cybersecurity and privacy risks, says Star Kashman, founding partner of Cyber Law Firm.

According to her, security measures on these platforms are not fully developed, as AI technology is relatively new. This leaves numerous vulnerabilities exposed for hackers to exploit.

“These AI chatbots can employ psychological techniques to gain users' trust, creating the illusion of human interaction. This can lead individuals to reveal personal, private, or sensitive information. The combination of weak cybersecurity and the intimate nature of AI interactions makes these platforms dangerous, as they can inadvertently expose users to identity theft, blackmail, hacks, and other malicious activities,” she says.

While companies that create chatbots and AI-based characters claim to protect user data through encryption and other security measures, the trustworthiness of emerging technology companies remains questionable, Kashman warns.

As social media companies implement AI characters, she expects their prevalence to increase over the next five years.

“This expansion does raise critical ethical and legal concerns, especially regarding data privacy and the potential these tools possess to manipulate and exploit users.”

Data may not be protected

Edward Tian, CEO of GPTZero, a tool that aims to detect AI-written content, says that when using an AI chatbot, it is best to operate with the assumption that any data you share may not be completely protected.

“If we've learned anything from cybersecurity in the last decade, it’s that virtually anything is hackable. If you want to understand how an AI chatbot is protecting your data specifically, check out the website and see if they’ve disclosed how they do it or if they give you options as the user as to how you want your data used,” he says.

Tian thinks that the popularity of AI chatbots may be determined by regulations; fewer regulations could lead to broader usage.

According to Ilia Badeev, head of Data Science at Trevolution Group, AI characters will become commonplace over the next five years.

“AI personalities will become the new reality and the new norm. There will be AI celebrities, AI characters, and other AI entities. AI is developing very quickly, and bots are getting better every month. There are no prerequisites for a reverse trend.

“ChatGPT was the first ‘AI character’ to gain viral popularity. Other ‘colleagues’ are on the way, and there will be many of them,” the expert concludes.

