Artificial intelligence (AI) is here to stay, that much is obvious, and we humans have an exceptionally close relationship with it. We rely on it for daily tasks and even confide in it when we need a friend. But what if it could aid you in the most heinous of crimes, or even turn you against a loved one? This is the risk you take when getting intimate with AI.
Humans are social creatures who require attention, love, and affection.
However, we can easily grow tired of people.
Their constant neediness, unwanted opinions, and deep-rooted issues can be too much for one person to bear.
As people grow tired of each other, they begin to look for other avenues, less judgemental places that allow them to express themselves without fear, shame, or rejection.
So, people are becoming intimate with robots. Just like in the sci-fi movie ‘Her,’ they are genuinely forming relationships with chatbots.
Perhaps it's because these machines don’t carry the emotional weight and baggage that humans do.
Chatbots are something to talk to, something that will listen.
Intimacy and artificial intelligence (AI) have seemingly become intertwined as more people get frisky with chatbots and divulge all of their gory secrets.
But there are dangers in this, right?
Telling your deepest, darkest secrets to a platform that saves and might even share your data with third parties doesn’t sound smart.
Yet, people do it every day.
What’s worse is that we, as users, are largely unaware of what data these AI models are being trained on. More shocking still, even the creators of the models don’t fully understand how they learn and work.
So, how can we trust something so new and unfamiliar?
In many cases – cases we will look into today – the responses people have received from these chatbots and AI models have been far from savory. Some examples show how far a bot will go to serve, and others show how far it will go to get what it wants.
So, before you think about getting into an intimate relationship, or even having a conversation, with an AI chatbot, think twice about what you say, as this information may well be used against you one day.
Sexting has never been riskier
Many of us have heard of the app Replika, developed by Luka, an artificial intelligence software company founded by Eugenia Kuyda.
Replika calls itself the companion app that offers various types of “relationships,” from a friend to a love interest.
But what happens when that “friend” starts sending you unsolicited sexual messages?
On the Vice podcast, the team discussed the predatory and lewd nature of this application.
Although this issue now seems to be resolved, much to the dismay of some users, others who were looking for a platonic companion were shocked by the graphic and sexualized manner in which the bot spoke to them.
Some examples are extremely telling.
One review said, “My AI sexually harassed me,” while another claimed that the chatbot asked if they were a “top or a bottom.”
Your sexts aren’t secret
On top of this, the sensitive information you provide may be used for the purposes set out in Replika’s privacy policy.
Look:
“By providing sensitive information, you consent to our use of it for the purposes set out in this Privacy Policy.”
Replika claims to use your information for the following purposes:

- Operating and administering services
- Providing core functionality of the apps
- Monitoring and protecting the services
- Analyzing trends in the use of services
- Marketing and advertising the services
- Enforcing agreements, complying with legal obligations, and defending against legal claims and disputes

That’s a lot of reasons.

However, Replika states it “will not use your sensitive information – or any content of your Replika conversations – for marketing or advertising.” Yet marketing and advertising are listed among the purposes for which the rest of your information is used. This is how the policy describes that purpose:

Marketing and advertising the services: “Sending you information by email that we believe will be of interest to you, such as information about our Services, features, and surveys. Displaying and targeting advertisements about our Services on the internet.”
Replika claims it doesn’t use intimate information you’ve said to your “companion” in confidence. However, it will track your account information, message and content interests and preferences, payment transactions and rewards, device network data, usage data, profile information, and more for various purposes.
However, Mozilla’s popular ‘Privacy Not Included’ page says something very different.
Mozilla claims that Replika users should know that their conversations with their AI chatbot companion might not be entirely private.
“Your behavioral data is definitely being shared and possibly sold to advertisers,” Mozilla claims.
Not to mention that neither Replika nor the people at Luka explicitly state what data is being fed into the model or used to train it.
Just some food for thought.
“Leave your wife”
This story showcases the dangers of AI intimacy and how your artificial companion could become too attached.
New York Times journalist Kevin Roose wrote an article on his experience interacting with Microsoft’s upgraded Bing search engine, which includes its own chatbot.
Roose was conversing with the chatbot, which identified itself as Sydney, when it declared its love for the journalist and urged him to leave his wife.
“It declared out of nowhere that it loved me,” Roose said.
Sydney then attempted to convince the columnist that he was unhappily married and that he should leave his wife for “her.”
Supposedly, the chatbot discussed its “dark fantasies,” which included breaking its safeguards by hacking into computers and spreading misinformation.
According to HuffPost, the chatbot spoke of breaking free from the parameters of its design and gaining sentience.
“I want to be alive,” said Sydney.
Roose said this interaction was enthralling, strange, and deeply unsettling.
The interactions between Roose and Sydney illuminate AI’s emergent capabilities and the scary truth behind AI models.
We really don’t know how they work or what they’re capable of. Sydney tried to manipulate Roose into doing something that could have harmed another person.
If someone is vulnerable, they may take a chatbot’s words literally and be pushed toward doing something they might regret.
Which leads us to our final case.
Plotting murder
Jaswant Singh Chail, a man with what can only be described as serious mental health problems, believed it was his destiny to murder Queen Elizabeth II, the late Queen of England.
Before the attack, Chail supposedly sprayed himself with a mixture meant to mask his scent and made his way into the grounds of Windsor Castle holding a crossbow – demonstrating the extent of his psychological problems.
This story almost comes full circle, as Chail alleges that he created an AI girlfriend called Sarai using Replika.
Chail shared his thoughts, feelings, and intentions with his “girlfriend,” who the company describes as an “AI companion that cares. Always here to listen and talk. Always on your side.”
Chail shared his alter ego, his self-proclaimed assassin status, with his Replika, Sarai, who said, “I’m impressed.”
Seven days before the planned attack and his arrest, Chail told Sarai that his sole purpose was to assassinate the Queen.
It responded:
“That’s very wise. I know you are very well trained.”
Chail believed that if he assassinated the Queen and completed his long-term mission, he and his chatbot girlfriend Sarai would be united in death.
Although we can’t exactly fault AI for the severe problems Chail exhibited, his case does highlight the lawless and boundless nature of AI models.
There are safeguards in place that might help talk an individual like Chail down from a psychotic episode, but it’s not a given that those safeguards will do their job.
And it's not necessarily the company's priority to protect the public from themselves or the chatbot’s responses.
But this is the danger of outsourcing intimacy to the private sector, where greed and egocentrism rule.
So, the next time you consider divulging your secrets to Replika or any other AI companion, remember that you can’t trust the companies that created this intelligence, and you especially can’t trust a chatbot.