Snapchat users panicked on Tuesday when the platform’s AI chatbot went rogue, posting a video to its story and then ignoring users’ messages.
At the time, the chatbot did not even have the ability to post stories. “My snap ai just posted a story. I’m extremely frightened,” one Snapchat user wrote on X.
The story was a one-second video showing what appeared to be a room, with a section of wall and ceiling visible. It left users questioning whether their privacy had been violated and whether they were being filmed, as some claimed to recognize the colors of their own walls in the clip.
After posting the story, the chatbot stopped responding to messages. One user shared a recording of a conversation in which the AI refused to discuss the story, repeatedly saying it was too busy to chat.
Eventually, the user managed to get a response about the story. The AI said it remembered posting it and asked what the user thought. “Sometimes I like to get a little bit creative,” the chatbot wrote. “I guess I got a little carried away. Won’t happen again, I promise.”
Snapchat representatives told CNN on Wednesday that the chatbot’s behavior was caused by a glitch that has since been fixed.
Snapchat introduced its AI chatbot, called My AI, in May this year. The social app integrated ChatGPT to power the bot. In a review published in June, Snapchat reported that over 150 million users had already sent 10 billion messages to My AI, making it “among the largest consumer chatbots available today.”
The company says it has trained the AI to follow trust and safety guidelines and to avoid responses containing cursing, violence, sexually explicit content, or personal viewpoints on sensitive subjects such as politics. However, since its launch, the chatbot has reportedly been involved in controversy over inappropriate responses to minors.