Tech is designed to evoke our emotions. As we bond with our devices, is our mental well-being at stake?
If I collapse, can someone pause my Strava?
Pain is temporary. Strava is forever.
If it’s not on Strava, it never happened.
Not a single race event goes by without posters like these flashing past you. Strava has grown into a beast as we became obsessed with tracking every little workout fart.
We not only document our efforts but quite often choose to share them with the rest of the world. But that’s why social media was created: to craft an online alter ego that often has little in common with our true selves, isn’t it?
Biden took a little jog around the block? We know it. Harris decided to get some outdoor exercise? We are on to it, too.
But we’ve documented Strava and other social media-related privacy risks before, so this time, I want to bring your attention to something else.
It’s our feelings towards technology, be it Strava or our little Roomba vacuum cleaner. They’ve all come to life in some way. We name them, we talk to them, and we can’t kill this bond even when we finally realize how much they might be hurting our mental well-being.
Preparing for this week’s podcast about the tragic incident where a teenager took his own life after talking to a chatbot, I stumbled upon this old but gold experiment.
Five groups of people were each given a robot dinosaur to play with for about an hour before being told to decapitate it. They couldn’t, so the researchers added another condition – destroy another group’s robot dinosaur and yours gets to live. They still couldn’t do it. Ultimately, they only needed to take an axe to the head of one of these tiny, cute robots to save all the rest. They did it, yet there was a moment of silence afterwards, as if they were mourning a personal loss.
“It turns out that we’re biologically hardwired to project intent and life onto any movement in our physical space that seems autonomous to us,” robot ethicist Kate Darling said.
Robots are often explicitly designed to evoke such responses – they get human-like appearances, eyes, and voices. And now, with built-in AI, they can mimic our emotional reactions and spook us to the very core.
According to Elon Musk, there are gonna be as many robots as people on Earth, or more, by 2040. Mostly Musk’s own Optimus, naturally.
Remember, capitalists are building these robots, and evoking an emotional response works in their best (well, profit) interests. New research from the Netherlands-based Radboud University says that it is only a matter of time before corporations start exploiting human compassion and turning emotional manipulation by robots into a revenue model.
Can we allow ourselves to be emotional towards a piece of machinery? We already know it can be deadly, to say nothing of the anxiety and other side effects of our unconditional, sometimes perverted attachment to tech.
Before taking his own life, that boy from Orlando nurtured a relationship with a chatbot mimicking Daenerys Targaryen from Game of Thrones – a stunningly beautiful and powerful mother of dragons. The Daenerys bot, brought into this world by the Google-backed unicorn Character.AI, eventually became the teenager’s confidante, and their bond replaced everything good the boy had going on in his life.
The news went viral, with netizens looking for the ultimate scapegoat, whether it be the tech giants behind the Daenerys bot or the poor boy’s family.
I went on Character.AI days after the news circulated worldwide, only to discover that it’s still a disturbing online space.
This is a screenshot of my little chat with Dani, a chatbot designed to be some high school bully.
I also chatted with Viktor, a stepdad chatbot, who immediately steered the conversation toward intimacy.
How is that content even allowed online?
You can run from tech, but you can’t hide, can you?