The emerging industry of “digital afterlives” will cause social and psychological harm, University of Cambridge researchers have warned.
“Be the favorite grandkid – forever,” reads the advertisement of a fictional company called MaNana in one potential scenario laid out by researchers in a paper on how digital recreations of dead people could come back to “haunt” their loved ones.
So-called griefbots, or deadbots, could be used to surreptitiously advertise products to users in the manner of a departed loved one, or distress children by insisting a dead parent is still with them, researchers warned.
Companies could also spam surviving family and friends with unsolicited notifications, reminders, and updates about the services they provide, which AI ethicists said would be akin to being digitally “stalked by the dead.”
Platforms offering to recreate the dead with AI for a small fee, such as “Project December” and “HereAfter,” already exist, with the technology also increasingly popular in China. Safeguards are needed to prevent it from causing harm, researchers said.
“This area of AI is an ethical minefield. It’s important to prioritise the dignity of the deceased, and ensure that this isn’t encroached on by financial motives of digital afterlife services, for example,” study co-author Dr Katarzyna Nowaczyk-Basińska said.
“At the same time, a person may leave an AI simulation as a farewell gift for loved ones who are not prepared to process their grief in this manner. The rights of both data donors and those who interact with AI afterlife services should be equally safeguarded,” she said.
The MaNana scenario is one of three laid out in the paper, authored by Nowaczyk-Basińska and other AI ethicists from Cambridge’s Leverhulme Centre for the Future of Intelligence.
It considers a conversational AI service that allows people to create a deadbot simulating their deceased grandmother without the consent of the “data donor,” the grandparent herself.
The hypothetical scenario pictures an adult grandchild who is initially impressed and comforted by the technology, but starts seeing food delivery advertisements in the voice and style of their grandmother once a “premium trial” ends.
Overcome with guilt at disrespecting the memory of their grandmother, the grandchild wants the deadbot turned off, but in a meaningful way – something the service provider has not reckoned with.
“People might develop strong emotional bonds with such simulations, which will make them particularly vulnerable to manipulation,” said Dr Tomasz Hollanek, the study’s other co-author.
“Methods and even rituals for retiring deadbots in a dignified way should be considered. This may mean a form of digital funeral, for example, or other types of ceremony depending on the social context.”
“We recommend design protocols that prevent deadbots being utilized in disrespectful ways, such as for advertising or having an active presence on social media,” Hollanek said.
However, an outright ban on deadbots based on non-consenting donors would not be feasible, according to the researchers.
Another scenario featured in the paper highlights a terminally ill woman who uses the services of an imagined company called “Paren’t” to create a digital replica of herself to assist her eight-year-old son with the grieving process.
As in the first scenario, all goes well at first but takes a sour turn when the AI, adapting to the needs of the child, starts generating confusing responses, such as depicting an impending in-person encounter with his mother.
To avoid these kinds of situations, the researchers recommended that the use of deadbots be age-restricted and that content warnings be put in place to ensure users are always aware they are interacting with an AI.
The final scenario explores a fictional company called “Stay” and shows an older person secretly commissioning a deadbot of themselves and paying for a 20-year subscription in the hope that it will comfort their adult children and allow their grandchildren to know them.
When the service kicks in after their death, one adult child refuses to engage and receives a barrage of emails in the voice of their dead parent as a result. Another does engage, but ends up emotionally exhausted. Yet neither can suspend the deadbot, as that would violate the terms of the contract their parent signed with the company.
“It is vital that digital afterlife services consider the rights and consent not just of those they recreate, but those who will have to interact with the simulations,” Hollanek said.
“These services run the risk of causing huge distress to people if they are subjected to unwanted digital hauntings from alarmingly accurate AI recreations of those they have lost. The potential psychological effect, particularly at an already difficult time, could be devastating.”
Researchers said that providers of these services should have opt-out protocols allowing users to terminate their relationships with AI simulations in ways that provide emotional closure.