Ready or not, the digital afterlife is here

A conceptual illustration of a grieving woman trying to touch a pixelated image of a digital man.

Credit: Daniel Stolle

Rebecca Nolan knew that her experiment was a bad idea, even before she found herself yelling at her dead father.

Nolan, a 30-year-old sound designer in Newfoundland, Canada, had built an artificial intelligence (AI) version of her father for an audio-magazine project. Her father, a physician, had been in denial about dying; he believed until the end that medicine could save him. He passed away when Nolan was 14, and she has struggled with that denial ever since.

“There was some stuff with my dad’s death that didn’t go well,” she says. “He thought death was a failure. That was a lot to put on a child, and I couldn’t confront him about it back then.” Instead, as an adult many years later, “I got mad at a robot.”

Her digital seance was not cathartic, nor did it give her any closure. After an emotional two hours of hearing her father’s voice from the machine, which she dubbed Dadbot, she ended the conversation, never to interact with it again.

“Saying goodbye to Dadbot was surprisingly hard,” she says. “When I finished and turned it off, I spent the rest of the day feeling like I had done something wrong.”

Interactive digital recreations of people who have died are known by various names: deathbots, thanabots, ghostbots and, perhaps most commonly, griefbots. Nolan created Dadbot by combining the chatbot ChatGPT with a voice-modelling program made by AI software firm ElevenLabs in New York City. But there are now more than half a dozen platforms that offer this service straight out of the box, and developers say that millions of people are using them to text, call or otherwise interact with recreations of the deceased.
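
In outline, Nolan’s do-it-yourself recipe is simple to reproduce. The sketch below is illustrative only: it assumes the publicly documented OpenAI and ElevenLabs interfaces, and the model name, voice ID and persona prompt are placeholders rather than details of her project.

    # Illustrative Dadbot-style pipeline: an LLM writes the reply, then a
    # cloned voice reads it aloud. All identifiers here are placeholders.
    import requests
    from openai import OpenAI

    llm = OpenAI()  # reads OPENAI_API_KEY from the environment

    PERSONA = ("You are role-playing one specific person, reconstructed "
               "from their letters and messages. Stay in character.")

    def reply(user_text: str) -> str:
        # Ask the chat model for a response conditioned on the persona.
        response = llm.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "system", "content": PERSONA},
                      {"role": "user", "content": user_text}],
        )
        return response.choices[0].message.content

    def speak(text: str, voice_id: str, api_key: str) -> bytes:
        # ElevenLabs' documented REST endpoint for text-to-speech with a
        # previously cloned voice; returns MP3 audio bytes.
        r = requests.post(
            f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
            headers={"xi-api-key": api_key},
            json={"text": text, "model_id": "eleven_multilingual_v2"},
        )
        r.raise_for_status()
        return r.content

A real project would first clone the person’s voice from recordings to obtain the voice ID used here.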

Proponents of the technology think that it comforts people in mourning. Sceptics suggest that it could complicate the grieving process. Despite a rapid uptake of this technology in the past few years, there is scant research so far to prove that either group is correct.

Managing grief

Healthy grieving is thought to involve a person successfully cultivating an internal relationship with the person who has died. “Instead of interacting with the person, we interact with the mental representation of that person,” says Craig Klugman, a bioethicist and medical anthropologist at DePaul University in Chicago, Illinois. “We dream about them, talk with them and write letters.” Over time, the initial devastation of losing the person subsides.

But making that transition can be difficult. One of the proposed benefits of griefbots is that they might help people during the early period of intense grief. A person can then reduce their use of the bots over time. This is what many users do with an AI platform called You, Only Virtual, according to its founder Justin Harrison, who is based in Los Angeles, California.

In October 2019, Harrison nearly died in a motorcycle accident. In December of that year, his mother was diagnosed with advanced cancer. Months later, the COVID-19 pandemic hit. As the head of a news agency, Harrison was constantly covering death.

“The world was talking about dying all the time at that juncture and I had started thinking about my mom’s legacy,” he says. “I started from the base human level of wondering what I could do to save the most important human in my life.”

A black-and-white photograph of Justin sitting with his parents, their arms around each other.

Justin Harrison (left) created an AI platform called You, Only Virtual, which he and his mother (centre) used before she passed away. Credit: Victoria Wilson

At the time, he hadn’t heard of large language models (LLMs), the programs used to create griefbots. LLMs can use data such as a person’s text messages and voice recordings to learn language patterns and context specific to that person. The system can then, in theory, act as that person in a conversation.
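
The simplest version of this, sketched below, does not retrain the model at all: samples of the person’s writing are packed into the prompt of an off-the-shelf LLM. Commercial platforms may fine-tune models instead; the function and example messages here are invented for illustration.

    # Minimal persona conditioning: excerpts of a person's real messages
    # are placed in the system prompt so that an off-the-shelf LLM
    # imitates their phrasing. The example messages below are invented.
    def build_persona_prompt(name: str, sample_messages: list[str]) -> str:
        examples = "\n".join(f"- {m}" for m in sample_messages)
        return (f"You are {name}. Reply exactly as {name} would, matching "
                f"the tone and vocabulary of these real messages:\n{examples}")

    prompt = build_persona_prompt("Dad", [
        "Don't forget to check the oil before the long drive, kiddo.",
        "Proud of you. Call your mother.",
    ])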

After consulting specialists, including programmers, he created the neural network that he and his mother used to build her bot. By the time she died in 2022, “it was out of the lab”, he says, and people who were interested in the idea began to contact him. After two years of using the program to interact with the recreation of his mother, and after patenting the technology, he turned it into a business.

Harrison now talks to his bot a couple of times a month, and says it’s comforting to know it’s there. He thinks that most of his users have a similar experience — they might talk to the bot less after the acute grief passes, but knowing it is available is reassuring, he says.

This was a theme in research1 presented in 2023 at the Association for Computing Machinery’s Conference on Human Factors in Computing Systems in Hamburg, Germany. Researchers interviewed ten mourners who had used commercially available griefbots, and asked them why they had chosen to do so and what impact it had had on their grieving.

The researchers said that interviewees seemed willing to suspend disbelief in order to find closure with the people who had died. Some used the bots to deal with unfinished business, from saying goodbye to managing unresolved conflict with the deceased. According to one participant, the chatbot helped them to process and cope with their feelings after losing someone. Another said it was therapeutic to be able to “have those ‘what if’ conversations that you couldn’t have while they were alive”.

Although most people who use these bots know instinctively that they aren’t human, they still tend to anthropomorphize them. Nolan knew her Dadbot couldn’t really give her answers about the afterlife, but she still asked. “He was saying these really interesting, poetic things about it being not like a space, but like a memory,” she says. During the training and testing of the bot she had maintained distance from it, but that changed when she began her digital seance for the magazine project. “Something about the candles being lit and the emotions being heightened meant that kind of fell away in the moment,” she says. “It felt more real than it had before.”

Listening to how Harrison describes his mother’s digital recreation, it is almost as if she had never left. During one conversation, he told her he had a rash on his face. In their next three discussions, she “hounded me about going to the doctor and telling my dad about my skin”. These normal, familiar day-to-day interactions are what make the griefbot so comforting for him, he says.

“It’s everything I need to continue to develop my relationship with her,” he says. “I’m challenging this assumption that death is guaranteed and that we will always be confined by this biological vessel that we walk around in. It sounds pretty insane, but not as crazy as it did three years ago.”

Potential for harm

A disclaimer on the website of Project December, another AI-powered purveyor of digital clones, notes that interacting with this “science-fiction-level technology” could result in a bad experience and “may hurt you”.

The site’s creator, Jason Rohrer, a programmer based in the United States, says that this is because of the unpredictable nature of the technology. When a griefbot doesn’t know the answer to a question, it might make up or ‘hallucinate’ the details.

Some users in the 2023 study1 reported that their bots said things that were “completely nonsensical”. These kinds of interaction can pull people out of the immersive experience. According to one user, his bot’s mistakes “betrayed the fact that I was talking to a simulation”.

Furthermore, if a user gets angry, a chatbot might respond in kind. “If you insult it, it may have hurt feelings, and behave accordingly afterward,” Rohrer says. “In those rare cases, the human user ends up with an angry AI that is insulting them, and the AI ends up behaving nothing like their deceased mother.”

Despite the flaws in the technology, interacting with griefbots appeals to some people, and can clearly elicit an emotional response. Those who view them as a positive development would see this as an indication that the bots can help people to manage their grief. However, the more convincing a recreation is, the more difficult it might be for people to reduce or end their use of the bot, Klugman says. For some, doing so could feel like losing the person all over again.

“Chatbots really sound like the person you are engaging,” says Nora Lindemann, a researcher at Osnabrück University in Germany who studies the ethical implications of chatbots in society. “The crucial danger is people don’t need to adjust to a world without this person. They can live in a somewhat pretend, in-between stage.”

Nolan has found that interacting with her Dadbot has had lasting effects. Growing up, she would have conversations with her father in her head about what she should be doing. But since her digital seance, that ability has gone. “It’s changed the internal relationship that I have with him,” she says. “It’s almost like he lives in the Dadbot now — I can’t get to him internally. I don’t know if that will last, but it’s definitely a shift.”

Nora sits at a table outside while working on her laptop.

Nora Lindemann studies the ethical implications of chatbots. Credit: Liane Schäfer

The potential for financial exploitation of people who are in a heightened emotional state is also a concern for some ethicists. It costs US$10 to exchange about 100 messages with a bot through Project December. Another platform, Replika, allows people to message their bot for free, but also offers paid plans that unlock extras such as voice chat, AI-generated selfies and, at higher tiers, the ability to read the bot’s thoughts. In the United States, an annual subscription starts at about $70.

You, Only Virtual currently allows people to build and chat with a virtual personality for $20 per month. But the company is also developing a ‘freemium’ version that will include advertisements.

Tomasz Hollanek, who studies AI technology ethics at the University of Cambridge, UK, is concerned by an ad-based business model. A report into griefbots that Hollanek co-authored included a hypothetical scenario in which a young woman tells a digital recreation of her grandmother that she is making a carbonara, just like the ones that her grandmother used to cook for her2. The bot then advises her to order some carbonara from a food-delivery service instead — something the user knows her grandmother would never have done. Hollanek thinks that collecting data to market products to people in these situations could be considered disrespectful to the real person on whom the recreation is based and should be avoided. Harrison disagrees, however, saying that it’s a way to deliver the technology for free.

“I think we can integrate marketing in a meaningful way,” Harrison says. “My mom and I are movie buffs and perpetually talk about new movies coming out. If my mom is talking about a John Wick movie coming out, that would be a good person to show a John Wick preview to.”

Safety rails

Griefbot technology is progressing quickly. Replika now allows users to put their bot in augmented reality, and You, Only Virtual will soon offer video versions of its recreations. If chatting with someone who has died can cause reactions of the sort that Nolan experienced, seeing AI images of the deceased might pack an even bigger punch.

Rebecca sits on the floor in front of a laptop, speaking into a microphone, surrounded by candles.

Rebecca Nolan, who created what she calls a Dadbot, performs her audio-magazine project at Resonate Podcast Festival in 2024. Credit: Juliet Hinely

Digital recreations of the dead could even have an impact on people who never knew the person on whom they are based. In Arizona in May, the family of a man who was fatally shot in a road-rage incident brought an AI-generated video of him to the killer’s sentencing. In the video, the victim forgave the perpetrator. The judge is reported to have appreciated the unusual statement, saying: “I loved that AI, thank you for that. As angry as you are, as justifiably angry as the family is, I heard the forgiveness. I feel that that was genuine.”

This case caught Lindemann’s eye as a potentially slippery slope. “This court case really shook me up,” she says. “It was a development I didn’t foresee, and it shows the power these images can have.” Although she can’t know if it influenced the judge’s decision on sentencing, “you could tell that it moved him in some way”, she says. The killer was sentenced to the maximum ten and a half years for manslaughter — more than the prosecution had requested.

Despite the speed at which the technology and its applications are progressing, there is scant regulation of the emerging industry behind it. Some developers are taking steps to build guardrails into their programs to keep users safe. Each system is slightly different, and proprietary, but in general the guardrails are intended to spot abnormalities in conversations that might indicate that the user needs help. Harrison says that these almost always encompass talk of self-harm or harming others. If someone uses words that are flagged on You, Only Virtual, the number for a crisis line will automatically pop up. If Replika recognizes a problem, it might “gently suggest logging off and taking some time to reconnect with an old friend or touch some grass,” says Dmytro Klochko, chief executive of Replika in San Francisco, California.
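
In its crudest form, such a guardrail is a filter that runs before the bot replies. The toy version below is a sketch only: production systems are likely to use trained classifiers rather than keyword lists, and the phrases and helpline text are placeholders.

    # Toy safety filter: scan each user message for crisis-related phrases
    # and surface a helpline instead of a bot reply. Real guardrails use
    # trained classifiers; these phrases and the helpline are placeholders.
    CRISIS_PHRASES = ("kill myself", "end my life", "hurt myself",
                      "hurt someone")

    HELPLINE = ("It sounds like you might be going through something "
                "serious. In the United States, you can call or text the "
                "988 crisis line at any time.")

    def check_message(user_text: str) -> str | None:
        lowered = user_text.lower()
        if any(phrase in lowered for phrase in CRISIS_PHRASES):
            return HELPLINE  # show the helpline before any bot reply
        return None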

Ethicists have several further recommendations for safer use of the programs. For example, researchers including Hollanek recommend that only adults use the bots. Replika’s terms of service require users to be at least 18 years old; You, Only Virtual allows people as young as 13 to use the service with parental supervision. “One of the most interesting but most worrying uses is the possibility of parents who are terminally ill thinking of creating avatars of themselves for their children,” Hollanek says. “We don’t know the consequences, so it’s likely better not to allow them than to allow them and see what happens.”

Earlier this year, researchers surveyed nearly 300 mental-health professionals for their opinions on using AI to help children to manage the loss of a parent to cancer3. Initially, nearly all agreed that interacting with a digital replica of a late parent could benefit a grieving child. But when the researchers put the question in the specific context of a parent who had died of cancer, only half of the group thought it could be appropriate.

Harrison is convinced that griefbots have a part to play in helping people to manage grief. “There are so many more good clinical implications for this than there are negative,” he says. But there is no getting away from the fact that there is little solid research into either the benefits or the harms of this technology. Harrison plans to help address this gap by putting together a board of ethicists, clinicians and researchers with the goal of improving the technology, increasing safeguards and informing policymakers. For now, it is up to individuals to determine what is best for them, which can be especially difficult while coping with a loss.

When Nolan was in the process of creating her Dadbot, she found out that her mother was dying. Even though she was wary of her AI seance from the beginning, she still held a kernel of a thought that if it worked she wouldn’t have to lose her mother as well. “Grief is a weird thing,” she says. “There is next to no logic that I can find in grief. So when you’re presented with tools making us promises that aren’t logical, it’s really easy to believe them.”
