Generative artificial intelligence is making it possible for people to create interactive “digital twins” of themselves that loved ones can communicate with after their death – but the emerging and rapidly growing AI afterlife industry is also raising difficult legal and ethical questions.
Known as “griefbots” or “deathbots”, AI-generated voice avatars, video likenesses or text-based chatbots trained on a person’s data are proliferating as part of the dark yet booming grief tech sector. While many are created by bereaved relatives attempting to cope with loss, some companies now allow users to build their own digital replicas while still alive.
The concept raises a provocative question: would you create an interactive “digital twin” of yourself that can communicate with loved ones after your death?
As with many emerging uses of artificial intelligence, the promise of digital immortality is outpacing legal clarity.
To build a digital twin, users typically sign up for a service that collects extensive personal data. Participants answer detailed questions about their identity, beliefs and experiences, record memories and stories in their own voice, and upload images or video capturing their visual likeness.
Using this training data, AI software generates a digital replica capable of interacting with others. Once the company is notified of a user’s death, loved ones may be able to communicate with the simulation.
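In software terms, the workflow amounts to assembling a personal dataset while the user is alive, then gating access to the resulting simulation on a verified death notification. The sketch below illustrates that shape in miniature; the `TwinProfile` class, its fields and the placeholder reply are hypothetical inventions for this example, not any provider’s actual schema or system.

```python
# A minimal, hypothetical sketch of the kind of profile record a
# digital-twin service might assemble, and how access could be gated
# on a death notification. Nothing here reflects a real provider's API.
from dataclasses import dataclass, field
from pathlib import Path

@dataclass
class TwinProfile:
    """Training data a user supplies while alive."""
    questionnaire: dict[str, str] = field(default_factory=dict)  # identity, beliefs, experiences
    voice_recordings: list[Path] = field(default_factory=list)   # memories told in the user's own voice
    media: list[Path] = field(default_factory=list)              # images/video of visual likeness
    death_notified: bool = False                                 # set when the provider is notified

    def record_answer(self, question: str, answer: str) -> None:
        self.questionnaire[question] = answer

    def respond(self, prompt: str) -> str:
        """Loved ones may interact only once death has been notified."""
        if not self.death_notified:
            raise PermissionError("Twin is not yet active for survivors.")
        # In a real system, a generative model trained on the profile
        # would produce a reply here; this placeholder just echoes.
        return f"[simulated reply drawing on {len(self.questionnaire)} answers] {prompt}"

profile = TwinProfile()
profile.record_answer("What do you value most?", "Honesty and family.")
profile.death_notified = True  # the provider flips this after verification
print(profile.respond("Tell me about your values."))
```

Even in this toy form, the key design point is visible: the data outlives the user, and the provider controls the switch that activates the twin.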
By signing up, users effectively delegate agency to a private company to create an ongoing AI representation of themselves after death.
This differs from cases in which AI is used to “resurrect” a deceased person without their consent. Here, a living person licenses data about themselves in advance, entering a contractual arrangement that permits posthumous use of AI-generated outputs.
However, questions remain unanswered around copyright ownership, privacy protections, data security and business continuity. If the technology becomes obsolete or the provider shuts down, it is unclear what happens to the stored data – or to the digital twin itself. The disappearance of such simulations could also trigger renewed grief for users’ families.
Under current Australian law, a person’s identity, including their voice, likeness, personality or values, is not protected as a standalone legal right. Unlike in the United States, there is no general publicity or personality right.
As a result, Australians generally have no proprietary right to control how aspects of their identity are used.
Copyright law also offers limited protection. While recorded voice clips, written responses or images supplied to train an AI system may qualify for protection as works fixed in material form, the broader concept of a person’s “presence” or “self” is too abstract to be protected.
Fully autonomous AI-generated outputs are also unlikely to attract copyright. Because they are produced by machines rather than through the “independent intellectual effort” of a human author, such content may be treated as authorless under existing law.
Moral rights, which protect creators against false attribution or derogatory treatment of their work, similarly do not extend to AI-generated digital twins.
Instead, ownership and usage rights are often determined by companies’ terms and conditions. Providers may assert control over AI-generated data, grant users limited rights to outputs or reserve broad permissions to reuse personal information.
Legal experts say this makes it essential for users to carefully review contractual arrangements before signing up.
Beyond legal uncertainties, AI-enabled grief technologies are raising ethical concerns.
Although a digital twin’s training data may be locked after a person’s death, others will continue interacting with the simulation indefinitely. Critics warn the technology could misrepresent a deceased individual’s morals or beliefs.
Because AI systems are probabilistic, there is also the risk of “drift”: gradual distortion in responses that, over time, erodes the simulation’s resemblance to the original person. It remains unclear what recourse families may have if such changes cause distress.
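To see why drift worries critics, consider a toy simulation – entirely hypothetical, not modelled on any real griefbot. It represents a persona as a numeric vector, lets the system condition slightly on its own noisy outputs, and measures how far the state wanders from the original.

```python
# A toy numerical illustration of "drift": if a simulation's internal
# persona state is nudged by its own noisy outputs, it gradually wanders
# away from the original. A deliberately simplified model, not a
# description of how any real griefbot works.
import numpy as np

rng = np.random.default_rng(seed=42)

original = rng.normal(size=64)        # stand-in for the person's "true" persona vector
original /= np.linalg.norm(original)
persona = original.copy()             # the simulation's current internal state

for step in range(1, 501):
    # Each response is a noisy sample around the current persona...
    response = persona + rng.normal(scale=0.05, size=64)
    # ...and the system conditions slightly on its own output (a feedback loop).
    persona = 0.95 * persona + 0.05 * response
    persona /= np.linalg.norm(persona)
    if step % 100 == 0:
        similarity = float(original @ persona)  # cosine similarity to the original
        print(f"step {step:3d}: similarity to original = {similarity:.3f}")
```

Because each response feeds back into the state, small random errors compound rather than cancel, and the similarity score declines over time. No single step looks wrong in isolation, which is precisely why drift is hard to detect or contest.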
While some users report that griefbots help them cope with loss, the evidence remains largely anecdotal and more research is needed. Mental health professionals have also cautioned that reliance on AI simulations could hinder healthy grieving.
If AI-powered interactions cause emotional harm, questions arise about responsibility – whether it lies with technology providers, regulators or users themselves.
The rapid expansion of the grief tech industry has prompted growing calls for clearer legal frameworks.
Even where individuals consent to the creation of a digital twin, future technological developments could transform how their data is used in ways that are difficult to anticipate.
For now, experts emphasise that anyone considering an AI afterlife service should approach the decision cautiously – and read the fine print.
After all, they will be bound by the contract they sign.