Following warnings by ethicists at Cambridge University that AI chatbots made to simulate the personalities of deceased loved ones could be used to spam family and friends, we take a look at the subject of so-called “deadbots”.
Griefbots, Deadbots, Postmortem Avatars
The Cambridge study, entitled “Griefbots, Deadbots, Postmortem Avatars: on Responsible Applications of Generative AI in the Digital Afterlife Industry”, looks at the negative consequences and ethical concerns of adopting generative AI solutions in what it calls “the digital afterlife industry (DAI)”.
Scenarios
As suggested by the title of the study, a ‘deadbot’ is a digital avatar or AI chatbot designed to simulate the personality and behaviour of a deceased individual. The Cambridge study used simulations of different scenarios to try to understand the effects that these AI clones trained on data about the deceased, known as “deadbots” or “griefbots”, could have on living loved ones if made to interact with them as part of this kind of service.
Who Could Make Deadbots and Why?
The research involved several scenarios designed to highlight the issues around the use of deadbots. For example, the possible negative uses of deadbots highlighted in the study included:
- A subscription app that can create a free AI re-creation of a deceased relative (a grandmother in the study), trained on their data, which can exchange text messages with the living loved one in much the same way the deceased relative used to (via WhatsApp), giving the impression that they are still around to talk to. The study scenario showed how the bot could be made to mimic the deceased grandmother’s “accent and dialect when synthesising her voice, as well as her characteristic syntax and consistent typographical errors when texting”. However, the study showed how this deadbot service could also be made to output messages that include advertisements in the loved one’s voice, thereby causing the surviving relative distress. The study also looked at how further distress could be caused if the app designers did not fully consider the user’s feelings around deleting the account and the deadbot, for example if no provision is made to allow them to say goodbye to the deadbot in a meaningful way.
- A service allowing a dying relative (e.g. a father and grandfather) to create their own deadbot so that their younger relatives (i.e. children and grandchildren) can get to know them better after they’ve died. The study highlighted negative consequences of this type of service, such as the dying relative not getting consent from the children and grandchildren to be contacted by the deadbot, with the resulting unsolicited notifications, reminders, and updates leaving relatives distressed and feeling as though they were being ‘haunted’ or even ‘stalked’.
Examples of services and apps that already exist and offer to recreate the dead with AI include ‘Project December’, and apps like ‘HereAfter’.
Many Potential Issues
As shown by the examples in the Cambridge research (there were three main scenarios), the use of deadbots raises several ethical, psychological and social concerns. Some of the potential ways they could be harmful, unethical or exploitative (along with the negative feelings they might provoke in loved ones) include:
- Consent and autonomy. As noted in the Cambridge study, a primary concern is whether the deceased gave consent for their personality, appearance, or private thoughts to be used in this way. Using someone’s identity without their explicit consent could be seen as a violation of their autonomy and dignity.
- Accuracy and representation. There is a risk that the AI might not accurately represent the deceased’s personality or views, potentially spreading misinformation or creating a false image that could tarnish their memory.
- Commercial exploitation. The study looked at how a deadbot could be used for advertising because the potential for commercial exploitation of a deceased person’s identity is a real concern. Companies could use deadbots for profit, exploiting a person’s image or personality without fair compensation to their estate or consideration of their legacy.
- Contractual issues. For example, relatives may find themselves in a situation where they are powerless to have an AI deadbot simulation suspended, e.g. if their deceased loved one signed a lengthy contract with a digital afterlife service.
Psychological and Social Impacts
The Cambridge study was designed to look at the possible negative aspects of the use of deadbots, an important part of which are the psychological and social impacts on the living. These could include, for example:
- Impeding grief. Interaction with a deadbot might impede the natural grieving process. Instead of coming to terms with the loss, people may cling to the digital semblance of the deceased, potentially leading to prolonged grief or complicated emotional states.
- Dependency. There’s also a risk that individuals might become overly dependent on the deadbot for emotional support, isolating themselves from real human interactions rather than seeking support from living friends and family.
- Distress and discomfort. As identified in the Cambridge study, aspects of the experience of interacting with a simulation of a deceased loved one can be distressing or unsettling for some people, especially if the interaction feels uncanny or not quite right. For example, the Cambridge study highlighted how relatives may get some initial comfort from the deadbot of a loved one but may become drained by daily interactions that become an “overwhelming emotional weight”.
Potential for Abuse
As identified in the Cambridge study, people may develop strong emotional bonds with deadbot AI simulations, making them particularly vulnerable to manipulation. For this reason, one of the major risks of the growth of a digital afterlife industry (DAI) is the potential for abuse. For example:
- There could be misuse of the deceased’s private information (privacy violations), especially if sensitive or personal data is incorporated into the deadbot without proper safeguards.
- In the wrong hands, deadbots could be used to harass or emotionally manipulate survivors, for example, by a controlling individual using a deadbot to exert influence beyond the grave.
- There is also the real potential for deadbots to be used in scams or fraudulent activities, impersonating the deceased to deceive the living.
Emotional Reactions from Loved Ones
The psychological and social impacts of deadbots being used as part of a service to living loved ones, and/or the misuse of deadbots, could therefore lead to a number of negative emotional reactions. These could include:
- Distress due to the unsettling experience of interacting with a digital replica.
- Anger or frustration over the misuse or misrepresentation of the deceased.
- Sadness from a constant reminder of the loss that might hinder emotional recovery.
- Fear concerning the ethical implications and potential for misuse.
- Confusion over the blurred lines between reality and digital facsimiles.
What Do The Cambridge Researchers Suggest?
The Cambridge study led to several suggestions of ways in which users of this kind of service may be better protected from its negative effects, including:
- Deadbot designers being required to seek consent from “data donors” before they die.
- Products of this kind being required to regularly alert users to the risks and to provide easy opt-out protocols, as well as measures being taken to prevent disrespectful uses of deadbots.
- The introduction of user-friendly termination methods, e.g. having a “digital funeral” for the deadbot. This would allow the living relative to say goodbye to the deadbot in a meaningful way if the account was to be closed and the deadbot deleted.
- As highlighted by Dr Tomasz Hollanek, one of the study co-authors: “It is vital that digital afterlife services consider the rights and consent not just of those they recreate, but those who will have to interact with the simulations.”
What Does This Mean For Your Business?
The findings and recommendations from the Cambridge study shed light on crucial considerations that organisations involved in the digital afterlife industry (DAI) must address. As developers and businesses providing deadbot services, there is a heightened responsibility to ensure these technologies are developed and used ethically and sensitively. The study’s call for obtaining consent from data donors before their death underscores the need for clear consent mechanisms to be built in. This consent is not just a legal formality but a foundational ethical practice that respects the rights and dignity of individuals.
Also, the Cambridge team’s suggestion to implement regular risk notifications and provide straightforward opt-out options points to the need for greater transparency and user control in digital interactions. This could mean incorporating these safeguards into service offerings to enhance user trust, with digital afterlife services companies perhaps positioning themselves as leaders in ethical AI practice. The introduction of a “digital funeral” to these services could also be a respectful and symbolic way to conclude the use of a deadbot, as well as being a sensitive way to meet personal closure needs, e.g. at the end of the contract.
The broader implications of the Cambridge study for the DAI sector include the need to navigate potential psychological impacts and prevent exploitative practices. As Dr Tomasz Hollanek, one of the study’s co-authors, highlighted, the unintentional distress caused by these AI recreations can be profound, suggesting that design and deployment strategies should prioritise psychological safety and emotional wellbeing. This should involve designing AI that is not only technically proficient but also emotionally intelligent and sensitive to the nuances of human grief and memory.
Businesses in this field must also consider the long-term implications of their services on societal norms and personal privacy. The risk of commercial exploitation or disrespectful uses of deadbots could lead to public backlash and regulatory scrutiny, which could stifle innovation and growth in the industry. The Cambridge study therefore serves as an early but important guidepost for the DAI industry and has highlighted some useful guidelines and recommendations that could contribute to a more ethical and empathetic digital world.
If you would like to discuss your technology requirements please:
- Email: hello@gmal.co.uk
- Visit our contact us page
- Or call 020 8778 7759