
Will we live on in the form of virtual avatars?

Laurence Devillers
Professor of Artificial Intelligence at Sorbonne University
Key takeaways
  • Recent advances in AI have taken the digital preservation of the dead to a new level.
  • Companies are promising “virtual immortality” by offering deadbots that let users chat artificially with a deceased person.
  • These virtual doppelgangers generate content using generative AI fed with all types of data created by the person before they died: recordings, messages, anecdotes, etc.
  • Despite advances in AI, these imperfect representations raise concerns among professionals about anthropomorphism, attachment to the machine, and isolation.
  • Users need to be educated about the risks and challenges of these tools, and the issue of data rights needs to be addressed to provide a framework for these practices.

What happens to a person’s digital data after they die? Much of it survives in digital space, such as the profiles created on websites and social networks. This gives rise to memorial uses of the web, such as Facebook pages. For years, the platform has offered the possibility of turning a deceased person’s account into a memorial page, allowing people to pay their respects and leave messages, photos and so on.

Today, the digital preservation of the dead is taking a new step forward with artificial intelligence. Several companies now offer to turn a person’s digital legacy into a virtual avatar or “deadbot” that can be used to communicate with deceased loved ones, promising a degree of virtual immortality. In 2017, Microsoft filed a patent, granted four years later, for the creation of a conversational agent based on a person’s data. The idea was to create a virtual doppelganger to bring deceased people back to life. “People have always wanted to be invincible, immortal. It’s part of our founding myths – nobody wants to die. And a virtual avatar of the deceased, a chatbot or a robot for each person, is financially advantageous,” explains AI researcher and professor Laurence Devillers.

Since then, a whole new industry has sprung up. In 2018, James Vlahos trained a chatbot to speak in the manner of his father, who had died of cancer. The American journalist had collected data, interviewed him, and recorded his voice. Vlahos then co-founded the HereAfter AI platform, described as an “interactive memory application”. The aim is to collect a person’s stories, memories and recordings while they are still alive, so that relatives can talk to them virtually after their death through a chatbot. Many start-ups now offer to create digital doppelgangers that live on after death. DeepBrain AI offers a service called Re;memory: for $10,000, it creates a virtual avatar with the face, voice and expressions of the deceased, which relatives can view in a studio. Somnium Space wants to go even further, creating a metaverse in which users can immerse themselves to visit the deceased.

Creating a virtual avatar from billions of data points

These technologies are made possible by rapid advances in generative AI systems. Conversational agents, which detect speech, interpret its meaning and generate responses based on what has been detected, are now common on the internet. These “deadbots” use billions of pieces of data to generate sentences and respond as if a person were speaking. A person’s voice recordings, any e-mails or text messages they wrote, their testimonials and their story are used to create a chatbot, a sort of virtual avatar. “The machine learns regularities in the deceased’s existing data. Generative AI makes it possible to model huge bodies of data that can then be adapted to a person and a voice. The AI will search this large model for information related to the theme evoked by the user. In this way, the AI produces words that the deceased might never have uttered,” explains Laurence Devillers.
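To make the principle concrete, here is a deliberately simplified sketch, in Python, of how a system can “learn regularities” from a person’s past messages. It uses a toy word-level Markov chain rather than the large generative models real services rely on, and the sample messages are invented for illustration.

```python
import random
from collections import defaultdict

# Toy illustration only: a word-level Markov chain "learns" which word
# tends to follow which in a person's archived messages, then generates
# new sentences. Real deadbots use far larger generative models, but the
# underlying idea - predicting statistically likely continuations - is similar.

messages = [  # invented sample archive
    "see you at the lake house this summer",
    "the lake was beautiful this morning",
    "this summer we should repaint the house",
]

transitions = defaultdict(list)
for msg in messages:
    words = msg.split()
    for current, following in zip(words, words[1:]):
        transitions[current].append(following)

def generate(start: str, max_words: int = 8) -> str:
    """Walk the transition table; the output can recombine fragments
    into sentences the person never actually said."""
    word, output = start, [start]
    for _ in range(max_words):
        if word not in transitions:
            break
        word = random.choice(transitions[word])
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the lake was beautiful this morning" - or a novel mix
```

Even this miniature version exhibits the problem Devillers describes: because it only recombines observed fragments by statistical likelihood, it can produce plausible-sounding sentences the person never actually said.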

These algorithms give the illusion of talking to a deceased person. But the AI specialist insists that this is just an illusion. The start-ups offering these services present a kind of immortality, or an extension of the memory of a deceased person, by reproducing their voice, their way of speaking and their appearance. However, these “deadbots” remain imperfect representations of individuals. “With the current state of technology, we can reach a fairly high degree of imitation, of resemblance, no doubt in the voice, perhaps in the vocabulary, but it won’t be perfect. There will be hallucinations; the machine will inevitably make mistakes and invent things to say,” warns the researcher.


The machine works like a statistical mill. The AI pieces together fragments from the words the person actually spoke. Where no data exists, it can draw on statistically similar data and produce words that are not necessarily what the person would have said. What’s more, the AI does not truly adapt over time or in response to conversations with the user. “The core of the model is rich in different contexts, so we get the impression that the machine will more or less adapt to us when we ask a question. In reality, it keeps a history of what we’ve said as we go along, enriching it with our answers and the questions we’ve asked. It gets more and more precise. Tomorrow we may be able to have objects that adapt to us, but that’s not the case today,” says Laurence Devillers.
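A minimal sketch of the mechanism Devillers describes might look like the following. The model’s weights are frozen; only the conversation history grows and is re-sent with every turn, which creates the impression of adaptation. Here `llm_complete`, the persona text and the example exchange are all invented placeholders, not any vendor’s actual API.

```python
# Hypothetical sketch: the "adaptation" a user perceives comes from the
# growing prompt, not from any change to the underlying model.

persona = "You answer in the style of the deceased, based on their archive."
history: list[str] = []

def llm_complete(prompt: str) -> str:
    """Placeholder for any text-generation backend."""
    return "(generated reply)"

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    # The full history is re-sent on each turn; the model itself never
    # learns anything between conversations.
    prompt = "\n".join([persona, *history, "Avatar:"])
    reply = llm_complete(prompt)
    history.append(f"Avatar: {reply}")
    return reply

print(chat("Do you remember our trip to the lake?"))
```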

Significant risks for users

So it’s not really a question of immortality; these “deadbots” are more like ways of bringing memories to life, to be consulted and interacted with. The developers of these technologies claim that they can not only help us learn more about our ancestors, but also help us to mourn. However, it is far from certain that these tools are wholly beneficial to their users. In its 2021 report, co-authored by Laurence Devillers, the French National Committee for Digital Ethics (CNPEN) already pointed out the risks of classic chatbots, such as those used on commercial websites. When users are not truly aware that they are talking to robots, there is a risk of anthropomorphism or attachment to the machine. For Laurence Devillers, this danger could be amplified if the chatbot uses the anecdotes, expressions, voice or face of a deceased loved one. “This could lengthen the mourning process and perpetuate the lack and suffering, because the object is there. It blurs the relationship with the machine. And you can’t turn them off, because they represent someone you love,” she fears.

The risk is all the greater because the machine has no real reasoning or morals. In the case of deadbots, for example, the report points to a possible “uncanny valley” effect for the user: either the chatbot says something offensive, or, after a sequence of familiar lines, it utters something completely different from what the person being imitated might have said. This effect could lead to a “rapid and painful psychological change”, the authors fear. Laurence Devillers also points to the possibility of addiction to these platforms, with a risk of individual withdrawal and isolation.

The need for collective consideration of these tools

Over and above concerns about the psychological effects these technologies may have on users, there are questions regarding data. To create these virtual avatars, AI systems need a huge amount of data from the deceased. For the time being, France’s 2016 Law for a Digital Republic provides for the possibility of giving instructions on the retention, deletion, or communication of one’s data, and of designating another individual to carry them out. But as these deadbots multiply, the collection, storage, and use of data from the deceased raise questions: can the deceased’s children have rights over the data? Do the avatar and its data have an expiry date? Laurence Devillers explains that existing platforms involve a contract between the manufacturer and the user, and that for the time being it is up to the user to verify the future of their personal data.

The deadbot market is still in its infancy, and it is not yet clear whether users will adopt these tools on a massive, daily basis. However, virtual avatar services have been proliferating in recent years. With the development of connected objects, these conversational robots could become an integral part of our lives. Laurence Devillers believes that a collective debate on these tools is needed. “It’s not necessarily positive or negative, but I think that as a society we’re not yet ready,” she says. We need to educate users so that they understand the challenges and risks of this artificial world. Laurence Devillers also advocates the creation of a committee to establish rules to govern these practices. “All this has an impact on society, so we urgently need to give it some real thought, rather than leaving it to a few industrialists to decide,” she concludes.

Sirine Azouaoui

Reference:
French National Committee for Digital Ethics (CNPEN), report on conversational agents, 2021
