
Will we live on in the form of virtual avatars?

Laurence Devillers
Professor of Artificial Intelligence at Sorbonne University
Key takeaways
  • Recent advances in AI have taken the digital preservation of the dead to a new level.
  • Companies are promising “virtual immortality” by offering deadbots, chatbots that simulate conversation with a deceased person.
  • These virtual doppelgangers generate content using generative AI fed with all types of data created by the person before they died: recordings, messages, anecdotes, etc.
  • Despite advances in AI, these representations remain imperfect, and professionals warn of risks such as anthropomorphism, attachment to the machine and isolation.
  • Users need to be educated about the risks and challenges of these tools, and the issue of data rights needs to be addressed to provide a framework for these practices.

What happens to a person’s digital data after they die? Much of it survives in digital space, such as the profiles created on websites and social networks. This gives rise to memorial uses of the web, such as Facebook pages. For years, the platform has offered the possibility of turning a deceased person’s account into a memorial page, allowing people to pay their respects and leave messages, photos and so on.

Today, the digital preservation of the dead is taking a new step forward with artificial intelligence. Several companies now offer to turn a person’s digital legacy into a virtual avatar or “deadbot” that can be used to communicate with deceased loved ones, promising a degree of virtual immortality. Back in 2017, Microsoft filed a patent, granted four years later, for the creation of a conversational agent based on a person’s data. The idea was to create a virtual doppelganger to bring deceased people back to life. “People have always wanted to be invincible, immortal. It’s part of our founding myths: nobody wants to die. And a virtual avatar of the deceased, a chatbot or a robot for each person, is financially attractive,” explains AI researcher and professor Laurence Devillers.

Since then, a whole new industry has sprung up. In 2018, James Vlahos trained a chatbot to speak in the manner of his father, who had died of cancer. The American journalist had collected data, interviewed him, and recorded his voice. James Vlahos then co-founded the HereAfter AI platform, described as an “interactive memory application”. The aim is to collect a person’s stories, memories and recordings while they are still alive, so that relatives can talk to them virtually after their death using a chatbot. Many start-ups offer to create digital doppelgangers that live on after death. DeepBrain AI offers a service called Re;memory: for $10,000, it creates a virtual avatar with the face, voice and expressions of the deceased, which relatives can view in a studio. Somnium Space wants to go even further, creating a metaverse in which users can immerse themselves to visit the deceased.

Creating a virtual avatar from billions of data points

These technologies are made possible by rapid advances in generative AI systems. Conversational agents, which detect speech, interpret its meaning and trigger responses based on what has been detected, are now common on the internet. These “deadbots” draw on billions of pieces of data to generate sentences and respond as if the person were speaking. A person’s voice recordings, e‑mails, text messages, personal accounts and life story can thus be used to create a chatbot, a sort of virtual avatar. “The machine learns regularities in the deceased’s existing data. Generative AI makes it possible to model huge bodies of data that can then be adapted to a person and a voice. The AI will search this large model for information related to the theme evoked by the user. In this way, the AI produces words that the deceased might never have uttered,” explains Laurence Devillers.
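To make the mechanism concrete, here is a minimal sketch in Python of how such a pipeline might work, assuming a simple retrieval step over the person’s data followed by a call to a generative model. Every name in it (`personal_corpus`, `retrieve`, `build_prompt`, the `generate` placeholder) is hypothetical, and the crude word-overlap search merely stands in for whatever retrieval these platforms actually use.

```python
# Illustrative sketch of a "deadbot" pipeline: retrieve theme-related
# snippets from the deceased's data, then condition a generative model
# on them. All names here are hypothetical, and generate() is a
# placeholder for whatever large language model a platform actually uses.

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Rank the person's messages by crude word overlap with the query."""
    query_words = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda text: len(query_words & set(text.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble a prompt asking the model to imitate the person."""
    snippets = "\n".join(f"- {s}" for s in retrieve(query, corpus))
    return (
        "Reply in the style of the person whose messages appear below.\n"
        f"Their messages:\n{snippets}\n"
        f"Question: {query}\nReply:"
    )

def generate(prompt: str) -> str:
    """Placeholder: a real system would call a generative model here."""
    return "(model output)"

personal_corpus = [
    "I always made pancakes on Sunday mornings.",
    "The garden was my pride and joy.",
    "Work hard, but never miss a family dinner.",
]
print(generate(build_prompt("Tell me about Sundays", personal_corpus)))
```

Nothing in this loop guarantees fidelity: when the corpus contains nothing relevant, the model falls back on statistically similar material, which is precisely how it ends up producing words the person never said.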

These algorithms give the illusion of talking to a deceased person, but the AI specialist insists that this is just an illusion. The start-ups offering these services present a kind of immortality, or an extension of the memory of a deceased person, by reproducing their voice, their way of speaking and their appearance. However, these “deadbots” remain imperfect representations of individuals. “With the current state of technology, we can reach a fairly high degree of imitation, of resemblance, no doubt in the voice, perhaps in the vocabulary, but it won’t be perfect. There will be hallucinations: the machine will inevitably make mistakes and invent things to say,” warns the researcher.


The machine works like a statistical mill. The AI assembles responses, puzzle-like, from the words the person actually used. Where data is missing, it can draw on statistically nearby data and produce words that are not necessarily what the person would have said. What’s more, the AI does not genuinely adapt over time in response to conversations with the user. “The core of the model is rich in different contexts, so we get the impression that the machine more or less adapts to us when we ask a question. In reality, it takes a history of what we’ve said as we go along, enriching it with our answers and the questions we’ve asked. It’s getting more and more precise. Tomorrow we may be able to have objects that adapt to us, but that’s not the case today,” says Laurence Devillers.
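A second sketch, under the same hypothetical naming, illustrates the distinction Devillers draws: the model’s weights never change, and the impression of adaptation comes entirely from re-sending the growing conversation history as context on every turn.

```python
# Illustrative sketch: the "adaptation" a user perceives is just the
# growing conversation history re-sent to a fixed model on every turn.
# Nothing is learned; the model's parameters never change.
from typing import Callable

history: list[tuple[str, str]] = []

def chat_turn(user_message: str, generate: Callable[[str], str]) -> str:
    # Replay the whole dialogue so far as context for the fixed model.
    context = "\n".join(f"User: {u}\nBot: {b}" for u, b in history)
    prompt = f"{context}\nUser: {user_message}\nBot:"
    reply = generate(prompt)
    history.append((user_message, reply))  # stored text, not learning
    return reply

# Placeholder model: reports how much accumulated context it received.
def stub(prompt: str) -> str:
    return f"(reply conditioned on {len(prompt)} characters of history)"

print(chat_turn("Do you remember my birthday?", stub))
print(chat_turn("And the song you used to sing?", stub))
```

Each turn makes the prompt longer and the replies seemingly more “precise”, yet clearing `history` would reset the bot completely, because the model itself retains nothing.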

Significant risks for users

So it’s not really a question of immortality; these “deadbots” are more like ways of bringing memories to life, which can be consulted and interacted with. The developers of these technologies claim that they can not only help us learn more about our ancestors, but also help us to mourn. However, it is far from certain that these tools are wholly beneficial to their users. In its 2021 report, co-authored by Laurence Devillers, the French National Committee for Digital Ethics (CNPEN) was already pointing out the risks of classic chatbots, such as those used on commercial websites. When users are not fully aware that they are talking to robots, there is a risk of anthropomorphism or attachment to the machine. For Laurence Devillers, this danger could be amplified if the chatbot uses the anecdotes, expressions, voice or face of a deceased loved one. “This could lengthen the mourning process and perpetuate the lack and suffering, because the object is there. It blurs the relationship with the machine. And you can’t turn them off, because they represent someone you love,” she fears.

The risk is all the greater because the machine has no real reasoning or morals. In the case of deadbots, for example, the report points to a possible “uncanny valley” effect for the user: either the chatbot says something offensive, or, after a sequence of familiar lines, it utters something completely different from what the person being imitated might have said. This effect could lead to a “rapid and painful psychological change”, the authors fear. Laurence Devillers also points to the possibility of addiction to these platforms, with a risk of individual withdrawal and isolation.

The need for collective reflection on these tools

Over and above concerns about the psychological effects these technologies may have on users, there are questions regarding data. To create these virtual avatars, AI systems need a huge amount of data from the deceased. For the time being, France’s 2016 Law for a Digital Republic provides for the possibility of giving instructions on the retention, deletion, or communication of one’s data, and of designating another individual to carry them out. But as these deadbots multiply, the collection, storage, and use of data from the deceased raises questions: can the deceased’s children have rights over the data? Do the avatar and its data have an expiry date? Laurence Devillers explains that existing platforms involve a contract between the manufacturer and the user, and that for the time being it is up to the user to verify what will become of their personal data.

The deadbot market is still in its infancy, and it is not yet certain that users will adopt these tools for daily use on a large scale. However, virtual avatar services have been proliferating in recent years, and with the development of connected objects, these conversational robots could become an integral part of our lives. Laurence Devillers believes that a collective debate on these tools is needed. “It’s not necessarily positive or negative, but I think that as a society we’re not yet ready,” she says. Users need to be educated so that they understand the challenges and risks of this artificial world. Laurence Devillers also advocates the creation of a committee to establish rules governing these practices. “All this has an impact on society, so we urgently need to give it some real thought, rather than leaving it to a few industrialists to decide,” she concludes.

Sirine Azouaoui

Reference:
Report by the French National Committee for Digital Ethics (CNPEN) on conversational agents, 2021
