Generative AI: threat or opportunity?

Demystifying generative AI: true, false, uncertain

with Laure Soulier, Senior Lecturer at Sorbonne University in the "Machine Learning and Information Access" team
On February 7th, 2024
4 min reading time
Laure Soulier
Senior Lecturer at Sorbonne University in the "Machine Learning and Information Access" team
Key takeaways
  • Generative AI creates content, usually highly relevant and diverse (text, image, video), based on probabilities and deep language models.
  • Despite its performance, AI is neither comparable nor equivalent to human intelligence: rather than truth, it aims for plausibility.
  • The software perpetuates the biases and errors of the dataset on which it was trained.
  • This “work tool” is not expected to replace jobs on a massive scale and may even create new ones.
  • Its long-term development remains uncertain, but it will have to take account of environmental concerns and a move towards frugal AI.

#1 Generative AI: an intelligent revolution?

Generative AI, a different type of AI – TRUE

Artificial intelligence comes in a number of different forms. Among these, generative AI, as its name suggests, stands out for its ability to generate content: text, images, video, etc. Some of the best-known current systems are ChatGPT, Bard, Midjourney and DALL-E.

Their principle is based on probabilities: they predict the next word or the neighbouring pixel, according to what seems most likely. To do this, generative AI relies on a large language model, i.e. a deep network of artificial neurons that has been trained on a vast amount of data. In this way, the software identifies the most likely matches according to the context.
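To make the idea of “predicting the most likely next word” concrete, here is a minimal sketch in Python. The vocabulary, the scores and the prompt are invented for illustration only; a real large language model assigns scores to tens of thousands of tokens using billions of learned parameters.

```python
import numpy as np

# Toy vocabulary and raw scores (logits) that a model might assign
# to the next word after the invented prompt "The cat sat on the".
vocab = ["mat", "roof", "table", "banana"]
logits = np.array([3.2, 1.5, 1.1, -2.0])  # made-up values

# The softmax function turns these scores into probabilities.
probs = np.exp(logits) / np.sum(np.exp(logits))
print(dict(zip(vocab, probs.round(3))))

# Greedy decoding: pick the single most likely next word.
print("next word:", vocab[int(np.argmax(probs))])
```

Chaining this prediction step word after word is, in essence, how generated sentences are built.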

Generative AI is intelligent – FALSE

This is how generative AI achieves such remarkable performance. These systems are able to establish links between multiple elements, drawing on an enormous volume of data. It is a complex process, involving a large number of mathematical operations carried out very quickly.

But can we really call this “intelligence”? While the results can be astonishing, the way in which they are achieved has nothing to do with human intelligence. Nor is it “general AI”, capable of learning any task performed by a human being. Today, generative AI is more like a multiplication of narrow AIs, brought together within the same model.

Generative AI can do anything – UNCERTAIN

Generative AI is currently used in many fields: some use it to create music, others video game landscapes… As for the language models initially used to capture the semantics of words, they can now generate text, answer questions, translate content, or even generate code. But these tools still have their limitations, linked in particular to the datasets used during their training. Correlations identified at this stage can lead to errors at the generation stage. In addition, any biases encountered during the training phase are reflected in the results. For example, a translation system will tend to translate “the nurse” (in English) as “l’infirmière” (in French, a feminine version of the word), because of the stereotypes associated with the profession.
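To see how such a bias can arise purely from training data, here is a deliberately simplified sketch: the parallel corpus below is entirely invented, and the “system” just follows the most frequent translation it has seen, which is enough to reproduce the stereotype described above.

```python
from collections import Counter

# Invented toy parallel corpus: English phrase -> observed French translation.
toy_corpus = [
    ("the nurse", "l'infirmière"),
    ("the nurse", "l'infirmière"),
    ("the nurse", "l'infirmière"),
    ("the nurse", "l'infirmier"),
]

# Count how often each French translation appears for "the nurse".
counts = Counter(fr for en, fr in toy_corpus if en == "the nurse")

# A purely frequency-based choice reproduces the majority pattern,
# regardless of what is appropriate in a given context.
print(counts.most_common(1)[0][0])  # -> "l'infirmière"
```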

Generative AIs are not always very stable. Try it out with ChatGPT: ask the same question but vary the wording, and you’ll sometimes get different answers! These systems are based on mathematical operations that transform information into high-dimensional vectors, which makes them difficult to explain. Research is currently underway on this subject.
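This instability can be reproduced in miniature. In the sketch below, the probability distribution over possible answers is invented; the point is that chat systems usually sample from such a distribution rather than always picking the most likely answer, so asking the “same” question several times can yield different replies.

```python
import numpy as np

rng = np.random.default_rng()

# Invented distribution over candidate answers to one fixed question.
answers = ["Paris", "Paris, the capital of France", "It is Paris"]
probs = [0.5, 0.3, 0.2]

# "Ask" the same question five times: sampling makes each run
# potentially different, even though the distribution never changes.
for _ in range(5):
    print(rng.choice(answers, p=probs))
```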

#2 Should we be wary of generative AI?

Generative AI can be wrong – TRUE

It is important to bear in mind that generative AI does not aim to deliver the truth, but to maximise plausibility, based on its training data. It sometimes produces false correlations between words. What’s more, if the training data contains errors or biases, the system will inevitably reproduce them. In any case, it does not seek to know whether the information provided is accurate or sourced! This leads to the frequent and unpredictable appearance of “hallucinations”, i.e. incorrect responses or incoherent images.

For example, according to a study by the University of Hong Kong1, ChatGPT (version GPT-3.5) has an accuracy rate of 64%. Would you take the word of someone who has more than a one in three chance of being wrong?

Generative AI will rebel and take over – FALSE

As soon as artificial intelligence seems to reach a new stage, fantasies of machine uprisings, influenced by science fiction, resurface. We shouldn’t indulge in excessive anthropomorphism: these systems simply predict probabilities, in admittedly complex ways. They do not feel emotions, nor do they have consciousness. So they cannot have a “will” to rebel.

In 2015, the American AI researcher Andrew Ng2 said that fearing a possible AI revolt was like “worrying about overpopulation on Mars”, when “we’ve never set foot on the planet before.” Even if the technology has evolved considerably in recent years, the comparison still rings true!

Generative AI raises security and confidentiality issues – UNCERTAIN

Today, we need to be aware that most generative AI models are hosted on American servers. Under the Patriot Act and the Cloud Act, the data sent can be retrieved by the American authorities. In addition, the data supplied to these generative AIs is undoubtedly reused to improve the models, making it possible for this data to resurface in future queries. This can therefore represent a risk, particularly for businesses, whose data security and confidentiality are under threat. There are, however, hosting solutions with dedicated, closed spaces, or open-source generative AI alternatives that can be installed on local servers.
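As a rough illustration of the “local server” alternative mentioned above, the sketch below uses the open-source Hugging Face transformers library to run a small open-weight model on one’s own machine; the model name here (gpt2) is only an example, and a company would in practice choose a model and infrastructure matching its confidentiality requirements.

```python
from transformers import pipeline

# Load a small open-weight model; after the initial download,
# text generation runs locally and prompts are not sent to a third party.
generator = pipeline("text-generation", model="gpt2")

prompt = "Our internal report on quarterly sales shows"
result = generator(prompt, max_new_tokens=30, do_sample=True)

print(result[0]["generated_text"])
```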

However, as is often the case, regulations eventually adapt to the new technological context. For example, at the end of 2023, the Council of the European Union and the European Parliament reached agreement on legislation on artificial intelligence (the AI Act)3. The text will undoubtedly be refined, but it will provide a better framework for the use of AI, in compliance with European law (including the GDPR).

#3 Generative AI: an assistant or a threat to workers?

Generative AI can replace humans for certain tasks – TRUE

The capabilities of generative AI make it very useful in the professional sphere. It can draft content, write lines of code, draw up a training plan, etc. But what it produces generally requires the human eye to check its accuracy, personalise the message, add a more sensitive touch, etc. It is therefore a tool for increasing productivity, freeing up time to work differently.

Some jobs could disappear, however, for lack of sufficient added value. But isn’t that always the case with technical progress? Didn’t lamplighters, for example, disappear with the advent of electric lighting?

Generative AI will put millions out of work – FALSE

Let’s remain measured, though, about the professional consequences of generative AI. At the end of the day, it is just a new tool, a very useful one, at the service of human beings. And changes in the labour market depend on many parameters… Have automatic checkouts completely replaced the need for cashiers? Has e-learning replaced schools and teachers?

What’s more, the rise of generative AI is likely to be accompanied by new professions, such as prompt engineering, a discipline that aims to optimise the queries formulated to the AI, so as to obtain the best possible results. According to the International Labour Organisation (ILO)4, “generative AI is more likely to increase than destroy jobs by automating certain tasks rather than replacing a role entirely”.

How far will generative AI go? – UNCERTAIN

What will happen in the long term? How will generative AI evolve? Predicting its future is tricky: who could have predicted the current situation a few years ago? Nevertheless, certain trends are emerging, such as the hybridisation of systems. For example, RAG (retrieval-augmented generation) involves combining generative AI with a search engine to improve the relevance of results and limit hallucinations.
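To give a rough idea of what retrieval-augmented generation looks like, here is a minimal sketch: the document collection, the keyword-overlap retriever and the generate() stub are all invented stand-ins, whereas real systems use vector search and an actual language model.

```python
# Minimal RAG sketch: retrieve relevant passages, then generate an
# answer grounded in them (every component here is a toy stand-in).

documents = [
    "The Eiffel Tower was completed in 1889 for the World's Fair.",
    "The Louvre is the world's most visited museum.",
    "Mont Blanc is the highest mountain in the Alps.",
]

def retrieve(question, docs, k=1):
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def generate(prompt):
    """Placeholder for a call to a generative model."""
    return f"[model answer based on prompt: {prompt!r}]"

question = "When was the Eiffel Tower completed?"
context = "\n".join(retrieve(question, documents))

# Grounding the prompt in retrieved text is what limits hallucinations.
answer = generate(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
print(answer)
```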

Finally, generative AI cannot develop without addressing its ecological footprint. Its models require an enormous amount of data and computing power. A new approach is already being explored to optimise the necessary resources: frugal AI.

Bastien Contreras
1. https://arxiv.org/pdf/2302.12095.pdf
2. https://www.wired.com/brandlab/2015/05/andrew-ng-deep-learning-mandate-humans-not-just-machines/
3. https://www.consilium.europa.eu/fr/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/
4. https://news.un.org/fr/story/2023/08/1137832
