Generative AI: threat or opportunity?

4 myths surrounding generative AI

with Thierry Rayna, Researcher at the CNRS i³-CRG* laboratory and Professor at École Polytechnique (IP Paris), and Erwan Le Pennec, Professor at École Polytechnique (IP Paris)
On April 3rd, 2024 |
5 min reading time
Key takeaways
  • Many myths and misconceptions surround AI, especially since the rise of generative AI such as DALL-E.
  • In reality, these types of AI do not represent a technological revolution from an innovation point of view, since they predate the advent of ChatGPT.
  • What we are witnessing is a change in usage, thanks to the start-ups that have “opened up” access to AI for the general public.
  • The training protocols for these types of AI are kept secret by the companies, but programming interfaces give users the illusion of mastering the algorithm.
  • Despite concerns, this wide and open use of AI will make human expertise more necessary than ever.

AI replacing a lawyer, AI writing essays good enough to fool teachers, AI putting artists out of work because anyone can generate a magazine cover or compose the music for a film… These examples have been making the headlines in recent months, not least the announcement of the imminent obsolescence of intellectual professions and executives. Yet AI is not exactly innovative: it has been around for a very long time. From the mid-1950s onwards, waves of concern and fanciful speculation followed one another, each time with the same prophecy: that humans would be definitively replaced by machines. And each time, these predictions failed to materialise. But now, as the use of these new AIs multiplies, is it legitimate to believe that things might be different?

#1: We’re witnessing a technological revolution

Many comments and news reports suggest that a major technological breakthrough has just taken place. But this is simply not the case. The algorithms used by ChatGPT or DALL‑E resemble those already in use for a number of years. If the innovation doesn’t lie in the algorithms, then perhaps there’s a major technological breakthrough that will make it possible to process large quantities of data in a more “intelligent” way? Not at all! The advances we’ve seen are the result of a relatively continuous and predictable progression. Even the much-discussed generative AI, i.e. the use of algorithms trained not to predict the absolute right answer, but to generate a variety of possible answers (hence the impression of “creativity”), is not new either – even if improved results are making it increasingly usable.


What has happened in recent months is not a technological revolution, but a revolution in the way the technology is used. Up until now, the AI giants (typically the GAFAMs) kept these technologies to themselves, thereby restricting use by the general public. The newcomers (OpenAI, Stable.AI or Midjourney) have, on the contrary, decided to let people do (almost) whatever they want with their algorithms. Henceforth, anyone can appropriate these “AIs” and use them for purposes as diverse as they are unpredictable. It is from this openness that the “real” creativity of this new wave of AI stems.

#2: GAFAM (and other “Big Tech” companies) are technologically outdated

As explained above, big companies such as Google, Apple and Facebook have also mastered these technologies, but they restrict access to them. GAFAM keep tight control of their AI for two main reasons. The first concerns their image: if ChatGPT or DALL‑E generates racist, discriminatory, or insulting content, the misstep will be excused by their position as start-ups, still in the process of learning. This “right to error” would not apply to Google, which would see its reputation seriously tarnished (not to mention the potential legal issues). The paradox is that Google (or any other GAFAM) can’t be as “good” as OpenAI, because Google can’t be as “bad” as OpenAI.

ChatGPT: You can’t see the wood for the trees

Alongside the buzz generated by ChatGPT, DALL‑E and OpenAI, a far more radical and less visible development is underway: the availability and widespread distribution of pre-trained AI modules to the general public. Unlike GPT, these are not dependent on a centralised platform. They are autonomous, and can be downloaded and trained for a variety of purposes (legal or otherwise). They can even be integrated into software, apps, or other services, and redistributed to other users, who can build on this additional learning to train the modules themselves for yet other purposes. Each time a pre-trained module is duplicated, trained and redistributed, a new variant is created. Eventually, thousands or even millions of variants of an initial module will spread across a staggering number of software programs and applications. And these AI modules are all “black boxes”. They are not made up of explicit lines of computer code, but of matrices (often very large ones) that are intrinsically uninterpretable, even by experts in the field. As a result, it is almost impossible, in practice, to accurately predict the behaviour of these AI systems without testing them extensively.
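The duplicate-retrain-redistribute cycle described above can be sketched in a few lines. This is a toy illustration under loud assumptions, not any real distribution mechanism: a “module” is reduced to a single weight matrix, and “training” to a small random perturbation, just to show how copies diverge into distinct variants whose training history is no longer readable from the weights themselves.

```python
import copy
import random

def make_module(seed: int, size: int = 4):
    """A stand-in for a pre-trained module: just a matrix of weights."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(size)] for _ in range(size)]

def retrain(module, seed: int, strength: float = 0.1):
    """Duplicate a module and nudge its weights, as a user's fine-tuning might."""
    rng = random.Random(seed)
    variant = copy.deepcopy(module)
    for row in variant:
        for j in range(len(row)):
            row[j] += rng.gauss(0, strength)
    return variant

base = make_module(seed=0)
# Each redistribution-and-retraining step spawns a new variant...
variants = [retrain(base, seed=s) for s in range(3)]
# ...and retraining a variant spawns second-generation variants in turn.
second_gen = [retrain(v, seed=99) for v in variants]

# The base module is untouched, but every variant now differs from it and
# from its siblings: nothing in the matrices records which training
# history produced which variant.
assert all(v != base for v in variants)
assert variants[0] != variants[1]
```

The point of the sketch is the last two lines: even in this miniature setting, the only way to know what a variant does is to inspect or test it, since the weights carry no trace of who trained it, on what, or in what order.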

The second reason is strategic. Training and developing AI algorithms is incredibly expensive (we’re talking millions of dollars). This staggering cost is an advantage for the already well-established GAFAMs. Opening up access to their AI means giving up this competitive advantage. This situation is paradoxical, given that these same companies developed by liberating the use of technologies (search engines, web platforms, e‑commerce and application SDKs) while other established players of the time kept them under tight control. Now that this market is being explored by new players, the GAFAMs are racing to offer the market their “ChatGPT” (hence the new version of Microsoft Bing with Copilot, and Google Gemini).

#3: OpenAI is open AI

Another myth that’s important to dispel is the openness of start-up AI. The use of their technology is, indeed, fairly widely open. For example, ChatGPT’s “GPT API” allows anyone (for a fee) to send queries to the algorithms from their own software. But despite this accessibility, the AI remains closed: there’s no question here of open or collective learning. Updates and new learning are carried out exclusively by OpenAI. Most of these updates and protocols are kept secret by the start-ups.
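By way of illustration, querying such an API amounts to sending a structured payload to the provider’s endpoint. The sketch below only constructs the request body (no network call, no API key); the endpoint and model name are assumptions based on OpenAI’s public documentation. The point is that the caller controls the query and a few sampling knobs, never the training.

```python
import json

# Hypothetical endpoint, for illustration only; no request is sent here.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_query(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Build the JSON body of a chat-completion request.

    The user chooses the prompt and sampling parameters; everything about
    how the model was trained stays on the provider's side.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,  # a sampling knob, not a training knob
    }
    return json.dumps(payload)

body = build_query("Summarise the four myths about generative AI.")
print(body)
```

Nothing in this payload can touch the model’s weights or training data, which is exactly the distinction the paragraph above draws between open access and open learning.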

If the training of GPT (and its ilk) were open and collective, we would undoubtedly see battles (using “bots”, for example) to influence the learning of the algorithm. Wikipedia, the collaborative encyclopaedia, has similarly faced attempts for years to influence what is presented as the “collective truth”. There is also the question of the right to use data.

Keeping AI systems closed seems to make sense. But in reality, it raises the fundamental question of the veracity of content. The quality of the information is uncertain. Training that is biased, partial, or simply poor could lead to dangerous “behaviour”. As the general public is unable to assess these parameters, the success of AI depends on the trust people place in companies – as is already the case with search engines and other “big tech” algorithms.

This “open” AI completely redefines questions of ethics, responsibility and regulation. These pre-trained modules are easy to share and, unlike centralised AI platforms like OpenAI’s GPT, are almost impossible to regulate. In the event of an error, for example, would we be able to determine exactly which part of the learning process was the cause? Was it the initial learning, or one of the hundreds of subsequent learning sessions? Was it the fact that the machine was trained by different people?

#4: Many people will lose their jobs

Another myth surrounding this “new AI” concerns its impact on employment. Generative AI, like older AI, is discriminative. As good as it may seem, this AI only replaces a competent beginner (except that this beginner cannot learn!); it will never replace the expert or the specialist. ChatGPT or DALL‑E can produce very good “drafts”, but these still need to be checked, selected and refined by a human.

With ChatGPT, what’s impressive is the assurance with which it responds. In reality, the intrinsic quality of the results is debatable. The explosion of information, content and activity that will result from the wide and open use of AI will make human expertise more necessary than ever. Indeed, this has been the rule with the “digital revolutions”: the more we digitise, the more human expertise becomes necessary. However, uncertainty remains as to how disruptive this second wave of AI will be for businesses.
