The impact of AI and robotics on unemployment
Generative AI: threat or opportunity?

4 myths surrounding generative AI

Thierry Rayna, Researcher at the CNRS i³-CRG* laboratory and Professor at École Polytechnique (IP Paris) and Erwan Le Pennec, Professor at École Polytechnique (IP Paris)
On April 3rd, 2024 | 5 min reading time
Key takeaways
  • Many myths and misconceptions surround AI, especially since the rise of generative AI such as DALL-E.
  • In reality, these types of AI do not represent a technological revolution from an innovation point of view, since the underlying technologies predate the advent of ChatGPT.
  • What we are witnessing is a change in usage, thanks to the start-ups that have “opened up” access to AI for the general public.
  • The training protocols for these types of AI are kept secret by the companies, but programming interfaces give users the illusion of mastering the algorithm.
  • Despite concerns, this wide and open use of AI will make human expertise more necessary than ever.

AI replacing a lawyer, AI writing essays so good they can fool teachers, AI putting artists out of work because anyone can generate the cover of a magazine or compose the music for a film… These examples have been making the headlines in recent months, not least the announcement of the imminent obsolescence of intellectual professions and executives. Yet AI is not exactly new: it has been around for a very long time. From the mid-1950s onwards, waves of concern and fanciful speculation followed one another, each time with the same prophecy: that humans would be definitively replaced by machines. And yet, each time, these predictions failed to materialise. This time, however, as the use of these new AIs multiplies, is it legitimate to believe that things might be different?

#1: We’re witnessing a technological revolution

Many comments and news reports suggest that a major technological breakthrough has just taken place. But this is simply not the case. The algorithms used by ChatGPT or DALL-E resemble those already in use for a number of years. If the innovation doesn’t lie in the algorithms, then perhaps there is a major technological breakthrough that makes it possible to process large quantities of data in a more “intelligent” way? Not at all! The advances we have seen are the result of a relatively continuous and predictable progression. Even the much-discussed generative AI, i.e. the use of algorithms trained not to predict the one right answer, but to generate a variety of possible answers (hence the impression of “creativity”), is not new either – even if improved results are making it increasingly usable.
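
To make that distinction concrete, here is a minimal, purely illustrative Python sketch of the sampling step that makes generative models return varied, plausible answers rather than a single “correct” one. The vocabulary and scores below are invented for illustration; this is not the code of any real model.

```python
# Minimal sketch: why a generative model can return a different, plausible
# answer each time instead of one "right" answer. Vocabulary and scores are
# invented for illustration.
import numpy as np

rng = np.random.default_rng()

vocabulary = ["blue", "grey", "overcast", "cloudless"]
# Hypothetical scores a language model might assign to each candidate next
# word after the prompt "The sky today is ...".
scores = np.array([2.1, 1.3, 1.1, 0.4])

def sample_next_word(scores, temperature=0.8):
    """Turn scores into probabilities (softmax) and draw one word at random."""
    scaled = scores / temperature          # lower temperature = less variety
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(vocabulary, p=probs)

# Running this several times gives different yet plausible continuations,
# which is where the impression of "creativity" comes from.
print([sample_next_word(scores) for _ in range(5)])
```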

What has happened in recent months is not a technological revolution, but a revolution in the way the technology is used. Up until now, the AI giants (typically the GAFAMs) kept these technologies to themselves, thereby restricting use by the general public. The newcomers (OpenAI, Stability AI or Midjourney) have, on the contrary, decided to let people do (almost) whatever they want with their algorithms. Henceforth, anyone can appropriate these “AIs” and use them for purposes as diverse as they are unpredictable. It is from this openness that the “real” creativity of this new wave of AI stems.

#2: GAFAM (and other “Big Tech” companies) are technologically outdated

As explained above, big companies such as Google, Apple and Facebook have also mastered these technologies, but they restrict access to them. GAFAM keep tight control of their AI for two main reasons. The first is image: if ChatGPT or DALL-E generates racist, discriminatory or insulting content, the misstep will be excused by its creators’ position as start-ups, still in the process of learning. This “right to error” would not apply to Google, which would see its reputation seriously tarnished (not to mention the potential legal issues). The paradox is that Google (or any other GAFAM) can’t be as “good” as OpenAI, because Google can’t be as “bad” as OpenAI.

ChatGPT: You can’t see the wood for the trees

Alongside the buzz generated by ChatGPT, DALL-E and OpenAI, a far more radical and less visible development is underway: the availability and widespread distribution of pre-trained AI modules to the general public. Unlike GPT, these are not dependent on a centralised platform. They are autonomous and can be downloaded and trained for a variety of purposes (legal or otherwise). They can even be integrated into software, apps or other services, and redistributed to other users, who can build on this additional learning to train the modules themselves for other purposes. Each time a pre-trained module is duplicated, trained and redistributed, a new variant is created. Eventually, thousands or even millions of variants of an initial module will spread across a staggering number of software programs and applications. And these AI modules are all “black boxes”. They are not made up of explicit lines of computer code, but of matrices (often very large ones) that are intrinsically uninterpretable, even by experts in the field. As a result, it is almost impossible, in practice, to accurately predict the behaviour of these AI systems without testing them extensively.
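
As a rough illustration of this duplicate-train-redistribute cycle, here is a minimal PyTorch sketch. The tiny network, the placeholder data and the file names are hypothetical; real pre-trained modules (vision or language models) follow the same pattern, only with far larger weight matrices.

```python
# Minimal sketch of the "download, fine-tune, redistribute" cycle described
# above. Network, data and file names are hypothetical placeholders.
import torch
import torch.nn as nn

# 1. "Download": in practice you would load weights someone else trained, e.g.
#    model.load_state_dict(torch.load("pretrained_module.pt"))  # hypothetical file
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

# 2. Fine-tune on your own data, for your own purpose.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(64, 16), torch.randint(0, 2, (64,))      # placeholder data
for _ in range(10):
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()

# 3. "Redistribute": save the new variant; the next person repeats the cycle.
torch.save(model.state_dict(), "my_variant.pt")

# What is shared is only tensors of numbers (the "matrices" mentioned above),
# not readable rules: inspecting a weight matrix says almost nothing about how
# the module will behave.
print(model[0].weight.shape)   # e.g. torch.Size([32, 16])
```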

The second reason is strategic. Training and developing AI algorithms is incredibly expensive (we’re talking millions of dollars). This staggering cost is an advantage for the already well-established GAFAMs. Opening up access to their AI means giving up this competitive advantage. The situation is paradoxical, given that these same companies grew by opening up the use of technologies (search engines, web platforms, e-commerce and application SDKs) while other established players of the time kept them under tight control. Now that this market is being explored by new players, the GAFAMs are racing to offer the market their own “ChatGPT” (hence the new version of Microsoft Bing with Copilot, and Google Gemini).

#3: OpenAI is open AI

Another myth that is important to dispel is the openness of start-up AI. The use of their technology is, indeed, fairly widely open. For example, OpenAI’s “GPT API” allows anyone (for a fee) to embed queries to the algorithms in their own software. But despite this accessibility, the AI itself remains closed: there is no question here of open or collective learning. Updates and new training are carried out exclusively by OpenAI. Most of these updates and protocols are kept secret by the start-ups.
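
To illustrate this “open use, closed training” situation, here is roughly what such a query looks like with OpenAI’s official Python client (a minimal sketch: it requires a paid API key, and the model name is given only as an example). You can send prompts and read answers, but nothing in this interface lets you see or change how the model was trained.

```python
# Minimal sketch of a query via OpenAI's official Python client (openai >= 1.0).
# Requires a paid API key in the OPENAI_API_KEY environment variable; the model
# name below is only an example.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[{"role": "user",
               "content": "Explain, in one sentence, why generative AI feels creative."}],
)

# The user only ever sees the generated text: the training data, the update
# schedule and the model weights all stay on OpenAI's side.
print(response.choices[0].message.content)
```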

If the training of GPT (and its ilk) were open and collective, we would undoubtedly see battles (using “bots”, for example) to influence the learning of the algorithm. Similarly, on Wikipedia, the collaborative encyclopaedia, there have been attempts for years to influence what is presented as the “collective truth”. There is also the question of who has the right to use which data.

Keeping AI systems closed therefore seems to make sense. But in reality, it raises the fundamental question of the veracity of content. The quality of the information is uncertain, and training that is biased, partial or simply poor could lead to dangerous “behaviour”. As the general public is unable to assess these parameters, the success of AI depends on the trust they place in companies – as is already the case with search engines and other “big tech” algorithms.

This “open” AI completely redefines questions of ethics, responsibility and regulation. These pre-trained modules are easy to share and, unlike centralised AI platforms like OpenAI’s GPT, are almost impossible to regulate. For instance, in the event of an error, would we be able to determine exactly which part of the learning process was the cause? Was it the initial training, or one of the hundreds of subsequent training sessions? Was it the fact that the module was trained by different people?

#4: Many people will lose their jobs

Another myth surrounding this “new AI” concerns its impact on employment. Generative AI, like older AI, is discriminative. As good as it may seem, this AI only replaces a competent beginner (except that this beginner cannot learn!); however good it appears, it will never replace the expert or the specialist. ChatGPT or DALL-E can produce very good “drafts”, but these still need to be checked, selected and refined by a human.

With ChatGPT, what is impressive is the assurance with which it responds. In reality, the intrinsic quality of the results is debatable. The explosion of information, content and activity that will result from the wide and open use of AI will make human expertise more necessary than ever. Indeed, this has been the rule with every “digital revolution”: the more we digitise, the more necessary human expertise becomes. However, uncertainty remains as to how disruptive this second wave of AI will be for businesses.
