Generative AI: threat or opportunity?

ChatGPT, Midjourney: everything you need to know about generative AI

Éric Moulines, Professor of Statistical Learning at École Polytechnique (IP Paris), Hatim Bourfoune, AI research engineer at IDRIS (CNRS) and Pierre Cornette, AI support engineer at IDRIS (CNRS)
On November 21st, 2023
5 min reading time
Key takeaways
  • Generative AI can create content from a database it has ingested, following the instructions it is given.
  • These technologies are still new and under active development; several areas for improvement remain, such as reliability and biases in the training data.
  • ChatGPT and Bloom are just two generative AI models, but the concept extends to a multitude of applications.
  • These technologies also raise questions, such as their ecological impact and the risk of malicious use.

It’s all the talk these days, and ChatGPT is arriving in our societies like a veritable revolution. So, given the wide-ranging applications of these tools, it is hardly surprising that their arrival is fuelling so much debate. But do we really know how this AI works?

Generative AI can produce written, visual, or audio content after ingesting existing content. Given instructions as input, it can create output that matches those instructions. “Here, we’re looking to generate original content,” explains Éric Moulines, Professor of Statistical Learning at École Polytechnique (IP Paris). “This original content will be obtained by generalising the information seen during learning”.

There are currently two main types of generative AI model: GPTs (Generative Pre-trained Transformers), such as ChatGPT, and diffusion models. “By giving it text as input, the AI will be able to understand the context through a mechanism called attention,” adds Hatim Bourfoune, AI research engineer at IDRIS (CNRS). “Its output will therefore be a list of all the words in the dictionary it knows [learned during the training phase], each with a probability placed on it”. Depending on the database it has been trained on, the tool can be programmed for various functions.
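
To illustrate that next-word probability distribution, here is a minimal sketch using the Hugging Face transformers library, with the small, openly available GPT-2 checkpoint standing in for larger models such as ChatGPT or Bloom; the prompt and the choice of checkpoint are assumptions of the example, not details from the models described above.

```python
# A minimal sketch of how a GPT-style model assigns a probability to each
# possible next word, using the small "gpt2" checkpoint as a stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Generative AI can create"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The last position holds the scores for the *next* token;
# softmax turns them into a probability over the whole vocabulary.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item()):>12} {prob.item():.3f}")
```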

Bloom, for example, the AI developed by the IDRIS team of which Hatim Bourfoune is a member, is a tool that helps researchers express themselves in several languages. “The primary aim of the Bloom model,” adds Pierre Cornette, also a member of the IDRIS team, “is to learn a language. To do this, we give it a whole bunch of texts to ingest, asking it to predict the next word in the given text, and we correct it when it gets it wrong”.
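
That “predict the next word, then get corrected” loop is, in essence, a cross-entropy training objective. The sketch below illustrates it with the transformers library and a small GPT-2 stand-in (Bloom itself was trained at a vastly larger scale); the example text and the choice of checkpoint are assumptions made purely for illustration.

```python
# A minimal sketch of the training signal described above: the model predicts
# the next token at every position and is "corrected" via a cross-entropy loss.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # small stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "Large language models learn by predicting the next word."
inputs = tokenizer(text, return_tensors="pt")

# Passing the input ids as labels makes the library shift them by one position
# and compute the next-token cross-entropy automatically.
outputs = model(**inputs, labels=inputs["input_ids"])
print(f"next-token cross-entropy loss: {outputs.loss.item():.3f}")

# During training, this loss would be backpropagated, e.g.:
# outputs.loss.backward(); optimizer.step()
```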

A recent, still immature technology

“The first generative AI models are not even 10 years old,” explains Éric Moulines. “The first revolution in this field was the arrival of transformers – a technology perfecting this attention mechanism – in 2017. Four years later, we already have commercial products. So, there has been considerable acceleration, much faster than with any other Deep Learning model.” Models like ChatGPT are therefore still very new, and there are still many things that can, or must, be improved.

The reliability of the answers it gives is still far from certain: “ChatGPT is not familiar with the notion of reliability”, admits the professor. “This type of AI is incapable of assessing the veracity of the answers it gives.” This leaves room for an easily observable phenomenon known as ‘hallucinations’. “It is possible [for ChatGPT] to generate content that seems plausible, but is completely false,” he adds. “It uses completely probabilistic reasoning to generate sequences of words. Depending on the context, it will generate the strings of words that seem most likely.”
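
Here is a minimal sketch of that purely probabilistic behaviour, again using the small GPT-2 checkpoint as a stand-in; the deliberately nonsensical prompt and the generation settings are assumptions of the example. Nothing in the sampling loop checks whether the premise, or the continuation, is actually true.

```python
# A minimal sketch of "hallucination by construction": the model keeps sampling
# whichever next word looks likely in context, with no mechanism for truth-checking.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # small stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The French physicist who discovered the Atlantic Ocean was"
inputs = tokenizer(prompt, return_tensors="pt")

output_ids = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=True,      # sample from the probability distribution...
    temperature=0.9,     # ...rather than always taking the single most likely word
    pad_token_id=tokenizer.eos_token_id,
)
# The continuation will read fluently even though the premise is false.
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```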

Apart from its ability to invent book titles, other limitations should be borne in mind when using it. By applying Deep Learning methods, these AIs go through a training phase during which they ingest a large quantity of existing texts. In this way, they incorporate the biases of this database into their learning. Geopolitical questions are a good example of this. “If you ask it geopolitical questions, ChatGPT will essentially reflect the Western world,” says Éric Moulines. “If we show the answers given to a Chinese person, they will certainly not agree with what is said about the sovereignty of such and such a country over a given territory.”

A range of applications

Each model will therefore generate content according to the database it has been trained on. This is perhaps where the magic of this technology lies, because, knowing this, a myriad of applications can be created. “A good analogy for this technology would be that of an engine,” says Pierre Cornette. “You could have a very powerful engine, but it can be used for either a tractor or a racing car.” For example, ChatGPT is a racing car, and its engine is GPT‑4. “The advantage is that the technologies are concentrated in what is the engine,” he continues, “and you don’t need to understand how it works to use the racing car.”
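
To make the analogy concrete, here is a hypothetical sketch of “driving the racing car”: calling a GPT‑4-class model through OpenAI’s Python client without knowing anything about the transformer machinery inside. It assumes the openai package is installed and an API key is configured in the environment; the prompt is invented for illustration.

```python
# A hedged sketch: using the "engine" (a GPT-4-class model) through a simple API.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4",  # the "engine"; the application built on top is the "racing car"
    messages=[{"role": "user", "content": "Explain the attention mechanism in one sentence."}],
)
print(response.choices[0].message.content)
```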

Bloom is an example of another use for this type of model: “A year ago, Bloom was one of the only models that was completely open to research,” insists Hatim Bourfoune. In other words, anyone could download the model and use it for their own research. Trained on a database of scientific articles in many languages, this model can be extremely useful for scientific research. Pierre Cornette adds: “There is also another project, BigCode, run by the same people, which promotes a model specialising in computer code. We ask it for a function, simply describing its action, and it can write it for us in the desired language.”
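
As a hedged illustration of that workflow, the sketch below asks a code-specialised model to complete a function from a plain-language description. The checkpoint name, prompt, and generation settings are assumptions of this example; BigCode has released several open models, and any of them could be substituted.

```python
# A minimal sketch: describe a function in a comment and let a code-specialised
# model write the body. The checkpoint name is an assumption of this example.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="bigcode/santacoder",   # one of BigCode's openly released code models
    trust_remote_code=True,       # this checkpoint ships custom model code
)

prompt = "# Python function that returns the first n Fibonacci numbers\ndef fibonacci(n):"
completion = generator(prompt, max_new_tokens=80)[0]["generated_text"]
print(completion)
```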

The popularity of ChatGPT shows just how much interest it holds for the general public. Microsoft has also integrated it into its Bing search engine with a view to competing with Google. This integration makes it possible to counter one of the limitations of the technology: the reliability of its answers. By citing the sources used to compile a response, the search engine makes it easier to understand and verify it. Even more recently, Adobe has integrated a generative AI model into various applications in its suite (such as Photoshop and Illustrator), revealing yet another impressive application of this technology.

“An exciting future”

All this can only mean an exciting future for this innovation. However, the range of applications raises questions about its possible uses. “As with all tools, there can be malicious uses,” admits Hatim Bourfoune. “That’s why companies like OpenAI put up different security barriers.” Today, many of the questions put to ChatGPT go unanswered, because the AI considers that they violate its content policy.

Even so, this technology is still in its infancy. “That’s the principle of research – we’re still at ground zero,” says Éric Moulines. “It’s amazing that it even works.” There are still many gaps to be filled, particularly from a legal point of view. As explained, the content generated by these tools is built from an existing database. The AI will therefore “copy” existing texts or works without citing their original authors. “This poses a major problem,” he continues, “because the rights holders of the content used to generate these new images [or texts] are not respected.”

Despite its various limitations, the potential remains enormous: “What excites me … is that the progress still to be made is enormous,” adds the professor. “The trend, and the pace at which it is moving, are enormous. It’s happening very quickly and there’s very exciting competition around these subjects.” Speaking of derivative uses, Bloom illustrates this perfectly. Useful for research, it is also a linguistic tool that could make it possible to preserve dead languages, and also to translate scientific texts into lesser-spoken languages to facilitate the dissemination of research.

However, this “exciting” future may be hampered by a considerable carbon footprint. “These models require a lot of memory, because they need to store a huge amount of data,” explains Éric Moulines. “Today, we estimate that OpenAI consumes as much energy as the electricity grid of a country like Belgium.” This is probably the problem that will be the most complicated to solve.

Pablo Andres
