How can artificial intelligence be regulated?

Are we moving towards global regulation of AI?

with Henri Verdier, Ambassador for the Digital Sector and founding member of the Cap Digital competitiveness cluster
On May 14th, 2024 | 5 min reading time
Henri Verdier
Ambassador for the Digital Sector and founding member of the Cap Digital competitiveness cluster
Key takeaways
  • AI technology shows enormous promise, but there are a number of pitfalls associated with its use, including deep fakes, human rights violations and the manipulation of public opinion.
  • This constantly evolving multi-purpose tool is prompting intense global reflection on a framework for shared governance.
  • Increasingly, new AI technologies threaten users’ privacy and intellectual property, and require shared governance.
  • Europe fears that by regulating AI on its own, it will be weakened and overtaken by other powers.
  • In 2025, France will be hosting a major international summit on AI, which will help move these issues forward.
  • Although the technology is evolving rapidly, it is possible to regulate AI in the long term on the basis of fundamental and robust principles.

What are the issues that need to be considered when regulating artificial intelligence?

There are a number of issues to consider. First, there is the fear of an existential risk, fuelled by stories of an AI that could become autonomous and destroy humanity. However, I don’t see this as a real possibility – at least not with the models being developed right now. Then there’s the fear of an oligopoly, with heavy dependence on a handful of companies. There is also the risk of human rights violations. And here a new factor comes into play: for a long time, the most terrible actions were reserved for the governments and armies of the most powerful states. Now it’s commonplace, mass-market technology. The same AI that can detect cancer could also be used to prevent part of the population from entering airports. We have rarely seen such multi-purpose technologies. AI also makes malicious acts possible, such as deep fakes, attacks on our systems, or the manipulation of opinion. Not to mention the imbalances it will create in terms of intellectual property, protection of the public domain, protection of privacy, and upheavals in the workplace. All these challenges, coupled with the desire to really benefit from AI’s boundless promise, have prompted intense global reflection on a framework for shared governance.

What do you think is the greatest risk?

The greatest risk, in my opinion, is that of a monopoly. It’s a question of democracy. One of the great dangers is that the global economy, society and the media will become ultra-dependent on a small oligopoly that we are unable to regulate. In line with this analysis, I’m trying to push for the protection of the digital commons and open source. We need to make sure there are models that are free to use so that everyone can benefit from them. There are already high-quality open source models. So, the question is: are we going to put enough public money into training them for the benefit of everyone? In my opinion, that’s where the real battle lies: ensuring that there are resources for the public good and that it will be possible to innovate without asking for permission.

Is there particular international attention being paid to certain risks associated with AI?

Among the aspects that could lead to international coordination are the indirect effects of AI. These technologies could disrupt the way in which we have constructed the protection of privacy. Today, the general principle is to protect personal data in order to protect individuals. With AI, by using predictive models, we can learn a great deal, if not everything, about a person. Predictive models can take into account a person’s age, where they live and where they work, and give a very good probability of their risk of cancer or their likelihood of liking a particular film. Another issue being debated at an international level is that of intellectual property. It is now possible to ask an AI to produce paintings in the style of Keith Haring and to sell them, which poses a problem for the artist’s estate and rights holders.

Is there a real awareness of the need for international regulation?

Ten years ago, there was a tacit agreement not to regulate social networks. Now these companies are worth $1,000bn and it’s very difficult to change their trajectory. Most developed countries are telling themselves that they won’t make the same mistake again and that they mustn’t miss the boat this time. But that presupposes knowing what to do. There is a new awareness that regulation must be anchored in an international framework. There are so few borders in the digital world that you can set up in any country and still have a presence in another. Clearly, there is intense international activity around the major issues mentioned above: existential risk, the question of economic sovereignty and malicious actors.

Is the international level the only relevant one for regulating artificial intelligence?

No. The “regional” level (i.e. a coherent group of countries) is also very important. Europe has learned the hard way that regulating digital uses at national level alone is not enough to bring the giants to heel. When we establish a European framework, they negotiate. But that creates other tensions, and we don’t want to encourage an international order based on decisions with extra-territorial application. So the idea that the international level is the right scale for thinking about the digital world has taken hold, and it is no longer really questioned.

Under the law, we have the right to prohibit the development of certain AI systems. But we are afraid that other powers will continue to develop them, and that we will become weak and obsolete. It’s a question of being both pro-innovation and pro-security for citizens, and that’s why everyone would like decisions to be collective. These technologies are changing very fast and creating a lot of power, so we don’t want unilateral disarmament.

What progress has been made on the development and implementation of this framework?

The ethical framework is being discussed. Discussions are taking place in dozens of forums: within business, in civil society, among researchers, at the UN, the G7, the OECD, and as part of the French initiative for a Global Partnership on Artificial Intelligence. It is also a diplomatic effort coming out of the embassies, with debates at the Internet Governance Forum and the annual summit on human rights, RightsCon. Little by little, ideas are taking root or becoming established. We are still in the process of identifying the concepts on which an agreement will be based. An initial consensus is emerging around certain principles: no use should be made that is contrary to human rights; the technology must be in the interest of its users; it must be proven that precautions have been taken to ensure there is no bias in the training of the models; and there must be transparency so that experts can audit the models. Then it will be time to look for treaties.

There are also debates about a democratic framework. How can we ensure that these companies are not manipulating us? Do we have the right to know what data the AI has been trained with? Questions of security in the face of existential risk were much discussed in the UK at last year’s world summit. Now the conversation is turning to the future of work and intellectual property, for example. In 2025, France will be hosting a major international summit on AI, which will help move these issues forward.

Is reflection regarding AI moving as fast as the technology?

Many people think not. Personally, I think that a good legal text is fundamental. The Declaration of Human Rights is still valid, even though technologies have changed. The 1978 Data Protection Act has become the GDPR, but the principle of user consent before data is circulated has not aged a day. If we can find robust principles, we can produce texts that will stand the test of time. I think we could regulate AI with the GDPR, the rules on the responsibility of the media and content publishers, and two or three other texts that already exist. It’s not a given that we need an entirely new framework.

Sirine Azououai
