How can artificial intelligence be regulated?

Are we moving towards global regulation of AI?

with Henri Verdier, Ambassador for the Digital Sector and founding member of the Cap Digital competitiveness cluster
On May 14th, 2024 | 5 min reading time
Key takeaways
  • AI technology shows enormous promise, but there are a number of pitfalls associated with its use, including deepfakes, human rights violations and the manipulation of public opinion.
  • This constantly evolving multi-purpose tool is prompting intense global reflection over a framework for shared governance.
  • Increasingly, new AI technologies threaten users’ privacy and intellectual property, and require shared governance.
  • Europe fears that by regulating AI on its own, it will be weakened and overtaken by other powers.
  • In 2025, France will host a major international summit on AI, which should help move these issues forward.
  • Although the technology is evolving rapidly, it is possible to regulate AI in the long term on the basis of fundamental and robust principles.

What are the issues that need to be considered when regulating artificial intelligence?

There are a number of issues to consider. First, there is the fear of an existential risk, fuelled by stories of an AI that could become autonomous and destroy humanity. However, I don’t see this as a real possibility – at least not with the models being developed right now. Then there’s the fear of an oligopoly, with heavy dependence on a handful of companies. There is also the risk of human rights violations. And here a new factor comes into play: for a long time, the most terrible actions were reserved for the states and armies of the most powerful countries. Now it’s commonplace, mass-market technology. The same AI that can detect cancer could also be used to prevent part of the population from entering airports. We have rarely seen such multi-purpose technologies. AI also makes malicious acts possible, such as deepfakes, attacks on our systems, or the manipulation of opinion. Not to mention the imbalances it will create in terms of intellectual property, protection of the public domain, protection of privacy, and upheavals in the workplace. All these challenges, coupled with the desire to really benefit from AI’s boundless promise, have prompted intense global reflection on a framework for shared governance.

What do you think is the greatest risk?

The greatest risk, in my opinion, is that of a monopoly. It’s a question of democracy. One of the great dangers is that the global economy, society and the media will become ultra-dependent on a small oligopoly that we are unable to regulate. In line with this analysis, I’m trying to push for the protection of the digital commons and open source. We need to make sure that there are models that are free to use, so that everyone can benefit from them. There are already high-quality open-source models. So the question is: are we going to put enough public money into training them for the benefit of everyone? In my opinion, that’s where the real battle lies: ensuring that there are resources for the public good, and that it remains possible to innovate without asking for permission.

Is there particular international attention being paid to certain risks associated with AI?

Among the aspects that could lead to international coordination are the indirect effects of AI. These technologies could disrupt the way in which we have constructed the protection of privacy. Today, the general principle is to protect personal data in order to protect individuals. With AI, by using predictive models, we can learn a great deal, if not everything, about a person. A predictive model can take a person’s age, where they live and where they work, and give a very good probability of their risk of cancer or their likelihood of liking a particular film. Another issue being debated at an international level is that of intellectual property. It is now possible to ask an AI to produce paintings in the style of Keith Haring and to sell them, which poses a problem for the artist’s estate.
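
To make this concrete, here is a minimal, hypothetical sketch – not drawn from the interview – of how a predictive model can turn mundane attributes into sensitive inferences. All the data, feature names and the “risk” label below are synthetic assumptions; the code uses scikit-learn’s standard pipeline API.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
n = 1000

# Synthetic records: only mundane attributes, no medical data at all.
people = pd.DataFrame({
    "age": rng.integers(20, 80, size=n),
    "district": rng.choice(["north", "south", "east", "west"], size=n),
    "job": rng.choice(["office", "outdoor", "industrial"], size=n),
})
# Fabricated ground truth for the toy world: risk tracks age and occupation.
risk = ((people["age"] > 55) & (people["job"] == "industrial")).astype(int)

model = Pipeline([
    ("encode", ColumnTransformer(
        [("categories", OneHotEncoder(), ["district", "job"])],
        remainder="passthrough",  # "age" passes through as a numeric feature
    )),
    ("classifier", LogisticRegression(max_iter=1000)),
])
model.fit(people, risk)

# The fitted model assigns a probability to a person it has never seen,
# inferred purely from age, address and occupation.
person = pd.DataFrame([{"age": 62, "district": "north", "job": "industrial"}])
print(f"estimated risk: {model.predict_proba(person)[0, 1]:.2f}")
```

The point is not the toy model’s accuracy but the mechanism: no medical record is ever collected, yet the output is effectively health information, which is exactly what strains privacy frameworks built around protecting the data rather than the inference.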

Is there a real awareness of the need for international regulation?

Ten years ago, there was a tacit agreement not to regulate social networks. Now these companies are worth $1,000bn and it’s very difficult to change their trajectory. Most developed countries are telling themselves that they won’t make the same mistake again and that they mustn’t miss the boat this time. But that assumes they know what to do. There is a new awareness that regulation must be anchored in an international framework. There are so few borders in the digital world that you can set up in one country and still have a presence in another. Clearly, there is intense international activity around the major issues mentioned above: existential risk, the question of economic sovereignty and malicious actors.

Is the international level the only relevant one for regulating artificial intelligence?

No. The “regional” level (i.e. a coherent group of countries) is also very important. Europe has learned the hard way that regulating digital uses at the national level alone is not enough to bring the giants to heel. When we establish a European framework, they negotiate. But that creates other tensions, and we don’t want to encourage an international order based on extra-territorial application decisions. So the idea that this regional level is the right scale for thinking about the digital world has taken hold, and it is no longer really questioned.

Under the law, we have the right to prohibit the development of certain AI. But we are afraid that other powers will carry on developing it, and that we will become weak and obsolete. It’s a question of being both pro-innovation and pro-security for citizens, and that’s why everyone would like decisions to be collective. These technologies are changing very fast and creating a lot of power, so we don’t want unilateral disarmament.

What progress has been made on the development and implementation of this framework? 

The ethical framework is being discussed. Discussions are taking place in dozens of forums: within business, in civil society, among researchers, at the UN, the G7, the OECD, and as part of the French-initiated Global Partnership on Artificial Intelligence (GPAI). It’s also a diplomatic effort coming out of the embassies, with debates at the Internet Governance Forum and at RightsCon, the annual summit on human rights. Little by little, ideas are taking root or becoming established. We are still in the process of identifying the concepts on which an agreement will be based. An initial consensus is emerging around certain principles: no use of AI that is contrary to human rights; technology that serves the interests of its users; demonstrable precautions against bias in the training of the models; and transparency, so that experts can audit the models. Then it will be time to work towards treaties.

There are also debates about a democratic framework. How can we ensure that these companies are not manipulating us? Do we have the right to know what data an AI has been trained on? Questions of safety in the face of existential risk were much discussed at last year’s world summit in the UK (the AI Safety Summit at Bletchley Park). Now the conversation is turning to the future of work and intellectual property, for example. In 2025, France will host a major international summit on AI, which will help move these issues forward.

Is reflection regarding AI moving as fast as the technology?

Many people think not. Personally, I think that a good text is fundamental. The Declaration of Human Rights is still valid even though technologies have changed. France’s 1978 Data Protection Act has become the GDPR, but the principle of user consent for data to be circulated has not aged a day. If we can find robust principles, we can produce texts that will stand the test of time. I think we could regulate AI with the GDPR, the rules on the responsibility of media and content publishers, and two or three other texts that already exist. It’s not a given that we need an entirely new framework.

Sirine Azououai
