Generated using AI

Building responsible AI: ethics, sovereignty and the planet

Jérôme Béranger
Research Associate at Université Toulouse 3 and AI ethics expert at EuropIA Institute
Fatima Ait Thami
Digital Ethics Consultant at GoodAlgo
Key takeaways
  • As new technologies emerge, so too do ethical approaches to AI.
  • Ethical AI aims to integrate ethical recommendations into the AI code, whereas trustworthy AI seeks instead to provide a framework for automated decisions.
  • Ethics by Design seeks to ensure the transparency and clarity of AI systems and their purposes; this must be complemented by Ethics by Evolution.
  • Ethics by Evolution involves continuously adapting ethical criteria and indicators throughout the AI system’s learning period.
  • One objective of Ethics by Evolution is to support tech professionals in developing human-centred technology.

Major technological advances have always sparked both hope and concern, as their potential abuses primarily reflect those of the societies that produce them. The digital revolution and the rise of artificial intelligence (AI), however, mark an unprecedented shift: for the first time, the future of humanity is being shaped by lines of code and by systems capable of acting on a large scale across all spheres of society, in both professional and personal life.

The breakneck pace of these innovations exceeds our collective capacity for anticipation and regulation, and even affects human cognition. Against this backdrop of profound civilisational uncertainty, it is becoming essential to structure and evaluate the ethics of AI systems in order to steer these technologies towards serving the human interest.

Ethics by Design and Ethics by Evolution

Society does not yet have truly established universal rules for embedding human values at the heart of AI systems. Although initiatives are multiplying, such as the European AI Act, which came into force on 1 August 2024 and aims to ensure trustworthy AI, the rapid rise of digital technology makes it urgent to define a common ethical and sovereign framework. The aim is to guarantee the purpose, security, transparency, environmental responsibility and sovereignty of intelligent systems, in order to strengthen trust in their use.

Consequently, designers of AI applications must incorporate the risks, limitations and societal impacts of their systems from the outset. Two complementary approaches can be distinguished: “Ethical AI”, which involves directly embedding principles, moral reasoning mechanisms and ethical recommendations into the code so that the machine can make ethically sound decisions; and “Trustworthy AI”, which primarily aims to oversee and control automated decisions to ensure they comply with collective values.

The first approach appears more structural and evolutionary, as it embeds ethics at the very heart of the design and evolution of systems, following a logic of Ethics by Design¹ and then Ethics by Evolution².

The concepts of moral “values” and “principles” are complex and difficult to translate directly into computational structures. It is more practical to break them down into explicit standards and rules, that is to say, into tangible instructions and recommendations applicable in defined contexts – this is what might be called the process of decoding and then ethical encoding of AI. From this perspective, ethics must be integrated right from the design stage of systems, by anticipating potential dilemmas and translating shared principles into rules embedded in the code. Given AI’s decision-making autonomy and learning capabilities, this integration is essential to ensure the transparency, safety and accountability of digital technologies.
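
As a minimal sketch of what this "ethical encoding" could look like in practice, the snippet below translates two shared principles (transparency and accountability) into explicit, machine-checkable rules applied before an automated decision is released. All names, thresholds and rules here are hypothetical illustrations, not a prescribed standard.

```python
# Hypothetical sketch: encoding ethical principles as explicit,
# checkable rules in a decision pipeline. Names and thresholds
# are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class Decision:
    outcome: str
    confidence: float
    explanation: str = ""  # human-readable justification


def check_transparency(d: Decision) -> list[str]:
    """Rule derived from a transparency principle:
    every automated decision must carry an explanation."""
    return [] if d.explanation else ["decision lacks a human-readable explanation"]


def check_accountability(d: Decision, threshold: float = 0.8) -> list[str]:
    """Rule derived from an accountability principle:
    low-confidence decisions must be escalated to a human."""
    if d.confidence < threshold:
        return ["confidence below threshold: route to human review"]
    return []


RULES = [check_transparency, check_accountability]


def release(d: Decision) -> tuple[bool, list[str]]:
    """Run every encoded rule; release the decision only if all pass."""
    issues = [msg for rule in RULES for msg in rule(d)]
    return (not issues, issues)
```

The point of the sketch is the structure, not the specific rules: each abstract principle becomes a concrete, auditable function, so the system's ethical constraints are as inspectable as any other part of its code.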

The challenge lies in developing algorithms capable of acting in contexts that have ethical implications. This ambition gives rise to two main categories of risk: those linked to design flaws and those arising from learning mechanisms. Giving direction to AI therefore involves articulating three complementary dimensions. First, orientation and purpose, which define the strategy and governance of systems. Second, meaning, which reminds us that AI must remain a tool at the service of society, fostering human-machine complementarity, as well as explainability and inclusion. And finally, explanation, which involves collective reflection on the objectives pursued and their legitimacy. Implementing such an ethical framework remains complex, however, and raises unavoidable questions.

The whole point of ethical management lies in providing meaning and purpose to the organisation put in place (see Table 1).

Table 1. Strategic deployment of AI within an organisation

Hence, it is critical to adopt an inclusive approach that involves all stakeholders from the outset of the design phase of AI systems, in order to ensure transparency, explainability and clarity regarding their purposes: this is the principle of Ethics by Design. This approach must then be extended throughout the entire AI lifecycle – deployment, use and evolution – by continually adapting ethical criteria and indicators in line with the system’s learning process: this is Ethics by Evolution.

Algorithmic ethics

Ethics as applied to digital technology lies in the intention directed towards the purpose and meaning of an algorithmic system. It can be divided into three types of ethics (see Table 2)³:

  • Descriptive ethics: this applies to intrinsic value (design). It constitutes an ethics of application and allocation in the form of practice, involving the means, mechanisms, channels and procedures implemented;
  • Normative ethics: this concerns management value (implementation). It forms an ethics of regulation with a deontological aspect, via established standards, codes and rules;
  • Reflective ethics: this applies to operational value (use). It represents an ethics of legitimisation based on questioning the foundations and purposes through human principles and values.

The articulation and arrangement of these three types of ethics apply across the entire life cycle of an algorithmic system (design – implementation – use) to inform our Ethics by Evolution.

Table 2. The structure of algorithmic ethics

A proactive approach to ethical inclusion, based on user involvement and ongoing interaction with AI systems, enables the gradual strengthening of trust, reliability and transparency in algorithms. Whilst algorithms are often accused of discrimination or bias, we must nevertheless question human responsibility – an algorithm remains a tool, shaped by the data it is fed, the design choices made and the ways in which we use it. The real risks lie in particular in biases (cognitive, statistical or economic) which can be embedded, implicitly or explicitly, in models and render them unfair or malicious. This reality compels us to take a step back from our practices and question the direction we are taking: are we truly building a “digital humanity” that lives up to our moral and societal values?
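
One of the statistical biases mentioned above can be made concrete with a standard fairness metric: the "disparate impact" ratio, which compares favourable-outcome rates across groups (conventionally, values below 0.8 are flagged for review, the so-called four-fifths rule). The sketch below is illustrative; group labels and data are hypothetical.

```python
# Minimal sketch of a statistical bias check: the disparate impact
# ratio across groups. Data and labels are hypothetical.
from collections import defaultdict


def disparate_impact(decisions: list[tuple[str, bool]]) -> float:
    """decisions: (group_label, favourable_outcome) pairs.
    Returns min(selection rate) / max(selection rate) across groups.
    Values below 0.8 are conventionally flagged as potential bias."""
    totals: dict[str, int] = defaultdict(int)
    favourable: dict[str, int] = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        favourable[group] += ok
    rates = [favourable[g] / totals[g] for g in totals]
    return min(rates) / max(rates)
```

A check like this says nothing about where the bias comes from (the data, the design choices, or the usage context), which is precisely why the human-responsibility question raised above remains central.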

In these circumstances, it becomes essential to make specific recommendations regarding the ethics, environmental responsibility and sovereignty of AI systems. A few examples of measures to be implemented are shown in Table 3 below.

Table 3. Recommendations relating to the ethics, environmental responsibility and sovereignty of AI systems

Finally, it seems essential to guide tech professionals towards a responsible development approach, incorporating a form of emotional intelligence and a heightened awareness of the human impacts of their work. This is precisely the aim of an Ethics by Evolution approach, designed to put people back at the heart of technology by assessing every stage of an algorithm’s lifecycle and measuring its level of ethical commitment.

This approach is structured in several phases. The first focuses on contextual ethics: it examines the system’s objectives, the conditions of its design, and the nature and representativeness of the data used, as well as raising teams’ awareness of the risks of bias and of societal issues. The second phase consists of an experimental evaluation of the system, through the analysis of input and output data and the measurement of indicators such as reliability, interpretability, robustness and the absence of discrimination. Finally, an ethics of results ensures continuous monitoring of the algorithm in real-world use, in order to anticipate and adjust its behaviour.
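
The final "ethics of results" phase can be sketched as a simple production monitor: compare a rolling indicator measured in real-world use against the value established during evaluation, and flag drift so behaviour can be adjusted. The class below is a hypothetical illustration under assumed names and thresholds, tracking an error rate as one example indicator.

```python
# Hedged sketch of a results-phase monitor: track a rolling error
# rate in production and flag drift from the evaluation baseline.
# Class name, window size and tolerance are illustrative assumptions.
from collections import deque


class DriftMonitor:
    def __init__(self, baseline: float, tolerance: float, window: int = 100):
        self.baseline = baseline    # error rate measured at evaluation time
        self.tolerance = tolerance  # acceptable absolute deviation
        self.errors: deque[bool] = deque(maxlen=window)

    def record(self, was_error: bool) -> bool:
        """Record one production outcome; return True if drift is detected
        over the current window."""
        self.errors.append(was_error)
        rate = sum(self.errors) / len(self.errors)
        return abs(rate - self.baseline) > self.tolerance
```

The same pattern would apply to any of the indicators named above (reliability, robustness, non-discrimination): fix a baseline during experimental evaluation, then monitor it continuously once the system is deployed.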

Ultimately, whilst AI holds great promise, it also entails risks that must be managed to ensure its compliance with the law, moral values and the common good. Integrating ethical criteria today, despite the additional complexity they entail, is an essential prerequisite for establishing a genuine culture of digital ethics and ensuring security, meaning and trust in data processing within organisations and regions.

¹ An approach that integrates ethical requirements and recommendations from the very outset of ICT design.
² An approach that incorporates ethical recommendations and rules, in an evolving manner over time, throughout the entire lifecycle of ICTs, i.e. during their implementation and ongoing use.
³ J. Béranger (2021). The Social Responsibility of Artificial Intelligence. ISTE Editions.
