How can artificial intelligence be regulated?

AI Act: what are the implications for sensitive sectors in Europe?

with Jean de Bodinat, Founder and General Manager at Rakam AI and Teacher at Ecole Polytechnique (IP Paris) and Solène Gérardin, Lawyer, AI Act and GDPR Specialist
On October 14th, 2025 |
6 min reading time
Jean de Bodinat
Founder and General Manager at Rakam AI and Teacher at Ecole Polytechnique (IP Paris)
Solène Gérardin
Lawyer, AI Act and GDPR Specialist
Key takeaways
  • The EU’s Artificial Intelligence Act introduces a comprehensive European legal framework that regulates AI use cases through risk-based governance across sectors.
  • Businesses in Europe must meet new compliance requirements, especially for high-risk AI systems affecting health, safety, and fundamental rights.
  • Integrating AI Act compliance early can turn complex legislation into a strategic advantage: enhanced trust, improved fairness, and stronger competitive positioning.
  • The sectors most affected include education, recruitment, healthcare, and financial services, where transparency and bias-reduction requirements are most prominent.
  • Proactive legal and technical involvement gives companies a chance to shape the future AI ecosystem responsibly.

When Amazon’s AI recruitment tool faced scrutiny for systematically discriminating against female candidates1, or when HireVue’s facial analysis algorithm disadvantaged neurodivergent job seekers2, the tech world got a reality check on what unregulated AI is capable of. Today, Europe is leading the way on AI legislation, one risk-critical sector at a time. Europe’s Artificial Intelligence Act (or “AI Act”), the world’s first comprehensive AI regulatory framework, aims to prevent such failures while also transforming how companies deploy AI, especially in risk-critical sectors.

Thriving in the world of AI

The stakes are substantial. French enterprises alone invested well over €1bn in AI technologies in 2023, with 35% of French companies actively deploying AI systems according to Business France3. The trend is obvious: industry-level AI deployment is increasing. Now that the AI Act’s binding obligations are taking effect4, the fundamental choice lies with companies: either treat compliance as a regulatory burden resting on the backs of legal departments, or transform it into a distinctive capability that helps them thrive in the AI world.

“Companies that integrate legal requirements as design principles can transform compliance into a strategic advantage,” explains Jean de Bodinat, founder of Rakam AI and teacher at Ecole Polytechnique (IP Paris). The regulation establishes a risk-based classification system, with “high-risk” AI systems (those affecting health, safety, or fundamental rights) facing the strictest requirements. The obligations attached to this classification include mandatory risk management, data governance, technical documentation, human oversight, and quality management systems. Improved operational performance and market positioning are further benefits the EU AI Act can deliver. In this article, we outline four sectors where “high-risk” AI systems are particularly affected by the legislation.

Education: AI grading transparency

In French classrooms and online learning platforms, AI-driven assessment tools are transforming education while facing growing regulatory scrutiny. Educational AI systems fall under the Act’s “high-risk” category because of their direct influence on academic outcomes. Consider Lingueo’s e-LATE platform5, which specialises in tailored language assessments using speech recognition and automated scoring. Students receive targeted feedback while teachers maintain oversight through dedicated dashboards. The system exemplifies how educational technology can meet AI Act requirements while delivering educational value.

Students and teachers need to understand how automated decisions are made.

“The challenge isn’t just technical accuracy, it’s about fairness and transparency,” notes Solène Gérardin, a lawyer and AI Act specialist who advises businesses on compliance. “Students and educators need to understand how automated decisions are made.” The platform addresses this by separating AI content generation from evaluation pipelines, implementing robust content filters, and providing clear interfaces for educator oversight. Most critically, it maintains comprehensive logging for auditability, a requirement that is becoming standardised across educational technology.
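The audit logging described above can be pictured as an append-only trail, one record per automated decision. The sketch below is illustrative only: the field names and file format are assumptions, not any real platform’s schema.

```python
import json
import time
import uuid

def log_grading_decision(student_id, item_id, score, model_version, rationale):
    """Append one auditable record per automated grading decision.

    Field names here are hypothetical; the point is a timestamped,
    append-only trail that a human reviewer can later replay.
    """
    entry = {
        "event_id": str(uuid.uuid4()),   # unique id for cross-referencing
        "timestamp": time.time(),
        "student_id": student_id,
        "item_id": item_id,
        "score": score,
        "model_version": model_version,  # which model produced the score
        "rationale": rationale,          # explanation surfaced to educators
    }
    with open("grading_audit.log", "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

log_grading_decision("s-001", "q-17", 0.85, "v2.3",
                     "pronunciation within tolerance")
```

Because each line is a self-contained JSON object, an auditor can reconstruct exactly which model version produced which score, and why, without access to the live system.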

External examples reinforce this approach. Platforms like Gradescope and Knewton are adopting explainable AI solutions that help both students and teachers understand automated grading decisions. Their success demonstrates that transparency requirements can actually improve educational outcomes by building trust between learners, educators, and AI systems.

Removing hiring biases

Perhaps nowhere is the AI Act’s impact more visible than in recruitment, where automated candidate evaluation systems are transforming and sometimes distorting hiring practices. These systems, which screen resumes and rank applicants using natural language processing, represent a textbook example of high-risk AI under the new regulation. The cautionary tales are well-documented. Amazon discontinued its AI recruitment tool after discovering it penalised resumes containing words like “women’s”. HireVue faced criticism for facial analysis algorithms that disadvantaged neurodivergent candidates. These failures highlight why the AI Act requires transparency in automated hiring decisions and grants candidates the right to contest AI-based outcomes.

Orange, the French telecommunications giant, offers a more promising model. Processing over two million applications annually using AI systems built with Google Cloud, Orange matches candidates to job descriptions while flagging results for human validation. By integrating fairness-aware algorithms and comprehensive audit procedures, the company has improved gender diversity in technical roles. The company’s approach demonstrates how regulatory requirements can align with business objectives, where diverse teams often perform better, and transparent hiring practices enhance employer reputation6.

The technical implementation involves modular systems that separate data preprocessing, scoring, and oversight layers. This architecture, guided by frameworks like SMACTR (Scoping, Mapping, Artifact collection, Testing, Reflection), enables quick identification and correction of bias issues. Key compliance strategies include using representative datasets with a minimum of 20% minority group inclusion, logging and justifying all ranking outcomes, allowing candidate opt-outs, and conducting regular bias audits. Rather than constraining hiring decisions, these requirements are pushing companies toward more equitable and defensible recruitment practices7.
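A regular bias audit of the kind mentioned above often starts with a simple statistic: per-group selection rates and their ratio. The following is a minimal sketch, with toy data and hypothetical group labels, not any company’s actual audit pipeline.

```python
from collections import defaultdict

def selection_rates(records):
    """Per-group selection rates from (group, selected) records."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group selection rate.

    Values below 0.8 (the 'four-fifths rule' used in US employment
    practice) flag potential adverse impact worth a deeper audit.
    """
    return min(rates.values()) / max(rates.values())

# Toy data: group label and whether the candidate was shortlisted.
records = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]
rates = selection_rates(records)
ratio = disparate_impact(rates)
needs_review = ratio < 0.8   # True here: (1/3) / (2/3) = 0.5
```

A check like this does not prove fairness on its own, but logging the ratio alongside each model release gives auditors a concrete, reviewable trail.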

Securing sensitive medical data

In healthcare, where AI systems handle sensitive medical data and influence patient care decisions, the regulatory stakes reach their highest point. Health insurance claim management systems exemplify this challenge, falling under both the AI Act’s high-risk classification and GDPR’s strict medical data protections8. Lola Health’s AI-powered Claim Management Agent illustrates how healthcare organisations can navigate this complex regulatory landscape. The conversational agent operates within Lola Health’s digital platform, assisting members and insurance professionals around the clock with coverage questions, claim submissions, and status updates.

Compliance becomes a framework for operational excellence rather than a bureaucratic burden.

The system’s architecture reflects comprehensive compliance thinking. Back-end integration enables real-time retrieval of personalised contract data while secure authentication protects sensitive information. Most importantly, the system maintains clear escalation pathways to human advisors for complex claims, a requirement that actually improves customer service.

Handling large volumes of health data increases breach risks, but it also creates opportunities for better patient support. The agent provides 24/7 personalised assistance, speeds up case resolution times, and reduces support costs while maintaining high customer satisfaction through clear guidance and privacy assurance.

Risk mitigation strategies include explainable AI for decision transparency, strong privacy safeguards with authenticated access and secure encryption, and regular auditing of chatbot advice to improve service quality and prevent bias. These measures, mandated by regulation, simultaneously enhance operational performance and user trust. The periodic reviews required for regulatory compliance have an unexpected benefit: they continuously improve system responses and maintain high service standards. Compliance becomes a framework for operational excellence rather than a bureaucratic burden.

In finance, fairness in credit decisions

Financial services represent perhaps the most mature example of AI Act compliance, where credit evaluation systems directly influence individuals’ access to financial products. These systems must navigate complex requirements for fairness, transparency, and accountability while maintaining commercial viability. Modern credit evaluation platforms use machine learning to analyse applicant data and predict credit risk, considering variables from income and debt history to employment status and transaction records. The challenge lies in ensuring these systems don’t replicate or amplify existing societal biases, a requirement that’s pushing the entire sector toward more sophisticated fairness testing.

Leading French banks have developed three-layer fairness testing approaches: preprocessing to balance training data, real-time monitoring to flag demographic disparities in approvals, and post-decision calibration to correct residual bias while maintaining predictive performance. Banks also establish customer appeal processes and conduct regular independent audits. Research by Christophe Pérignon at HEC Paris has contributed statistical frameworks now used by major banks to identify and mitigate discrimination in credit models. Banks employing these fairness-aware systems have reduced approval gaps between demographic groups to under 3% while maintaining or improving risk prediction accuracy.
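The monitoring layer of such a three-layer approach can be imagined as a sliding window over recent decisions that raises an alert whenever the approval-rate gap between groups exceeds a tolerance such as 3%. This is a simplified sketch with made-up data, not the banks’ actual systems.

```python
from collections import defaultdict, deque

class ApprovalGapMonitor:
    """Sliding-window monitor that flags when the gap between group
    approval rates exceeds a tolerance (3% in this sketch)."""

    def __init__(self, window=1000, max_gap=0.03):
        self.decisions = deque(maxlen=window)  # keeps only recent decisions
        self.max_gap = max_gap

    def record(self, group, approved):
        self.decisions.append((group, approved))

    def gap(self):
        """Difference between the highest and lowest group approval rate."""
        totals = defaultdict(int)
        approved = defaultdict(int)
        for group, ok in self.decisions:
            totals[group] += 1
            if ok:
                approved[group] += 1
        rates = [approved[g] / totals[g] for g in totals]
        return max(rates) - min(rates) if rates else 0.0

    def breached(self):
        return self.gap() > self.max_gap

monitor = ApprovalGapMonitor(window=500)
for group, approved in [("X", True), ("X", True), ("Y", True), ("Y", False)]:
    monitor.record(group, approved)
alert = monitor.breached()   # True: 100% vs 50% approval in this toy window
```

In practice, a breach would not block lending automatically; it would trigger the post-decision calibration and human-review steps the banks describe.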

Pérignon’s research demonstrates that ethical compliance and commercial objectives can align. This alignment represents the AI Act’s broader promise: that regulatory requirements can drive innovation toward more effective, trustworthy systems.

Legal perspective on high-risk AI compliance

Solène Gérardin notes that determining whether an AI system is “high-risk” is rarely black and white. She argues that the best response to this ambiguity is to be proactive and build AI with compliance in mind from the beginning. Classification is simple if a system appears in Annex III of the AI Act. For anything outside that list, businesses must determine whether their product is covered by harmonisation legislation and requires third-party conformity assessment, as set out in Article 6(1) of the Regulation. She also notes that the European Union plans to publish detailed guidance, including concrete examples for borderline cases, by the beginning of 2026. Once that guidance is available, compliance will be expected across all industries.

The General-Purpose AI (GPAI) Code of Practice was published recently. According to the official EU AI Act website, its provisions apply from 2 August 20259. It was drafted in collaboration with nearly 1,000 stakeholders as an inclusive document that translates the Act’s general-purpose model requirements into actionable, practical guidance on principles including transparency, systemic risk mitigation, and copyright compliance. The Code is built to foster trust and accountability across Europe’s AI ecosystem. It also intersects with ESG (environmental, social, and governance) and sustainability goals, making AI compliance more than a legal obligation for businesses: it is a strategy to reinforce governance and remain competitive in the long term.

Strategic advantage of early compliance

These case studies reveal a common pattern: organisations treating AI Act obligations as design principles rather than constraints achieve superior market positioning and operational performance. Early compliance offers competitive advantages that extend far beyond legal adherence. Transparent AI systems build customer trust, especially in sensitive sectors where decisions significantly impact individuals’ lives. Procurement processes increasingly favour compliant vendors, creating business opportunities for prepared organisations. Access to ESG-conscious investors improves as compliance signals robust governance.

The Act’s scope will likely expand to cover new sectors including transportation, energy, and public administration. With technical standards still developing and enforcement mechanisms taking shape, organisations face a choice: invest early in compliance infrastructure or scramble to meet requirements as deadlines approach. The EU AI Act transforms compliance from a regulatory burden into a strategic asset. For companies navigating this transition, the message is clear: the future belongs to those who build compliance into their competitive strategy from the start.

1 Amazon scraps secret AI recruiting tool that showed bias against women. (2018, October 10). Euronews. https://www.euronews.com/business/2018/10/10/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women
2 A face-scanning algorithm increasingly decides whether you deserve the job. (2019, October 22). The Washington Post. https://www.washingtonpost.com/technology/2019/10/22/ai-hiring-face-scanning-algorithm-increasingly-decides-whether-you-deserve-job/
3 Les employeurs face à l’Intelligence Artificielle [Employers facing Artificial Intelligence]. https://www.francetravail.org/files/live/sites/peorg/files/documents/Statistiques-et-analyses/_Documentation/Divers/P%c3%b4le%20emploi_Pr%c3%a9sentation_Enquete%20Intelligence%20Artificielle_2023.pdf
4 Regulation (EU) 2024/1689 – EUR-Lex. (n.d.). Retrieved August 18, 2025, from https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng
5 Lingueo. (n.d.). Retrieved August 18, 2025, from https://www.rhmatin.com/formation/digital-learning/evaluation-des-langues-lingueo-devoile-l-e-late-sa-premiere-brique-dans-l-ia.html
6 Gender equality in technical roles: Orange commits | Orange. (n.d.). Retrieved August 18, 2025, from https://www.orange.com/en/newsroom/news/2021/gender-equality-technical-roles-orange-commits
7 When algorithms come under scrutiny. (2020, October 30). Hello Future. https://hellofuture.orange.com/en/auditing-ai-when-algorithms-come-under-scrutiny/
8 Regulation (EU) 2024/1689 – EUR-Lex. (n.d.). Retrieved August 18, 2025, from https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng
9 European Commission. (2025, September 8). The General-Purpose AI Code of Practice. https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai
