How can artificial intelligence be regulated?

AI Act: what are the implications for sensitive sectors in Europe?

with Jean de Bodinat, Founder and General Manager at Rakam AI and Teacher at Ecole Polytechnique (IP Paris) and Solène Gérardin, Lawyer, AI Act and GDPR Specialist
On October 14th, 2025 | 6 min reading time
Jean de Bodinat
Founder and General Manager at Rakam AI and Teacher at Ecole Polytechnique (IP Paris)
Solène Gérardin
Lawyer, AI Act and GDPR Specialist
Key takeaways
  • The EU’s Artificial Intelligence Act introduces a European legal framework, a comprehensive instrument to regulate AI use-cases, emphasising risk-based governance across sectors.
  • Businesses in Europe must adhere to new compliance requirements, especially regarding high-risk AI systems concerning health, safety, and fundamental rights.
  • Early integration of AI Act compliance can transform complex legislation into clear strategic advantages: enhanced trust, improved fairness, and competitive positioning.
  • The regulation applies across leading sectors including, but not limited to, education, recruitment, healthcare, and financial services, where transparency and bias-reduction requirements are most prevalent.
  • Proactive legal and technical engagement with AI regulation can be a game changer, offering companies a chance to shape the future AI ecosystem responsibly.

When Amazon’s AI recruitment tool faced scrutiny for systematically discriminating against women candidates1, or when HireVue’s facial analysis algorithm disadvantaged neurodivergent job seekers2, the tech world got a reality check on what unregulated AI is capable of. Today, Europe is leading the way on AI legislation, one risk-critical sector at a time. Europe’s Artificial Intelligence Act (or “AI Act”), the world’s first comprehensive AI regulatory framework, aims to prevent such failures while also transforming AI deployment in companies, especially in risk-critical sectors.

Thriving in the world of AI

The stakes are substantial. French enterprises alone invested well over €1bn in AI technologies in 2023, with 35% of French companies actively deploying AI systems, according to Business France3. The trend is obvious: industry-level AI deployment is increasing. With the AI Act’s binding obligations having taken effect only this month4, the fundamental choice lies with companies: either treat compliance as a regulatory burden on the backs of legal departments, or transform it into a distinctive capability that helps them thrive in the AI world.

“Companies that integrate legal requirements as design principles can transform compliance into a strategic advantage,” explains Jean de Bodinat, founder of Rakam AI and teacher at Ecole Polytechnique (IP Paris). The regulation establishes a risk-based classification system, with “high-risk” AI systems (those affecting health, safety, or fundamental rights) facing the strictest requirements. The obligations that follow from this classification include mandatory risk management, data governance, technical documentation, human oversight, and quality management systems. Handled well, these requirements can upgrade both operational performance and market positioning. In this article, we outline four sectors where “high-risk” AI systems are particularly affected by the legislation.

Education: AI grading transparency

In French classrooms and online learning platforms, AI-driven assessment tools are not only transforming education but also facing growing regulatory scrutiny. Educational AI systems fall squarely under the Act’s “high-risk” category due to their direct influence on academic performance. Consider Lingueo’s e-LATE platform5, specialising in tailored language assessments using speech recognition and automated scoring. Students receive targeted feedback while teachers maintain oversight through dedicated dashboards. The system exemplifies how educational technology can meet AI Act requirements while delivering educational value.

Students and teachers need to understand how automated decisions are made.

“The challenge isn’t just technical accuracy, it’s about fairness and transparency,” notes Solène Gérardin, a lawyer and AI Act specialist who advises businesses on compliance. “Students and educators need to understand how automated decisions are made.” The platform addresses this by separating AI content generation from evaluation pipelines, implementing robust content filters, and providing clear interfaces for educator oversight. Most critically, it maintains comprehensive logging for auditability, a requirement that is becoming standardised across educational technology.
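To illustrate what such audit logging can look like in practice, here is a minimal sketch in Python. It is an assumption-laden illustration, not Lingueo’s actual implementation: the function name `log_grading_decision` and its record fields are hypothetical.

```python
import json
import time
import uuid

def log_grading_decision(log_path, student_id, score, model_version, rationale):
    """Append one automated-grading decision to an append-only JSON Lines
    audit log, so educators and auditors can later reconstruct why a
    score was produced. All field names here are illustrative."""
    record = {
        "event_id": str(uuid.uuid4()),      # unique id for this decision
        "timestamp": time.time(),           # when the decision was logged
        "student_id": student_id,
        "score": score,
        "model_version": model_version,     # which model produced the score
        "rationale": rationale,             # human-readable explanation
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

An append-only, versioned log of this shape is one simple way to satisfy the kind of auditability the article describes, since each decision remains traceable to a specific model version and explanation.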

External examples reinforce this approach. Platforms like Gradescope and Knewton are adopting explainable AI solutions that help both students and teachers understand automated grading decisions. Their success demonstrates that transparency requirements can actually improve educational outcomes by building trust between learners, educators, and AI systems.

Removing hiring biases

Perhaps nowhere is the AI Act’s impact more visible than in recruitment, where automated candidate evaluation systems are transforming, and sometimes distorting, hiring practices. These systems, which screen resumes and rank applicants using natural language processing, represent a textbook example of high-risk AI under the new regulation. The cautionary tales are well-documented. Amazon discontinued its AI recruitment tool after discovering it penalised resumes containing words like “women’s”. HireVue faced criticism for facial analysis algorithms that disadvantaged neurodivergent candidates. These failures highlight why the AI Act requires transparency in automated hiring decisions and grants candidates the right to contest AI-based outcomes.

Orange, the French telecommunications giant, offers a more promising model. Processing over two million applications annually using AI systems built with Google Cloud, Orange matches candidates to job descriptions while flagging results for human validation. By integrating fairness-aware algorithms and comprehensive audit procedures, the company has improved gender diversity in technical roles. The company’s approach demonstrates how regulatory requirements can align with business objectives: diverse teams often perform better, and transparent hiring practices enhance employer reputation6.

The technical implementation involves modular systems that separate data preprocessing, scoring, and oversight layers. This architecture, guided by frameworks like SMACTR (Scoping, Mapping, Artifact collection, Testing, Reflection), enables quick identification and correction of bias issues. Key compliance strategies include using representative datasets with a minimum of 20% minority group inclusion, logging and justifying all ranking outcomes, allowing candidate opt-outs, and conducting regular bias audits. Rather than constraining hiring decisions, these requirements are pushing companies toward more equitable and defensible recruitment practices7.
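The representation check described above can be automated. The following Python sketch is illustrative only: the helper name `representation_check`, its data shape, and the default 20% threshold (taken from the figure in the text) are assumptions, not part of SMACTR or any cited audit procedure.

```python
from collections import Counter

def representation_check(candidates, group_key="group", threshold=0.20):
    """Return, for each demographic group present in the dataset, whether
    its share of the records meets the minimum inclusion threshold.

    candidates: list of dicts, each carrying a demographic label under
    `group_key`. Returns {group_label: meets_threshold}.
    """
    counts = Counter(c[group_key] for c in candidates)
    total = sum(counts.values())
    return {group: n / total >= threshold for group, n in counts.items()}
```

Running such a check before each training run gives an auditable, repeatable record that the dataset met the stated inclusion target, rather than relying on one-off manual inspection.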

Securing sensitive medical data

In healthcare, where AI systems handle sensitive medical data and influence patient care decisions, the regulatory stakes reach their highest point. Health insurance claim management systems exemplify this challenge, falling under both the AI Act’s high-risk classification and GDPR’s strict medical data protections8. Lola Health’s AI-powered Claim Management Agent illustrates how healthcare organisations can navigate this complex regulatory landscape. The conversational agent operates within Lola Health’s digital platform, assisting members and insurance professionals around the clock with coverage questions, claim submissions, and status updates.

Compliance becomes a framework for operational excellence rather than a bureaucratic burden.

The system’s architecture reflects comprehensive compliance thinking. Back-end integration enables real-time retrieval of personalised contract data while secure authentication protects sensitive information. Most importantly, the system maintains clear escalation pathways to human advisors for complex claims, a requirement that actually improves customer service.

Handling large volumes of health data increases breach risks, but it also creates opportunities for better patient support. The agent provides 24/7 personalised assistance, speeds up case resolution times, and reduces support costs while maintaining high customer satisfaction through clear guidance and privacy assurance.

Risk mitigation strategies include explainable AI for decision transparency, strong privacy safeguards with authenticated access and secure encryption, and regular auditing of chatbot advice to improve service quality and prevent bias. These measures, mandated by regulation, simultaneously enhance operational performance and user trust. The periodic reviews required for regulatory compliance have an unexpected benefit: they continuously improve system responses and maintain high service standards. Compliance becomes a framework for operational excellence rather than a bureaucratic burden.

In finance, fairness in credit decisions

Financial services represent perhaps the most mature example of AI Act compliance, where credit evaluation systems directly influence individuals’ access to financial products. These systems must navigate complex requirements for fairness, transparency, and accountability while maintaining commercial viability. Modern credit evaluation platforms use machine learning to analyse applicant data and predict credit risk, considering variables from income and debt history to employment status and transaction records. The challenge lies in ensuring these systems don’t replicate or amplify existing societal biases, a requirement that’s pushing the entire sector toward more sophisticated fairness testing.

Leading French banks have developed three-layer fairness testing approaches: preprocessing to balance training data, real-time monitoring to flag demographic disparities in approvals, and post-decision calibration to correct residual bias while maintaining predictive performance. Banks also establish customer appeal processes and conduct regular independent audits. Research by Christophe Pérignon at HEC Paris has contributed statistical frameworks now used by major banks to identify and mitigate discrimination in credit models. Banks employing these fairness-aware systems have reduced approval gaps between demographic groups to under 3% while maintaining or improving risk prediction accuracy.
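The approval-gap metric cited above is simple to compute and monitor. The sketch below is a hedged illustration, not any bank’s actual system: the function name `approval_gap`, the data shape, and the 3% alert threshold (echoing the figure in the text) are assumptions.

```python
from collections import defaultdict

def approval_gap(decisions, alert_threshold=0.03):
    """Compute the largest difference in approval rates between
    demographic groups and flag whether it exceeds the alert threshold.

    decisions: iterable of (group_label, approved) pairs, where
    `approved` is a boolean. Returns (gap, needs_review).
    """
    stats = defaultdict(lambda: [0, 0])  # group -> [approved_count, total]
    for group, approved in decisions:
        stats[group][0] += int(approved)
        stats[group][1] += 1
    rates = [approved / total for approved, total in stats.values()]
    gap = max(rates) - min(rates)
    return gap, gap > alert_threshold
```

Wiring a check like this into real-time monitoring corresponds to the second of the three layers described: disparities are flagged as decisions accumulate, before they compound into the kind of systematic bias the Act targets.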

Pérignon’s research demonstrates that ethical compliance and commercial objectives can align. This alignment represents the AI Act’s broader promise: that regulatory requirements can drive innovation toward more effective, trustworthy systems.

Legal perspective on high-risk AI compliance

Solène Gérardin notes that determining whether an AI system is “high-risk” is rarely black and white. She argues that the best response to this ambiguity is to be proactive, building AI with compliance in mind from the beginning. Classification is simple if a system appears in Annex III of the AI Act. For anything outside that list, businesses must determine whether their product falls under harmonisation legislation and requires third-party conformity assessment, as set out in Article 6(1) of the Regulation. She also indicates that the European Union plans to publish detailed guidance, including concrete examples for borderline cases, by the beginning of 2026. Once that guidance is available, compliance will be expected across all industries.

The General-Purpose AI (GPAI) Code of Practice was published last month. According to the official EU AI Act website, its provisions apply from 2 August 20259. Drafted in collaboration with nearly 1,000 stakeholders, it is an inclusive document that translates the Act’s general-purpose model requirements into actionable, practical guidance on principles including, but not limited to, transparency, systemic risk mitigation, and copyright compliance. The code is built to foster trust and accountability across Europe’s AI ecosystem. It also intersects with ESG (environmental, social, and governance) and sustainability goals, making AI compliance more than just a legal obligation for businesses: it is a definitive strategy to reinforce governance and remain competitive in the long term.

Strategic advantage of early compliance

These case studies reveal a common pattern: organisations treating AI Act obligations as design principles rather than constraints achieve superior market positioning and operational performance. Early compliance offers competitive advantages that extend far beyond legal adherence. Transparent AI systems build customer trust, especially in sensitive sectors where decisions significantly impact individuals’ lives. Procurement processes increasingly favour compliant vendors, creating business opportunities for prepared organisations. Access to ESG-conscious investors improves as compliance signals robust governance.

The Act’s scope will likely expand to cover new sectors including transportation, energy, and public administration. With technical standards still developing and enforcement mechanisms taking shape, organisations face a choice: invest early in compliance infrastructure or scramble to meet requirements as deadlines approach. The EU AI Act transforms compliance from a regulatory burden into a strategic asset. For companies navigating this transition, the message is clear: the future belongs to those who build compliance into their competitive strategy from the start.

1. Amazon scraps secret AI recruiting tool that showed bias against women. (2018, October 10). Euronews. https://www.euronews.com/business/2018/10/10/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women
2. D. (2019, October 22). A face-scanning algorithm increasingly decides whether you deserve the job. The Washington Post. https://www.washingtonpost.com/technology/2019/10/22/ai-hiring-face-scanning-algorithm-increasingly-decides-whether-you-deserve-job/
3. Les employeurs face à l’Intelligence Artificielle. https://www.francetravail.org/files/live/sites/peorg/files/documents/Statistiques-et-analyses/_Documentation/Divers/P%c3%b4le%20emploi_Pr%c3%a9sentation_Enquete%20Intelligence%20Artificielle_2023.pdf
4. Regulation (EU) 2024/1689. EUR-Lex. (n.d.). Retrieved August 18, 2025, from https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng
5. Lingueo. (n.d.). Retrieved August 18, 2025, from https://www.rhmatin.com/formation/digital-learning/evaluation-des-langues-lingueo-devoile-l-e-late-sa-premiere-brique-dans-l-ia.html
6. Gender equality in technical roles: Orange commits. Orange. (n.d.). Retrieved August 18, 2025, from https://www.orange.com/en/newsroom/news/2021/gender-equality-technical-roles-orange-commits
7. When algorithms come under scrutiny. (2020, October 30). Hello Future. https://hellofuture.orange.com/en/auditing-ai-when-algorithms-come-under-scrutiny/
8. Regulation (EU) 2024/1689. EUR-Lex. (n.d.). Retrieved August 18, 2025, from https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng
9. European Commission. (2025, September 8). The General-Purpose AI Code of Practice. https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai
