
AI Act: what are the implications for sensitive sectors in Europe?

with Jean de Bodinat, Founder and General Manager at Rakam AI and Teacher at Ecole Polytechnique (IP Paris) and Solène Gérardin, Lawyer, AI Act and GDPR Specialist
On October 14th, 2025
6 min reading time
Jean de Bodinat
Founder and General Manager at Rakam AI and Teacher at Ecole Polytechnique (IP Paris)
Solène Gérardin
Lawyer, AI Act and GDPR Specialist
Key takeaways
  • The EU’s Artificial Intelligence Act introduces a comprehensive European legal framework for regulating AI use cases, emphasising risk-based governance across sectors.
  • Businesses in Europe must adhere to new compliance requirements, especially for high-risk AI systems affecting health, safety, and fundamental rights.
  • Early integration of AI Act compliance can transform complex legislation into clear strategic advantages: enhanced trust, improved fairness, and stronger competitive positioning.
  • Leading affected sectors include education, recruitment, healthcare, and financial services, where transparency and bias-reduction requirements are most prominent.
  • Proactive legal and technical involvement can be game-changing, offering companies a chance to shape the future AI ecosystem responsibly.

When Amazon’s AI recruitment tool faced scrutiny for systematically discriminating against women candidates1, or when HireVue’s facial analysis algorithm disadvantaged neurodivergent job seekers2, the tech world got a reality check on what unregulated AI is capable of. Today, Europe is leading the way on AI legislation, one risk-critical sector at a time. Europe’s Artificial Intelligence Act (or “AI Act”), the world’s first comprehensive AI regulatory framework, aims to prevent such failures while also transforming AI deployment in companies, especially in risk-critical sectors.

Thriving in the world of AI

The stakes are substantial. French enterprises alone invested well over €1bn in AI technologies in 2023, with 35% of French companies actively deploying AI systems according to Business France3. The trend is clear: industry-level AI deployment is increasing. With the AI Act’s binding obligations having recently taken effect4, the fundamental choice lies with companies: either treat compliance as a regulatory burden on the backs of legal departments, or transform it into a distinctive capability that helps them thrive in the AI world.

“Companies that integrate legal requirements as design principles can transform compliance into a strategic advantage,” explains Jean de Bodinat, founder of Rakam AI and teacher at Ecole Polytechnique (IP Paris). The regulation establishes a risk-based classification system, with “high-risk” AI systems (those affecting health, safety, or fundamental rights) facing the strictest requirements. Obligations for these systems include mandatory risk management, data governance, technical documentation, human oversight, and quality management systems. Met well, these obligations can also upgrade operational performance and market positioning. In this article, we outline four sectors where “high-risk” AI systems are particularly affected by the legislation.
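
To make the risk-based logic concrete, here is a minimal sketch of how a team might track these high-risk obligations as an internal readiness checklist. The class and field names are our own illustrative assumptions; the AI Act prescribes legal obligations, not data structures.

```python
# Illustrative sketch only: names and structure are assumptions, not
# anything defined by the AI Act itself.
from dataclasses import dataclass, fields

@dataclass
class HighRiskObligations:
    """Core obligations for high-risk AI systems under the AI Act."""
    risk_management: bool = False         # documented risk management process
    data_governance: bool = False         # training data quality and governance
    technical_documentation: bool = False
    human_oversight: bool = False         # humans can monitor and intervene
    quality_management: bool = False      # organisation-level quality system

def compliance_gaps(status: HighRiskObligations) -> list[str]:
    """Return the obligations not yet satisfied."""
    return [f.name for f in fields(status) if not getattr(status, f.name)]

# Example: documentation and oversight in place, three gaps remaining.
status = HighRiskObligations(technical_documentation=True, human_oversight=True)
print(compliance_gaps(status))
# ['risk_management', 'data_governance', 'quality_management']
```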

Education: AI grading transparency

In French classrooms and online learning platforms, AI-driven assessment tools are not only transforming education but also facing greater regulatory scrutiny than ever. Educational AI systems fall squarely under the Act’s “high-risk” category due to their direct influence on academic performance. Consider Lingueo’s e‑LATE platform5, specialising in tailored language assessments using speech recognition and automated scoring. Students receive targeted feedback while teachers maintain oversight through dedicated dashboards. The system exemplifies how educational technology can meet AI Act requirements while delivering educational value.

Students and teachers need to understand how automated decisions are made.

“The challenge isn’t just technical accuracy, it’s about fairness and transparency,” notes Solène Gérardin, a lawyer and AI Act specialist who advises businesses on compliance. “Students and educators need to understand how automated decisions are made.” The platform addresses this by separating AI content generation from evaluation pipelines, implementing robust content filters, and providing clear interfaces for educator oversight. Most critically, it maintains comprehensive logging for auditability, a requirement that is becoming standardised across educational technology.
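
To illustrate the auditability requirement, the following sketch appends pseudonymised, versioned grading decisions to an append-only log. The field names and file format are assumptions for illustration, not the e‑LATE platform’s actual schema.

```python
# A minimal sketch of decision logging for auditability. All field names
# are hypothetical, not a real platform's schema.
import hashlib
import json
from datetime import datetime, timezone

def log_grading_decision(log_path: str, student_id: str, item_id: str,
                         score: float, model_version: str, rationale: str) -> None:
    """Append one auditable grading record, pseudonymising the student ID."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Store a hash rather than the raw identifier (data minimisation).
        "student": hashlib.sha256(student_id.encode()).hexdigest()[:16],
        "item": item_id,
        "score": score,
        "model_version": model_version,  # ties the decision to a model release
        "rationale": rationale,          # human-readable explanation for educators
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_grading_decision("grading_audit.jsonl", "student-42", "oral-exam-3",
                     score=14.5, model_version="v2.1",
                     rationale="Fluent response, minor grammar errors")
```

Keeping one immutable record per decision, tied to a model version, is what lets an educator or auditor later reconstruct why a given grade was produced.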

External examples reinforce this approach. Platforms like Gradescope and Knewton are adopting explainable AI solutions that help both students and teachers understand automated grading decisions. Their success demonstrates that transparency requirements can actually improve educational outcomes by building trust between learners, educators, and AI systems.

Removing hiring biases

Perhaps nowhere is the AI Act’s impact more visible than in recruitment, where automated candidate evaluation systems are transforming, and sometimes distorting, hiring practices. These systems, which screen resumes and rank applicants using natural language processing, represent a textbook example of high-risk AI under the new regulation. The cautionary tales are well-documented. Amazon discontinued its AI recruitment tool after discovering it penalised resumes containing words like “women’s”. HireVue faced criticism for facial analysis algorithms that disadvantaged neurodivergent candidates. These failures highlight why the AI Act requires transparency in automated hiring decisions and grants candidates the right to contest AI-based outcomes.

Orange, the French telecommunications giant, offers a more promising model. Processing over two million applications annually using AI systems built with Google Cloud, Orange matches candidates to job descriptions while flagging results for human validation. By integrating fairness-aware algorithms and comprehensive audit procedures, the company has improved gender diversity in technical roles. The company’s approach demonstrates how regulatory requirements can align with business objectives: diverse teams often perform better, and transparent hiring practices enhance employer reputation6.

The technical implementation involves modular systems that separate data preprocessing, scoring, and oversight layers. This architecture, guided by audit frameworks like SMACTR (Scoping, Mapping, Artifact Collection, Testing, Reflection), enables quick identification and correction of bias issues. Key compliance strategies include using representative datasets with a minimum of 20% minority group inclusion, logging and justifying all ranking outcomes, allowing candidate opt-outs, and conducting regular bias audits. Rather than constraining hiring decisions, these requirements are pushing companies toward more equitable and defensible recruitment practices7.
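
As a concrete illustration of the dataset-representation strategy just mentioned, here is a minimal check that each group meets a minimum share of the data. The 20% threshold mirrors the figure cited in the text; the data layout is a hypothetical assumption.

```python
# A sketch of a dataset-representation check: verify that each group makes
# up at least a minimum share of the data. The 20% floor is the figure
# cited in the article; everything else is illustrative.
from collections import Counter

def representation_check(groups: list[str], minimum_share: float = 0.20) -> dict[str, bool]:
    """Return whether each group meets the minimum representation share."""
    counts = Counter(groups)
    total = len(groups)
    return {g: counts[g] / total >= minimum_share for g in counts}

# Example with a hypothetical applicant pool of 100 records.
applicant_groups = ["A"] * 70 + ["B"] * 25 + ["C"] * 5
print(representation_check(applicant_groups))
# {'A': True, 'B': True, 'C': False}  -> group C falls below the 20% floor
```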

Securing sensitive medical data

In healthcare, where AI systems handle sensitive medical data and influence patient care decisions, the regulatory stakes reach their highest point. Health insurance claim management systems exemplify this challenge, falling under both the AI Act’s high-risk classification and GDPR’s strict medical data protections8. Lola Health’s AI-powered Claim Management Agent illustrates how healthcare organisations can navigate this complex regulatory landscape. The conversational agent operates within Lola Health’s digital platform, assisting members and insurance professionals around the clock with coverage questions, claim submissions, and status updates.

Compliance becomes a framework for operational excellence rather than a bureaucratic burden.

The system’s architecture reflects comprehensive compliance thinking. Back-end integration enables real-time retrieval of personalised contract data while secure authentication protects sensitive information. Most importantly, the system maintains clear escalation pathways to human advisors for complex claims, a requirement that actually improves customer service.
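
A hedged sketch of what such an escalation pathway could look like in code: route a claim to a human advisor whenever model confidence is low or the case carries high-stakes flags. The thresholds, flags, and function names are illustrative assumptions, not Lola Health’s implementation.

```python
# Illustrative escalation logic; all thresholds and flag names are
# hypothetical assumptions for the sketch.
def route_claim(model_confidence: float, claim_amount_eur: float,
                flags: set[str], confidence_floor: float = 0.85,
                amount_ceiling: float = 5_000.0) -> str:
    """Decide whether the AI agent answers or a human advisor takes over."""
    if model_confidence < confidence_floor:
        return "human_advisor"   # uncertain answers never go out unreviewed
    if claim_amount_eur > amount_ceiling or "medical_dispute" in flags:
        return "human_advisor"   # high-stakes cases always escalate
    return "ai_agent"

print(route_claim(0.92, 150.0, set()))                # ai_agent
print(route_claim(0.92, 150.0, {"medical_dispute"}))  # human_advisor
```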

Handling large volumes of health data increases breach risks, but it also creates opportunities for better patient support. The agent provides 24/7 personalised assistance, speeds up case resolution times, and reduces support costs while maintaining high customer satisfaction through clear guidance and privacy assurance.

Risk mitigation strategies include explainable AI for decision transparency, strong privacy safeguards with authenticated access and secure encryption, and regular auditing of chatbot advice to improve service quality and prevent bias. These measures, mandated by regulation, simultaneously enhance operational performance and user trust. The periodic reviews required for regulatory compliance have an unexpected benefit: they continuously improve system responses and maintain high service standards. Compliance becomes a framework for operational excellence rather than a bureaucratic burden.

In finance, fairness in credit decisions

Financial services represent perhaps the most mature example of AI Act compliance, where credit evaluation systems directly influence individuals’ access to financial products. These systems must navigate complex requirements for fairness, transparency, and accountability while maintaining commercial viability. Modern credit evaluation platforms use machine learning to analyse applicant data and predict credit risk, considering variables from income and debt history to employment status and transaction records. The challenge lies in ensuring these systems don’t replicate or amplify existing societal biases, a requirement that’s pushing the entire sector toward more sophisticated fairness testing.

Leading French banks have developed three-layer fairness testing approaches: preprocessing to balance training data, real-time monitoring to flag demographic disparities in approvals, and post-decision calibration to correct residual bias while maintaining predictive performance. Banks also establish customer appeal processes and conduct regular independent audits. Research by Christophe Pérignon at HEC Paris has contributed statistical frameworks now used by major banks to identify and mitigate discrimination in credit models. Banks employing these fairness-aware systems have reduced approval gaps between demographic groups to under 3% while maintaining or improving risk prediction accuracy.
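
To illustrate the monitoring layer, here is a small sketch that computes the approval-rate gap between demographic groups and checks it against the under-3% tolerance cited above. The data structures are hypothetical, not any bank’s actual pipeline.

```python
# A sketch of approval-gap monitoring: the largest difference in approval
# rate between any two groups. The 3% tolerance echoes the figure cited
# in the article; the input format is an illustrative assumption.
def approval_gap(decisions: list[tuple[str, bool]]) -> float:
    """Largest difference in approval rate between any two groups."""
    totals: dict[str, list[int]] = {}
    for group, approved in decisions:
        approved_count, total = totals.setdefault(group, [0, 0])
        totals[group] = [approved_count + approved, total + 1]
    rates = [a / t for a, t in totals.values()]
    return max(rates) - min(rates)

# Hypothetical decisions: group X approved 80/100, group Y approved 78/100.
decisions = [("X", True)] * 80 + [("X", False)] * 20 \
          + [("Y", True)] * 78 + [("Y", False)] * 22
gap = approval_gap(decisions)
print(f"gap = {gap:.1%}, within tolerance: {gap < 0.03}")
# gap = 2.0%, within tolerance: True
```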

Pérignon’s research demonstrates that ethical compliance and commercial objectives can align. This alignment represents the AI Act’s broader promise: that regulatory requirements can drive innovation toward more effective, trustworthy systems.

Legal perspective on high-risk AI compliance

Solène Gérardin notes that it is rarely black and white whether an AI system is “high-risk.” She argues that the best response amidst this ambiguity is to be proactive, building AI with compliance in mind from the beginning. Classification is simple if a system appears in Annex III of the AI Act. For anything outside that list, businesses must determine whether their product falls under harmonisation legislation and requires third-party conformity assessment as set forth in Article 6(1) of the Regulation. She also indicates that the European Union plans to publish detailed guidance, including concrete examples for borderline cases, by the beginning of 2026. Once that guidance is available, compliance will be expected across all industries.
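
Her two-step test can be summarised in a simplified sketch. Real classification requires legal analysis of the system and its context, so this only mirrors the order of the questions, not the legal nuance.

```python
# A deliberately simplified sketch of the two-step classification logic:
# Annex III listing first, then the Article 6(1) harmonisation test.
# This is an illustration, not legal advice or the Regulation's wording.
def high_risk_status(on_annex_iii: bool,
                     covered_by_harmonisation_legislation: bool,
                     requires_third_party_conformity_assessment: bool) -> str:
    if on_annex_iii:
        return "high-risk (Annex III)"
    if covered_by_harmonisation_legislation and requires_third_party_conformity_assessment:
        return "high-risk (Article 6(1))"
    return "not high-risk under these tests (other obligations may still apply)"

print(high_risk_status(False, True, True))  # high-risk (Article 6(1))
```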

The General-Purpose AI (GPAI) Code of Practice was published last month. According to the official EU AI Act website, its provisions apply from 2 August 20259. Drafted in collaboration with nearly 1,000 stakeholders, it is an inclusive document that translates the Act’s general-purpose model requirements into actionable, practical guidance on principles including, but not limited to, transparency, systemic risk mitigation, and copyright compliance. The code is built to foster values of trust and accountability across Europe’s AI ecosystem. It also intersects with ESG (environmental, social, and governance) and sustainability goals, making AI compliance more than just a legal obligation for businesses. It is a definitive strategy to reinforce governance and remain competitive in the long term.

Strategic advantage of early compliance

These case studies reveal a common pattern: organisations treating AI Act obligations as design principles rather than constraints achieve superior market positioning and operational performance. Early compliance offers competitive advantages that extend far beyond legal adherence. Transparent AI systems build customer trust, especially in sensitive sectors where decisions significantly impact individuals’ lives. Procurement processes increasingly favour compliant vendors, creating business opportunities for prepared organisations. Access to ESG-conscious investors improves as compliance signals robust governance.

The Act’s scope will likely expand to cover new sectors including transportation, energy, and public administration. With technical standards still developing and enforcement mechanisms taking shape, organisations face a choice: invest early in compliance infrastructure, or scramble to meet requirements as deadlines approach. The EU AI Act transforms compliance from a regulatory burden into a strategic asset. For companies navigating this transition, the message is clear: the future belongs to those who build compliance into their competitive strategy from the start.

1 Amazon scraps secret AI recruiting tool that showed bias against women. (2018, October 10). Euronews. https://www.euronews.com/business/2018/10/10/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women
2 Harwell, D. (2019, October 22). A face-scanning algorithm increasingly decides whether you deserve the job. The Washington Post. https://www.washingtonpost.com/technology/2019/10/22/ai-hiring-face-scanning-algorithm-increasingly-decides-whether-you-deserve-job/
3 Les employeurs face à l’Intelligence Artificielle [Employers facing artificial intelligence]. https://www.francetravail.org/files/live/sites/peorg/files/documents/Statistiques-et-analyses/_Documentation/Divers/P%c3%b4le%20emploi_Pr%c3%a9sentation_Enquete%20Intelligence%20Artificielle_2023.pdf
4 Regulation (EU) 2024/1689. EUR-Lex. Retrieved August 18, 2025, from https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng
5 Lingueo. (n.d.). Retrieved August 18, 2025, from https://www.rhmatin.com/formation/digital-learning/evaluation-des-langues-lingueo-devoile-l-e-late-sa-premiere-brique-dans-l-ia.html
6 Gender equality in technical roles: Orange commits. (n.d.). Retrieved August 18, 2025, from https://www.orange.com/en/newsroom/news/2021/gender-equality-technical-roles-orange-commits
7 When algorithms come under scrutiny. (2020, October 30). Hello Future. https://hellofuture.orange.com/en/auditing-ai-when-algorithms-come-under-scrutiny/
8 Regulation (EU) 2024/1689. EUR-Lex. Retrieved August 18, 2025, from https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng
9 European Commission. (2025, September 8). The General-Purpose AI Code of Practice. https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai
