
Why the medical AI revolution could never happen

with Joël Perez Torrents, PhD student at I³-CRG* at École Polytechnique (IP Paris)
On November 16th, 2022
5 min reading time
Key takeaways
  • Artificial intelligence (AI) for medical applications has the potential to profoundly change healthcare practices over the long term: diagnostics, treatment, and patient experience.
  • But, until now, most developments in AI have continued existing medical efforts rather than completely overturning pre-existing methods.
  • Deployment of medical AI is slowed, at both the institutional and individual level, by the conservative environment around healthcare, where innovation is slow-moving.
  • The technical nature of AI reduces the disruptiveness of new applications, as it builds on already existing data.

Current developments in the medical applications of artificial intelligence (AI) have been described by many as a “revolution in medicine”. Yet even though advances in Machine Learning – and specifically Deep Learning – profoundly change how things are done, claims of a “revolution” are undoubtedly misleading.

Moreover, the idea, while appealing, suggests that medicine is bound to follow a paradigm shift oriented solely around the available technology, whereas the way AI changes medicine could be driven by a more collective effort of those involved. If this were the case, the actual transformation would be an ongoing change over the decades to come rather than an instantaneous shift. As such, the uptake of AI for medical purposes is unlikely to be the “revolution” we have been promised. Here is why.

#1 Changes in medical practice are slow

Precision medicine is a long-standing effort to leverage data for better treatments. Karl Pearson and Francis Galton pioneered it in the late 19th century: they were the first to collect data with the explicit objective of analysing it statistically. Since then, amongst many efforts, during the second half of the 20th century the National Institutes of Health (NIH) in the USA developed a wide range of statistical methods for precision medicine. The Human Genome Project in the 2000s, and the development of more advanced hardware and software to support AI, can therefore be considered a continuation of these efforts.

AI is a continuation of previous technologies, rather than an immediate revolution.

Of the many applications of AI in medicine, radiology is another compelling example. The discipline developed with Röntgen’s Nobel Prize-winning discovery of X-rays in 1895. Radiography was the only technique for non-invasive medical imaging for almost 50 years. Over time, gains in precision and ease of use made it one of the prime choices for diagnosis in many cases. For instance, surgeons used to diagnose appendicitis through touch alone, but now CT scans are the preferred choice. As with precision medicine, AI is a continuation of those developments – as was the introduction of previous technologies before it – rather than an immediate “revolution”.

#2 Hospitals are resistant to change 

Radiology is one of the first medical disciplines – if not the first – in which the new generation of AI tools is being commercialised. The first scientific papers detailing proofs of concept using Deep Learning on radiographs were published in the early 2010s. Now, a decade later, the first tools are hitting the market. So, how did that happen? In part, the technology matured, but changes were also required at the administrative level.

Even though some applications of the technology were developed five years ago, time and investment were needed not only to build the AI but also to obtain certification from regulatory institutions to use it. Authorisations are now being delivered more quickly, since both parties have learned how to prove the validity of AI applications – sometimes it is simply a matter of several months. Nevertheless, buyers and users still need to check the utility of the tool in their working context – i.e., patients’ needs and changes in practice across hospitals.

Furthermore, hospitals need to find funds to pay for AI tools; since these amount to new practices, there is often no budget in place for them. A hospital can take a year or more of administrative processes before buying AI and, although regulatory institutions may have validated the safety of a product, there remain few cases where those devices are reimbursed. The more novel the tool, or the higher its “revolutionary” potential, the higher the barriers rise in the medical setting, given its conservative culture with regard to safety. The medical system pursues accuracy, which is at odds with the uncertainty of innovation.

#3 Data requirements are time-consuming

The rollout of medical AI applications is also limited by inherent characteristics of the technology. To start, data must be produced, and legal frameworks make creating your own dataset very complicated – not to mention the time and money required. In many cases, developers opt for “secondary use”. The term refers to data originally produced for another purpose – such as diagnosis or administrative paperwork – rather than specifically for AI applications. This means, however, that effort is needed to clean the data, whilst still facing the many barriers: GDPR, access authorisations, monetisation, etc.

In addition, obtaining the dataset is only a milestone. Medical experts are needed to label data and to help developers make sense of the results. Many iterations between data treatment and models are required before reaching a valid result. A rule of thumb estimates that a proof of concept in AI is 80% data pre-treatment and 20% modelling. Add to that the certification milestones stated above, needed by regulatory bodies, as well as users who must be convinced too.
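As an illustration of where that 80% goes, the sketch below shows the kind of pre-treatment that secondary-use records typically need before any modelling: dropping incomplete entries, deduplicating exams, and normalising free-text labels written by different clinicians. The field names and labels here are hypothetical, invented for the example – they are not taken from any real hospital system.

```python
# Minimal sketch of "secondary use" data pre-treatment.
# All record fields and label strings are hypothetical.

def clean_records(raw_records):
    """Drop incomplete rows, deduplicate by exam id, and map
    clinicians' free-text labels onto a binary target."""
    label_map = {
        "appendicitis": 1, "suspected appendicitis": 1,
        "normal": 0, "no finding": 0,
    }
    seen, cleaned = set(), []
    for rec in raw_records:
        exam_id = rec.get("exam_id")
        label = str(rec.get("report_label", "")).strip().lower()
        if exam_id is None or label not in label_map or exam_id in seen:
            continue  # incomplete, unlabelled, or duplicate record
        seen.add(exam_id)
        cleaned.append({"exam_id": exam_id, "label": label_map[label]})
    return cleaned

raw = [
    {"exam_id": 1, "report_label": "Appendicitis"},
    {"exam_id": 1, "report_label": "Appendicitis"},  # duplicate exam
    {"exam_id": 2, "report_label": "no finding"},
    {"exam_id": 3},                                  # missing label
]
print(clean_records(raw))  # → [{'exam_id': 1, 'label': 1}, {'exam_id': 2, 'label': 0}]
```

In a real project this step is far messier – multiple languages, inconsistent codings, missing metadata – which is precisely why it dominates the effort.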

Lastly, applications work best when the tasks performed are narrow. The wider the scope, the more complicated and uncertain the development. For instance, current applications in radiology are often limited to detecting a specific area of interest in the body. Even there, many results are false positives, and complicated cases are seldom handled – such as breast implants, which often block mammography AI analysis.

#4 Uses of AI are still unclear

The early adoption of AI in radiology sits on a continuum of new practices. At one end, AI partially replaces the radiologist: some emergency services might use such tools to treat incoming patients when no radiologist is on duty, with a radiologist checking the results at a later stage. At the other, if AI serves triage purposes, it could be used as a second opinion to avoid false negatives. The difference in use could determine what is expected of the tool – precision, recall, and other metrics would be calibrated accordingly.
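The calibration point can be made concrete with a small sketch. Assuming a hypothetical model that outputs a pathology score per exam (the scores and labels below are invented for illustration), the same tool yields different precision/recall trade-offs depending on where the decision threshold is set – low when a second opinion must catch every positive, higher when false alarms are costly:

```python
def precision_recall(scores, labels, threshold):
    """Compute precision and recall when flagging every score >= threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical model scores (probability of pathology) with true labels.
scores = [0.95, 0.80, 0.60, 0.40, 0.30, 0.10]
labels = [1,    1,    0,    1,    0,    0]

# Second-opinion use: low threshold, so no positive case is missed.
print(precision_recall(scores, labels, threshold=0.35))  # → (0.75, 1.0)
# Stand-in use: higher threshold, fewer false alarms, some cases missed.
print(precision_recall(scores, labels, threshold=0.70))  # → (1.0, 0.6666666666666666)
```

The point is not the specific numbers but that one model serves two uses: the threshold, not the algorithm, encodes what the hospital expects of the tool.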

Related issues, like responsibility, are still not formally addressed.

Such questions, and related issues like responsibility, are still not formally addressed. How this pans out will ultimately affect AI development and use. Indeed, reliance on tools and automation has been found to decrease expertise. While senior radiologists can today distinguish true from false in AI analyses, the same is not true of less experienced ones. As younger generations continue to rely on AI, they might become more dependent on it and less critical of its results, though the time freed up could benefit other skills.

Revolution or transformation?

Thinking there is a “revolution” whose course we cannot influence is limiting. How AI applications develop may have more to do with current constraints on healthcare services – lack of personnel, funding, and resources – than with optimal applications for medicine in general. Nevertheless, commissions are being formed1 at the national and international level to regulate such issues, and perhaps other forms of collective decision-making could also be actioned, like the “communities of inquiry” developed by pragmatists as the cornerstone of democratic life2.

The arguments provided above do not entirely discredit the possibility that a disruptive application of AI could appear and “revolutionise” medical care. They do, however, try to put the pace of current developments into context and inscribe them in the decades-long process of traditional innovation in healthcare. More importantly, the transition is not bound to evolve based solely on the technology – the slow rate of uptake of these innovations leaves room for collective action on how AI is used in healthcare.

1https://eithealth.eu/opportunity/call-for-applications-external-advisory-group-to-the-european-taskforce-for-harmonised-evaluation-of-digital-medical-devices-dmds/
2Philippe Lorino, Pragmatism and Organization Studies, 2018, chapter 6, pp. 158–188.
