
Why the medical AI revolution could never happen

Joël Perez Torrents, PhD student at I³-CRG* at École Polytechnique (IP Paris)
On November 16th, 2022
Key takeaways
  • Artificial intelligence (AI) for medical applications has the potential to profoundly change healthcare practices in the long-term: diagnostics, treatment, and patient experience.
  • But, until now, most developments in AI have extended existing medical efforts rather than completely overturning pre-existing methods.
  • Deployment of medical AI is slowed, at both the institutional and the individual level, by the conservative environment around healthcare, where innovation is slow-moving.
  • The technical nature of AI also reduces the disruptiveness of new applications, as they rely on already existing data.

Current developments in the medical applications of artificial intelligence (AI) have been described by many as a “revolution in medicine”. Yet even though advances in Machine Learning – and specifically Deep Learning – profoundly change how things are done, claims of a “revolution” are undoubtedly misleading.

Moreover, the idea, appealing as it is, suggests that medicine is bound to undergo a paradigm shift oriented solely around the available technology, whereas the way AI changes medicine could instead be driven by a collective effort of those involved. If that were the case, the actual transformation would be an ongoing change over the decades to come rather than an instantaneous shift. As such, the uptake of AI for medical purposes is unlikely to be the “revolution” we have been promised. Here is why.

#1 Changes in medical practice are slow

Precision medicine is a long-standing effort to leverage data for better treatments. Karl Pearson and Francis Galton pioneered it in the late 19th Century: they were the first to collect data with the explicit objective of analysing them statistically. Since then, among many other efforts, the National Institutes of Health (NIH) in the USA developed a wide range of statistical methods for precision medicine during the second half of the 20th Century. The Human Genome Project in the 2000s, and the development of more advanced hardware and software to support AI, can therefore be considered a continuation of these efforts.

AI comes as a continuity of previous technologies, rather than an immediate revolution.

Of the many examples of AI in medicine, radiology is a particularly compelling one. The discipline developed with Röntgen's Nobel Prize-winning discovery of X-rays in 1895. Radiography was the only technique for non-invasive medical imaging for almost 50 years. Over time, its gains in precision and ease of use made it one of the prime choices for diagnosis in many cases. For instance, surgeons used to diagnose appendicitis through touch alone, but now CT scans are the preferred choice. As with precision medicine, AI comes as a continuity in these developments – as did the introduction of previous technologies before it – rather than an immediate “revolution”.

#2 Hospitals are resistant to change 

Radiology is one of – if not the – first medical disciplines in which the new generation of AI tools is being commercialised. The first scientific papers detailing proofs of concept using Deep Learning on radiographs were published in the early 2010s. And now, a decade later, the first tools are hitting the market. So, how did that happen? In part, the technology has matured, but changes were also required at an administrative level.

Even though some applications of the technology were developed five years ago, time and investment were needed not only to build the AI but also to obtain certification from regulatory institutions to use it. Authorisations are now being delivered more quickly, since both parties have learned how to prove the validity of AI applications – sometimes it is simply a matter of several months. Nevertheless, buyers and users still need to check the utility of the tool in their working context – i.e., patients’ needs and changes of practice across hospitals.

Furthermore, hospitals need to find funds to pay for AI tools; since these amount to new practices, there is often no budget in place to pay for them. A hospital can take a year or more in administrative processes before buying AI and, although regulatory institutions may have validated the safety of a product, there remain few cases where those devices are reimbursed. The more novel the tool, or the higher its “revolutionary” potential, the higher the barriers rise in the medical setting, owing to its conservative culture with regard to safety. The medical system pursues accuracy, which is at odds with the uncertainty of innovation.

#3 Data requirements are time-consuming

The rollout of medical AI applications is also limited by inherent characteristics of the technology. To start, data must be produced, and legal frameworks make creating your own dataset very complicated – not to mention the time and money required. In many cases, developers opt for “secondary use”. This term refers to data not originally produced for the purpose for which it is now being used – it may come from diagnosis or administrative paperwork, not AI applications specifically. However, this means that effort is needed to clean the data, while still facing the many barriers: GDPR, access authorisations, monetisation, etc.

In addition, getting the dataset is only a milestone. Medical experts are needed to label data and help developers make sense of the results. Many iterations between data treatment and models are required before reaching a valid result. A rule of thumb estimates that a proof of concept in AI is 80% data pre-treatment and 20% modelling. Add to that the proof-of-concept milestones stated above, needed for regulatory bodies, as well as the users who must be convinced too.

Lastly, applications work best when the tasks performed are narrow. The wider the scope, the more complicated and uncertain the development. For instance, current applications in radiology are often limited to detecting a specific area of interest in the body. Even there, results often include false positives, and complicated cases are seldom handled – breast implants, for example, often block mammography AI analysis.

#4 Uses of AI are still unclear

The early adoption of AI in radiology sits on a continuum of new practices. At one end, AI partially replaces the radiologist: some emergency services might use such tools to treat incoming patients when there is no radiologist on duty, with the results checked by one at a later stage. At the other, AI serves triage purposes and could be used as a second opinion to avoid false negatives. The difference in use could determine what is expected of the tool – precision, recall, and other metrics would be calibrated accordingly.
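This calibration trade-off can be made concrete with a minimal sketch. The scores and labels below are entirely hypothetical (they do not come from any real radiology product): lowering the decision threshold favours recall – fewer missed findings, which suits the second-opinion setting – while raising it favours precision, which suits more autonomous use.

```python
def precision_recall(scores, labels, threshold):
    """Compute precision and recall for binary decisions at a given threshold."""
    preds = [s >= threshold for s in scores]
    tp = sum(p and l for p, l in zip(preds, labels))          # true positives
    fp = sum(p and not l for p, l in zip(preds, labels))      # false alarms
    fn = sum((not p) and l for p, l in zip(preds, labels))    # missed findings
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical model confidence scores and ground-truth labels (1 = finding present)
scores = [0.95, 0.80, 0.65, 0.55, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    0,    1,    1,    0,    0,    0]

# Low threshold (triage / second-opinion use): catch everything, tolerate false alarms
p_low, r_low = precision_recall(scores, labels, 0.35)    # precision 0.8, recall 1.0

# High threshold (more autonomous use): fewer false alarms, but findings are missed
p_high, r_high = precision_recall(scores, labels, 0.70)  # precision 1.0, recall 0.5
```

The same model thus behaves very differently depending on a single operating parameter, which is why the intended clinical role must be settled before the tool's performance targets can even be stated.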

Related issues, like responsibility, are still not formally addressed.

Such questions and related issues, like responsibility, are still not formally addressed. And how this pans out will ultimately affect AI development and use. Indeed, reliance on tools and automation has been found to decrease expertise. While senior radiologists today can distinguish true from false in AI analyses, the same is not true of less experienced ones. As younger generations continue to rely on AI, they might become more dependent on it and less critical of its results – though this could come to the benefit of other skills.

Revolution or transformation?

Thinking there is a “revolution” whose course we cannot influence is limiting. How AI applications develop may have more to do with current constraints on healthcare services – lack of personnel, funding, and resources – than with optimal applications for medicine in general. Nevertheless, commissions are being formed¹ at the national and international level to regulate such issues, and perhaps other forms of collective decision-making could also be actioned, like the “communities of inquiry” developed by pragmatists as the cornerstone of democratic life².

The arguments provided above do not entirely discredit the possibility that a disruptive application of AI could appear and “revolutionise” medical care. Yet they do try to put the rate of current developments into context and inscribe them in the decades-long process of traditional innovation in healthcare. More importantly, the transition is not bound to evolve based solely on the technology – the slow rate of uptake of these innovations leaves room for collective action on how AI is used in healthcare.

²Pragmatism and Organization Studies, 2018, Philippe Lorino, chapter 6, pp. 158–188.
