What are the next challenges for AI?

“Decisions made by algorithms must be justified”

Sophy Caulier, Independent journalist
On December 1st, 2021 | 4 min reading time
Isabelle Bloch
Professor at Sorbonne University (Chair in Artificial Intelligence)
Key takeaways
  • Symbolic AI is based on explicit rules that reproduce human reasoning. This approach is said to be “inherently explainable”, with a few exceptions.
  • Statistical approaches to AI rely on statistical learning methods, in particular neural networks; it is difficult to extract and express the rules behind what these networks do.
  • The need for explainability arises from issues of trust, ethics and responsibility, and possibly from economic concerns as well.
  • Hybrid AI can address this problem by combining several approaches: knowledge and data, symbolic AI and neural networks, logic and learning.
  • But, whatever the approach, the role of the human being remains essential, and it will always be necessary to justify the decisions made by an algorithm.

How and why should we explain the decisions made by artificial intelligence (AI) algorithms?

The need for explainability is not new! The question was already being asked in ancient times, even if back then it was approached from a philosophical point of view. It was later posed in a formal way at the end of the 19th century, notably in the work of Charles Peirce. This American philosopher and theorist introduced abductive reasoning, i.e. the search for explanations. Many of the methods used in symbolic AI, which are based on knowledge modelling with approaches such as logic, symbolic learning, etc., are said to be ‘inherently explainable’, because the sequence of reasoning that leads to a decision can be identified. But this is only partially true: if the problem becomes too large, with many logical formulas, very complex decision trees or a great number of association rules, explanation becomes difficult.
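
To make the idea of an ‘inherently explainable’ method concrete, here is a minimal, purely illustrative sketch of a rule-based decision. The rules are hypothetical, not taken from any system mentioned here; the point is that the decision is returned together with the explicit chain of rules that was followed.

```python
# Minimal sketch of an "inherently explainable" symbolic decision.
# The rules below are hypothetical; what matters is that the decision
# comes back together with the reasoning that produced it.

def classify(facts, rules):
    trace = []  # rules examined, in order
    for name, condition, conclusion in rules:
        trace.append(name)
        if condition(facts):
            return conclusion, trace  # decision plus its justification
    return "no conclusion", trace

rules = [
    ("R1: fever and cough -> flu suspected",
     lambda f: f["fever"] and f["cough"], "flu suspected"),
    ("R2: fever only -> further tests",
     lambda f: f["fever"], "further tests"),
]

decision, trace = classify({"fever": True, "cough": False}, rules)
print(decision)  # further tests
print(trace)     # ['R1: ...', 'R2: ...'] -- the reasoning that was followed
```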

The question of explainability is all the more relevant today as the second paradigm of AI, statistical approaches, has come to the forefront in recent years. While symbolic AI is based on rules and reproduces human reasoning, statistical approaches rely on statistical learning methods, in particular artificial neural networks trained on large volumes of data. These approaches are part of what is known as machine learning (ML), including deep learning (DL). It is very difficult to extract and express the rules behind what neural networks do, since they start from the data.

How can an AI decision be explained?

First of all, it is necessary to define what to explain, for whom, how and why. The choice of explainability tools or methods depends on the answers to these questions. For neural networks, it is possible to answer them at the level of the data used, at the level of the operation of the network itself, or at the level of the result produced. For the operation, one may ask whether an explanation is necessary at all. Take aspirin, for example: for a long time it was used without anyone knowing how it worked. When the way it worked was finally understood, that knowledge was used to develop new things, without changing the way aspirin itself was used. In the same way, you can drive a car without understanding the engine, but with a level of knowledge that is sufficient to use the car well.

At the level of the result, the explanation may need to go through intermediate steps to explain the final result. For example, I work with radiologists on measuring the thickness of the corpus callosum in premature babies. The radiologists wanted to know where the results came from, which region was recognised in the image and where the measurements were made, in order to understand what contributed to the decision and to explain the final result. These steps were necessary for them to have confidence in the tool.
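
As an illustration of explanation through intermediate steps, here is a toy sketch (not the actual tool discussed in the interview) of a measurement pipeline that keeps its intermediate results, namely the recognised region and where the measurement was taken, alongside the final value. The image, segmentation and measurement are placeholder stand-ins.

```python
# Toy sketch: a pipeline that keeps its intermediate results so the user
# can see what contributed to the final number. The "segmentation" and
# "measurement" below are deliberately simplistic placeholders.

def segment_region(image):
    # Toy "segmentation": keep pixels above a threshold.
    return [[1 if px > 0.5 else 0 for px in row] for row in image]

def measure_thickness(mask):
    # Toy "measurement": count selected pixels per column, report the widest.
    thicknesses = [sum(col) for col in zip(*mask)]
    widest = max(range(len(thicknesses)), key=thicknesses.__getitem__)
    return {"column": widest, "thickness_px": thicknesses[widest]}

def measure_with_trace(image):
    mask = segment_region(image)           # which region was recognised
    measurement = measure_thickness(mask)  # where the measurement was made
    return {"mask": mask, "measurement": measurement}

image = [
    [0.1, 0.7, 0.8, 0.2],
    [0.2, 0.9, 0.6, 0.1],
    [0.1, 0.4, 0.7, 0.2],
]
print(measure_with_trace(image)["measurement"])  # {'column': 2, 'thickness_px': 3}
```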

An algorithm is expected to be neutral, but nothing is ever neutral! A doctor orders an imaging test for a patient because he is looking for something that he can identify in the image: he has an intention. This introduces biases, which are not statistical but cognitive: framing, confirmation, complacency, etc. We face the same biases when looking at the images that have been taken, and again when looking at the results produced by an algorithm. Furthermore, we should not forget that we trust an algorithm more when it shows us what we are looking for. Another factor to consider is the cost of an error, which can be very different depending on whether or not anything has been detected. Radiologists generally prefer to have more false positives (since further examinations will confirm or invalidate what has been detected) than false negatives. It is when the algorithm does not detect anything that it must not be mistaken, even if doctors always verify the results visually.
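
One way to make the asymmetry between false positives and false negatives explicit is to attach different costs to the two kinds of error when turning an algorithm's score into a decision. The following sketch is purely illustrative, with hypothetical cost values and scores.

```python
# Minimal sketch of asymmetric error costs: a missed finding (false
# negative) is assumed far costlier than flagging a healthy case
# (false positive), so the decision threshold is lowered accordingly.
# The cost values and scores below are hypothetical.

COST_FALSE_NEGATIVE = 10.0  # missed finding: very costly
COST_FALSE_POSITIVE = 1.0   # extra examination: comparatively cheap

def decide(probability_of_finding):
    # Flag the case whenever the expected cost of reporting "nothing"
    # exceeds the expected cost of reporting "something".
    expected_cost_if_negative = probability_of_finding * COST_FALSE_NEGATIVE
    expected_cost_if_positive = (1 - probability_of_finding) * COST_FALSE_POSITIVE
    return "flag for review" if expected_cost_if_negative >= expected_cost_if_positive else "no finding"

print(decide(0.15))  # flag for review: even a modest score is flagged
print(decide(0.05))  # no finding
```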

Explainability therefore varies according to the user and how an algorithm is used?

Explanation is a process of conversation, of communication. We adapt the level of explanation to the person we are talking to. To stay within the medical framework, let’s take the example of an image showing a tumour. The doctor will explain this image and the tumour differently depending on whether he is talking to his staff, to students, to a conference audience or to his patient. This is why doctors do not want the results from algorithms to be added to the patient’s records before they have had a chance to check them themselves.

We also need to ask ourselves why we want to explain. Is it to justify a decision, to control the functioning of an algorithm, to discover scientific knowledge or a phenomenon? The objectives vary and therefore require different tools. The stakes also differ: there are issues of trust, ethics and responsibility, and possibly economic issues.

Why is the need for explainability stronger at the moment?

This is mainly due to the increasing use of deep neural networks, which have millions of parameters and are extremely complex. There is a lot of reliance on data, in the hope that increasing the volumes used will improve the results. Yet there is a great deal of domain knowledge that could also be used. This is what hybrid AI proposes to do, combining several approaches to AI: knowledge and data, symbolic AI and neural networks, logic and learning. Personally, I’m a big believer in this. But whatever the approach, the role of the human being remains paramount, and the decisions made by an algorithm will always have to be justified.
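
As a purely illustrative sketch of what combining knowledge and data can look like, the toy example below lets a stand-in for a learned model propose candidates, and keeps only those that are consistent with explicit, hypothetical domain knowledge. None of the names or values come from an actual system described here.

```python
# Toy sketch of one way "hybrid AI" can combine data and knowledge:
# a learned model proposes candidates, and explicit domain rules
# (symbolic knowledge) keep only those consistent with what is known.
# The model output, relations and knowledge base are all hypothetical.

def neural_candidates(image):
    # Stand-in for a trained network: returns (label, score) proposals.
    return [("corpus_callosum", 0.81), ("ventricle", 0.34), ("cerebellum", 0.77)]

# Symbolic knowledge: spatial relations a candidate label must satisfy.
KNOWLEDGE = {
    "corpus_callosum": {"above": "ventricle"},
    "cerebellum": {"below": "ventricle"},
}

def satisfies_knowledge(label, observed_relations):
    required = KNOWLEDGE.get(label, {})
    return all(observed_relations.get(rel) == target for rel, target in required.items())

observed = {"above": "ventricle"}  # relations extracted from the image (toy value)
accepted = [
    (label, score)
    for label, score in neural_candidates("image")
    if score > 0.5 and satisfies_knowledge(label, observed)
]
print(accepted)  # [('corpus_callosum', 0.81)]: cerebellum is ruled out by the knowledge
```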
