What does it mean to “trust science”?

How to make the future more rational

El Mahdi El Mhamdi, Assistant professor at École Polytechnique and Research scientist at Google
June 23rd, 2021
Key takeaways
  • There are two types of logical reasoning: deduction and induction. Deduction is used to infer knowledge based on a known ‘rule’, but first defining the ‘rule’ requires inductive reasoning – which is less well understood.
  • In the past, deduction has played an essential role for society, for example in the formation of democracy, which relies on the ability of citizens to make informed and considered decisions.
  • Today, however, the power of automated deduction in our daily lives poses a threat to this capacity, for example through the spread of 'fake news'.
  • With the recent development of automated induction, we must try to preserve our rational autonomy through the education of future generations about the use of logic and the scientific method in general.

Over a year into the pandemic, a lack of resources in science communication, the abuse of bad epistemics and questionable global governance – such as vaccine distribution – are still resulting in thousands of avoidable deaths per day. Even in western democracies, politicians still struggle to understand the role of aerosol-based transmission and, as such, the vital improvements that could easily be achieved by better ventilation. Meanwhile, scepticism around vaccines – whilst it may be waning – remains long-standing collateral damage caused by disorder in the information landscape, or ‘infodemic’.

Limits of rationality

Of all human traits, rationality is arguably the one we cherish most, because we consider it a defining distinction between us and other animals. Unfortunately, this also comes with a recurrent overconfidence in our ability to be rational, leading us to trust our intuition, believe our gut feeling and listen to common sense – all of which can work against rationality. In addition, nobody is born rational or inherently logical. It is therefore up to society, through the accumulation of knowledge, to endow individuals with the ability to think objectively. As such, the future of humanity’s ability to perform collective problem solving requires a scale-up of the number of citizens well-equipped with external thinking strategies that include logic and the scientific method.

Of all human characteristics, rationality is probably the one we cherish most.

A possible first step could be to repair the populist impression that our technologically driven era is too advanced for ‘non-productive’ armchair philosophy. After all, the modern computer era was not instigated by engineers trying to build a gadget, but rather by a group of philosophers who were literally thinking about thinking. It was the foundational crisis in logic in the late 19th century that led a series of philosophers and mathematicians to question the very act of “processing information”. In doing so, they found useful loopholes in logic and set the right questions, to which Kurt Gödel, Alan Turing, Alonzo Church and others would bring answers – a precursor to much of what surrounds us today, in the form of laptops and smartphones1.

Deduction vs. induction

Another useful step might be to stress how logic and the scientific method, while still an endless work in progress, can be viewed through two of their most important components: deduction and induction. Simply put, deduction is “top-down” logic: how to infer a conclusion from a general principle or law. This covers launching a space rocket, curing a well-known disease, or applying a law in court. Induction, by contrast, is “bottom-up” logic: how to infer – based on observations – the laws that explain how those observations come about. Describing the laws of gravity, discovering the cure for a new disease or defining a law for society to abide by all require an inductive mindset.

Deductive logic was historically the first to be established through algorithms. While today this term is primarily associated with technology, it should be stressed that it was originally derived from the name of a thinker, Al-Khwarizmi. He was mostly trying to help lawyers by writing step-by-step rules that they could apply to reach comparable results2. Far from being a tool to render the decision-making process obscure, algorithms, like written law, were historically a tool for transparency. We feel safer if we know we will be judged according to a well-defined rule or law, rather than according to the fluctuating mood of an autocrat.

Inductive processes are harder than deductive ones. Even though medieval thinkers such as Ibn Al-Haytham (Alhazen), Jabir Ibn Hayyan (Geber) and, of course, Galileo left early traces of progress in formalising the scientific method used today, we still do not have a widely adopted algorithm for induction as we do for deduction. Important attempts to provide algorithms for induction were made by Bayes and Laplace3. The latter even produced an important, yet highly overlooked, “Philosophical Essay on Probabilities”, decades after formalising the laws of probability (in the form of a course given at the then nascent École Normale and École Polytechnique). Reading Laplace’s essay today, one finds pioneering ideas about what can go wrong with induction – something modern cognitive psychologists refer to as cognitive biases.

The problem with deduction

Once we look at the details, many cognitive biases fall under an excessive use of a deductive mindset in situations where an inductive one is more appropriate. The most common of these is confirmation bias: our brain would rather seek facts that confirm the hypothesis it already holds than expend mental effort to go against it. There is also the other (less common) extreme, excessive relativism, where we refuse any causal interpretation, even when the data justifies an explanation more appropriately than existing alternatives.

To compensate for the weaknesses of the human mind, scientists devised heuristics to better perform induction: hypothesis testing, controlled experiments, randomised trials, modern statistics and so on. Bayes and Laplace went even further and gave us an algorithm to perform induction – Bayes’ equation. It can be used to show that first-order logic, where statements are either true or false, is a special case of the laws of probability, where useful room is left for uncertainty. Whilst the language of deduction mostly answers questions starting with “why” with a pre-defined “because”, rigorous induction requires a more probabilistic analysis that adds a “how much” to weigh every different possible cause.
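To make the “how much” concrete, here is a minimal sketch of Bayes’ equation at work. All the numbers are hypothetical, chosen purely for illustration: a belief in a hypothesis is revised by evidence rather than simply confirmed or refuted.

```python
# A minimal illustration of Bayes' rule as an "algorithm for induction":
# update the probability of a hypothesis H after observing evidence E.
# All numbers below are hypothetical, chosen only for illustration.

def bayes_update(prior, likelihood, likelihood_if_false):
    """Return P(H | E) given P(H), P(E | H) and P(E | not H)."""
    evidence = likelihood * prior + likelihood_if_false * (1 - prior)
    return likelihood * prior / evidence

# Hypothesis: a patient has a disease with 1% prevalence (the prior).
# Evidence: a test that detects the disease 90% of the time,
# but also fires on 5% of healthy patients.
posterior = bayes_update(prior=0.01, likelihood=0.9, likelihood_if_false=0.05)
print(round(posterior, 3))  # → 0.154
```

The inductive answer is not a binary “diseased or not” but a weight: a positive test raises the belief from 1% to roughly 15%, because the many false positives among healthy patients must be weighed against the true positives.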

To compensate for the weaknesses of the human mind and to make better use of induction, scientists have developed heuristics.

Philosopher Daniel Dennett4 describes some of our greatest scientific and philosophical revolutions as “strange inversions of reasoning”. Darwin inverted the intuitive logic by showing that complex beings (i.e., humans) do not necessarily need an even more complex creator in order to emerge. Turing showed that complex information processing does not require the agent performing it (i.e., the computer) to be aware of anything other than simple mechanical logical instructions. I would argue that what Dennett calls strange inversions of reasoning are historical moves from a deductive (and somewhat creationist) framework to an inductive one. The more complex the problem, the less useful a “why” is, and the more a “how much” is needed.

Induction as a societal tool

While scientists spent the past millennia devising logic and the scientific method, the larger part of society realised the limits of the deductive mindset that comes with either autocracy, where a monarch sets the rule, or theocracy, where God – often a comfortable shield for the monarch – sets the rule. This led to the progressive development of democracy, where the aggregation of opinions helps society perform a better and more robust collective induction and, in principle, establish more effective rules. Yet democracy rests on the hope that a significant fraction of society is well-informed and acting in its own interest.

Today, this assumption is under greater threat than ever before. For the first time in human history, we are producing information dissemination tools that combine the broadcast power of the most dystopian propaganda machine with the fine-grained personalisation of individual door-to-door campaigning – for better or for worse. The digital tools we enjoy today are mostly the outcome of automating deduction (programming), which largely happened during the past century. As we enter a new phase of automation, this time data-driven, it is important to stress that, beyond the gadgets and the technology, we are trying to automate induction and, while doing so, to better understand what induction is and how to do it right.

Keeping that in mind when we design our courses on data science, or communicate the advances of artificial intelligence to the public, might help produce a new generation of citizens who are not only able to build or use these tools, but also able to join the larger conversation on the future of reasoning. A conversation in which induction, deduction, society’s diet of information and appropriate collective decision-making are empowered, and not corrupted, by the very digital tools that were invented as mere side products of the human endeavour – our endeavour to understand and automate what we cherish the most: our ability to think.

1It is recommended to watch logician Moshe Vardi’s lecture “From Aristotle to the iPhone” (given at the Israel Institute for Advanced Studies in 2016; many later versions exist online).
2It should also be stressed that Al-Khwarizmi’s book is written in Arabic, where computation and judgment are sometimes referred to using the same term: Hissab. (The Day of Judgment, Yawm Al Hissab, in the Quranic tradition, literally means “the day of computation”.)
3The Equation of Knowledge: From Bayes’ Rule to a Unified Philosophy of Science, Lê Nguyên Hoang. Chapman and Hall/CRC, 2020.
4A phrase Dennett borrowed from Robert MacKenzie Beverley’s critique of Darwin’s “On the Origin of Species”, turning the critique into an actual supporting statement.


El Mahdi El Mhamdi

Assistant professor at École Polytechnique and Research scientist at Google

El Mahdi El Mhamdi’s research is motivated by the understanding of robust information processing in nature, machines and society, with a focal line of research on the mathematics of collective information processing and distributed learning. He is the co-author of the upcoming book “The Fabulous Endeavor: Robustly Beneficial Information”, on the scientific and social challenges of large-scale information processing, already available in French as “Le Fabuleux Chantier” (EDP Sciences, November 2019).
