What does it mean to “trust science”?

How to make the future more rational

By El Mahdi El Mhamdi, Assistant professor at École Polytechnique and Research scientist at Google
On June 23rd, 2021 | 5 min reading time
Key takeaways
  • There are two types of logical reasoning: deduction and induction. Deduction is used to infer knowledge from a known ‘rule’, but defining that ‘rule’ in the first place requires inductive reasoning, which is less well understood.
  • In the past, deduction has played an essential role for society, for example in the formation of democracy, which relies on the ability of citizens to make informed and considered decisions.
  • Today, however, the power of automated deduction in our daily lives poses a threat to this capacity, for example through the spread of 'fake news'.
  • With the recent development of automated induction, we must try to preserve our rational autonomy by educating future generations in the use of logic and the scientific method in general.

Over a year into the pandemic, a lack of resources in science communication, the abuse of poor epistemics and questionable global governance, such as of vaccine distribution, are still resulting in thousands of avoidable deaths per day. Even in western democracies, politicians still struggle to understand the role of aerosol-based transmission and, as such, the vital improvements that could easily be achieved through better ventilation. Meanwhile, scepticism around vaccines, whilst it may be easing, remains long-standing collateral damage caused by disorder in the information landscape, or ‘infodemic’.

Limits of rationality

Of all human traits, rationality is arguably the one we cherish most, because we consider it a defining distinction between us and other animals. Unfortunately, this also comes with a recurrent overconfidence in our ability to be rational, leading us to trust our intuition, believe our gut feelings and listen to common sense, all of which can work against rationality. In addition, nobody is born rational or with inherent logic. Hence, it is up to society, through the accumulation of knowledge, to endow individuals with the ability to think objectively. As such, the future of humanity’s ability to perform collective problem solving requires a scale-up of the number of citizens well equipped with external thinking strategies that include logic and the scientific method.

Of all human characteristics, rationality is probably the one we cherish most.

A possible first step could be to repair the populist impression that our technologically driven era is too advanced for ‘non-productive’ armchair philosophy. After all, the modern computer era was not instigated by engineers trying to build a gadget, but rather by a group of philosophers who were literally thinking about thinking. It was the foundational crisis in logic in the late 19th century that led a series of philosophers and mathematicians to question the very act of “processing information”. In doing so, they found useful loopholes in logic and set the right questions, to which Kurt Gödel, Alan Turing, Alonzo Church and others would bring answers: precursors of many things we use today, in the form of laptops and smartphones1.

Deduction vs. induction

Another useful step might be to stress how logic and the scientific method, while still an endless work in progress, can be viewed through two of their most important components: deduction and induction. Simply put, deduction is “top-down” logic: how to infer a conclusion from a general principle or law. This covers launching a space rocket, curing a well-known disease, or applying a law in court. Induction, by contrast, is “bottom-up” logic: how to infer, from observations, the laws that explain those observations. This may be describing the laws of gravity, discovering the cure for a new disease or defining a law for society to abide by, all of which require an inductive mindset.
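
To make the contrast concrete, here is a minimal illustrative sketch (not from the article): the free-fall law is used deductively to predict an observation, and the same law’s parameter is recovered inductively from a handful of hypothetical measurements.

```python
# Illustrative only: deduction applies a known rule; induction estimates the rule.

# Deduction: from the known law d = ½·g·t², predict the distance fallen.
def predict_fall_distance(g: float, seconds: float) -> float:
    return 0.5 * g * seconds ** 2

# Induction: from observed (seconds, distance) pairs, estimate the law's parameter g.
def estimate_g(observations: list[tuple[float, float]]) -> float:
    estimates = [2 * d / t ** 2 for t, d in observations if t > 0]
    return sum(estimates) / len(estimates)

print(predict_fall_distance(9.81, 2.0))                    # deduce: ~19.6 m
print(estimate_g([(1.0, 4.9), (2.0, 19.7), (3.0, 44.0)]))  # induce: ~9.8 m/s²
```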

Deductive logic was historically the first to be established through algorithms. While today this term is primarily associated with technology, it should be stressed that it was originally derived from the name of a thinker, Al-Khwarizmi. He was mostly trying to help lawyers by writing step-by-step rules that they could apply to reach comparable results2. Far from being a tool to render the decision-making process obscure, algorithms, like written law, were historically a tool for transparency. We feel safer if we know we will be judged according to a well-defined rule or law, rather than according to the fluctuating mood of an autocrat.

Inductive processes are harder than deductive ones. Even though medieval thinkers such as Ibn Al Haytham (Alhazen), Jabir Ibn Hayyan (Geber) and, of course, Galileo left early traces of progress in formalising the scientific method used today, we still do not have a widely adopted algorithm for induction as we do for deduction. Important attempts to provide algorithms for induction were made by Bayes and Laplace3. The latter even produced an important, yet highly overlooked, “Philosophical Essay on Probabilities”, decades after formalising the laws of probability (in the form of a course given at the then nascent École Normale and École Polytechnique). Reading Laplace’s essay today, one finds pioneering ideas about what can go wrong with induction, something modern cognitive psychologists refer to as cognitive biases.

The problem with deduction

Once we look at the details, many cognitive biases fall under an excessive use of a deductive mindset in situations where an inductive one would be more appropriate. The most common of these is confirmation bias: our brain would rather seek facts that confirm the hypothesis it already holds than expend the mental effort to challenge it. There is also the other (less common) extreme, excessive relativism, where we refuse any causal interpretation, even when the data justifies one explanation far better than the existing alternatives.

To compensate for the weaknesses of the human mind, scientists devised heuristics to better perform induction: hypothesis testing, controlled experiments, randomised trials, modern statistics and so on. Bayes and Laplace went even further and gave us an algorithm to perform induction: Bayes’ equation. It can be used to show that first-order logic, where statements are either true or false, is a special case of the laws of probability, in which useful room is left for uncertainty. Whilst the language of deduction mostly answers questions starting with “why” with a pre-defined “because”, rigorous induction requires a more probabilistic analysis that adds a “how much” to weigh every possible cause.
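
For reference, Bayes’ equation mentioned here can be written as follows (a standard formulation, not reproduced from the article): when every probability is forced to be 0 or 1 it collapses to ordinary true/false logic, while intermediate values carry the “how much”.

```latex
% Bayes' rule: updating belief in a hypothesis H after observing evidence E.
\[
  P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)},
  \qquad
  P(E) = \sum_{i} P(E \mid H_i)\, P(H_i).
\]
% If the prior P(H) and the likelihood P(E | H) take only the values 0 or 1,
% the update reduces to classical deductive (true/false) inference; values in
% between quantify uncertainty, the "how much" that rigorous induction needs.
```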

To compensate for the weaknesses of the human mind and to make better use of induction, scientists have developed heuristics.

Philosopher Daniel Dennett4 describes some of our greatest scientific and philosophical revolutions as “strange inversions of reasoning”. Darwin inverted the usual logic by showing that complex beings (i.e., humans) do not necessarily need an even more complex creator in order to emerge. Turing showed that complex information processing does not require the agent performing it (i.e., the computer) to be aware of anything other than simple mechanical logical instructions. I would like to argue that what Dennett calls strange inversions of reasoning are historical moves from a deductive (and somewhat creationist) framework to an inductive framework. The more complex the problem, the less useful a “why” is, and the more a “how much” is needed.

Induction as a societal tool

While scientists were busy devising logic and the scientific method over the past millennia, the larger part of society realised the limits of the deductive mindset that comes with either autocracy, where a monarch sets the rule, or theocracy, where God, often a comfortable shield for the monarch, sets the rule. This led to the progressive development of democracy, where the aggregation of opinions helps society perform a better and more robust collective induction and, in principle, establish more effective rules. Yet democracy rests on the hope that a significant fraction of society is well informed and acting in its own interest.

Today, this assumption is under greater threat than ever before. For the first time in human history, we are producing information dissemination tools that combine the broadcast power of the most dystopian propaganda machine with the fine-grained personalisation of individual door-to-door campaigning, for better or for worse. The digital tools we enjoy today are mostly the outcome of automating deduction (programming), which largely happened during the past century. As we enter a new phase of automation, this time data-driven, it is important to stress that, beyond the gadgets and the technology, we are trying to automate induction and, while doing so, to better understand what induction is and how to do it right.

Keeping that in mind when we design our courses on data science or communicate the advances of artificial intelligence to the public might help produce a new generation of citizens who are not only able to build or use these tools, but also able to join the larger conversation on the future of reasoning. A conversation in which induction, deduction, society’s diet of information and appropriate collective decision-making are empowered, not corrupted, by the very digital tools that were invented as mere side products of the human endeavour. Our endeavour to understand and automate what we cherish the most: our ability to think.

1 It is recommended to watch logician Moshe Vardi’s lecture “From Aristotle to the iPhone” (given at the Israel Institute for Advanced Studies in 2016; many later versions exist online).
2 It should also be stressed that Al-Khwarizmi’s book was written in Arabic, where computation and judgment are sometimes referred to using the same term: Hissab. (The Day of Judgment, Yawm Al Hissab, in the Quranic tradition, literally means “the day of computation”.)
3 Lê Nguyên Hoang, The Equation of Knowledge: From Bayes’ Rule to a Unified Philosophy of Science, Chapman and Hall/CRC, 2020.
4 A phrase Dennett borrowed from Robert MacKenzie Beverley’s critique of Darwin’s “On the Origin of Species”, turning the criticism into a supporting statement.

Contributors

El Mahdi El Mhamdi

Assistant professor at École Polytechnique and Research scientist at Google

El Mahdi El Mhamdi’s research is motivated by the understanding of robust information processing in nature, machines and society, with a focal line of research on the mathematics of collective information processing and distributed learning. He is the co-author of the upcoming book “The Fabulous Endeavor: Robustly Beneficial Information”, on the scientific and social challenges of large-scale information processing, already available in French as “Le Fabuleux Chantier” (EDP Sciences, November 2019).
