Social media: a new paradigm for public opinion

Are online recommendation algorithms polarising users’ views?

Giordano De Marzo , Researcher in the Physics Department at Sapienza University
On January 24th, 2024 | 3 min reading time
Key takeaways
  • With the advent of online platforms, opinions are becoming polarised on many subjects.
  • Recommendation algorithms, which suggest specific content likely to appeal to users, are one of the main causes of this.
  • Using analytical and numerical techniques, researchers have simulated the evolution of user preferences according to algorithmic recommendations.
  • By identifying these strategies, the study could help to develop less-polarising algorithms in the future while maintaining user engagement.
  • This is another step towards creating a more balanced and inclusive online information ecosystem.

The number of people holding extreme views, on subjects such as politics, religion or climate change – to cite just three examples – has increased in recent years [1–3]. This “polarisation”, as it is called, is dangerous, as it could potentially weaken democracy itself if allowed to spread unhindered. Online platforms such as social media play an important role in this context, but the mechanisms by which they foster polarisation are not yet fully understood.

“Recommendation algorithms profoundly shape our digital experience today, determining the films we watch or the songs we listen to,” explains Giordano De Marzo. These algorithms are widely used by most of the websites we visit every day; the best-known examples are the “suggested for you” messages on Facebook, the “recommended items” on Amazon and Google’s PageRank system. They are designed to give us easy access to the content most likely to interest us, and to maximise our engagement with the platform.


A team of researchers led by Giordano De Marzo, from the Department of Physics at Sapienza University of Rome, Italy, has studied how a collaborative user-to-user filtering algorithm affects the behaviour of a group of people repeatedly exposed to it. This type of recommendation algorithm is routinely used by online retail giants such as Amazon to identify, based on past activity, the new content that will be of most interest to users. Using analytical and numerical techniques, the researchers were able to simulate how users’ content preferences change in response to algorithmic recommendations. Their analyses revealed three distinct regimes, or “phases”, in the state of the user base, one of which traps people in so-called “filter bubbles”.

These states depend on key factors such as the “strength” with which the algorithm recommends items that are liked by similar users, or that are popular overall. The study also identified strategies that allow an algorithm to provide personalised recommendations without creating filter bubbles. This could contribute to the development of less polarising algorithms in the future.

Collaborative filtering

Collaborative filtering [4,5] is one of the best-known and most widely used recommendation algorithms. It relies on the principle that users’ past behaviour can be exploited to identify the new content they will enjoy the most. The downside is that these algorithms can create a feedback loop, which naturally biases future choices and reduces the diversity of content available. It is a loop of this kind that leads to filter bubble effects, where users are not exposed to new or differing perspectives, but simply to news and content aligned with their existing beliefs. In short, these loops contribute to “polarisation”. They are similar to “echo chambers”, which have been more widely studied [6–8]. The difference, however, is that filter bubbles are produced by algorithmically-biased recommendations on online platforms, rather than by interaction between like-minded users.
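To make the mechanism concrete, here is a minimal sketch of user-to-user collaborative filtering (the toy ratings matrix and the function name `recommend` are illustrative, not taken from the study): each user is scored against the others, and items liked by the most similar "neighbours" are recommended.

```python
import numpy as np

def recommend(ratings, user, k=2):
    """User-user collaborative filtering sketch: score unseen items by the
    average rating of the k users most similar (cosine similarity) to `user`."""
    norms = np.linalg.norm(ratings, axis=1) + 1e-9
    sims = ratings @ ratings[user] / (norms * norms[user])
    sims[user] = -np.inf                      # exclude the user themselves
    neighbours = np.argsort(sims)[-k:]        # k most similar users
    scores = ratings[neighbours].mean(axis=0)
    scores[ratings[user] > 0] = -np.inf       # only recommend unseen items
    return int(np.argmax(scores))

# toy matrix: 4 users x 5 items (1 = consumed/liked, 0 = not seen)
R = np.array([[1, 1, 0, 0, 0],
              [1, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1]], dtype=float)

print(recommend(R, user=0))  # user 0 is steered towards what their neighbours liked
```

The feedback loop arises when the recommended item is consumed, added to the ratings matrix, and fed back into the next round of similarity computations, progressively narrowing what each user sees.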

In this new study, published in Physical Review E, Giordano De Marzo and his colleagues found that, depending on two parameters – the strength of the similarity bias (a) and the strength of the popularity bias (b) – a collaborative filtering system can exist in three different phases: disorder, consensus and polarisation. When both biases are sufficiently strong, the system forms polarised groups, leading to the “filter bubble” effect. Fortunately, this disadvantage can be avoided at the boundary between disorder and polarisation: an algorithm operating at this boundary can provide meaningful recommendations without inducing opinion polarisation or trapping users in filter bubbles.
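The role of the two bias strengths can be illustrated with a deliberately simplified feedback-loop simulation. This is not the paper's actual model – the mixing rule, user and step counts below are all assumptions for illustration – but it reproduces the qualitative effect: weak biases leave users mixed (disorder), while a strong similarity bias locks each user into their own content (polarised groups).

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(a, b, n_users=50, n_items=2, steps=2000):
    """Toy feedback loop (illustrative, not the study's equations): the chance
    of showing item i to a user mixes a similarity term (their own past
    consumption, weight a), a popularity term (global consumption, weight b)
    and a uniform exploration term."""
    counts = np.ones((n_users, n_items))          # per-user consumption counts
    for _ in range(steps):
        u = rng.integers(n_users)
        personal = counts[u] / counts[u].sum()    # similarity bias: "more of the same"
        popular = counts.sum(0) / counts.sum()    # popularity bias: what everyone consumes
        uniform = np.ones(n_items) / n_items
        p = (uniform + a * personal + b * popular) / (1 + a + b)
        counts[u, rng.choice(n_items, p=p)] += 1
    shares = counts[:, 0] / counts.sum(1)         # each user's share of item 0
    return shares.std()                           # spread across users ~ polarisation proxy

# weak biases keep users mixed; a strong similarity bias splits them into camps
print(simulate(a=0.1, b=0.1) < simulate(a=20.0, b=0.1))
```

In the same spirit, cranking up b instead drives every user towards the same item (consensus), which is why the interesting operating point sits at the boundary between the regimes.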

“Our research provides a systematic approach to quantifying and analysing the impact of collaborative user-user filtering,” explains Giordano De Marzo. “By employing a statistical physics approach, we were able to simulate and analyse how users’ content preferences change in response to algorithmic recommendations.”

The new method relies on a combination of mathematical modelling and computer simulations. “In particular, we have exploited techniques such as stochastic process theory, probability theory and Pólya urn models (a family of urn models that can be used to interpret many commonly-employed statistical models). On the computer side, we leveraged Monte Carlo simulations,” explains Giordano De Marzo.
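The Pólya urn captures, in its simplest form, the rich-get-richer dynamic behind recommendation feedback loops. A minimal Monte Carlo sketch (parameters chosen for illustration, not from the study):

```python
import random

random.seed(1)

def polya_urn(red=1, blue=1, draws=1000):
    """Classic Pólya urn: draw a ball at random, return it along with one
    extra ball of the same colour. Early random draws get amplified."""
    for _ in range(draws):
        if random.random() < red / (red + blue):
            red += 1
        else:
            blue += 1
    return red / (red + blue)

# Monte Carlo runs: the final red fraction varies wildly between runs,
# because the limiting share is itself random (uniform on [0, 1] for a 1+1 start)
fractions = [polya_urn() for _ in range(5)]
print([round(f, 2) for f in fractions])
```

Each run corresponds to one possible history of self-reinforcing recommendations: which "colour" dominates is decided by chance early on, then locked in by the feedback.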

Towards more effective recommendation algorithms

These analyses could contribute to the development of new methodologies for designing effective recommendation algorithms, he adds. “By understanding the mechanisms that lead to ‘filter bubbles’, we can develop systems that favour a wide range of content, thereby mitigating the risks of polarisation while enhancing user engagement and content diversity. This is a significant step forward in creating a more balanced and inclusive online information ecosystem.”

The researchers will now study the impact of interactions between users (as commonly observed in social networks) on recommendation algorithms. “Adding this parameter could considerably enrich our understanding of the interplay between social dynamics and algorithm-driven content distribution. This will provide a more holistic view of digital environments,” explains Giordano De Marzo.

They will also study the role of link recommendation algorithms, that is, those that suggest people we might connect with. Finally, they are currently using Large Language Models to power more realistic simulations. “These simulations will be the ideal starting point for a more detailed understanding of online dynamics and recommendation algorithms,” he concludes.

Isabelle Dumé
[1] “The partisan divide on political values grows even wider,” Pew Research Center, https://www.pewresearch.org/politics/2017/10/05/the-partisan-divide-on-political-values-grows-even-wider/ (2017).
[2] Uthsav Chitra and Christopher Musco, “Analyzing the impact of filter bubbles on social network polarization,” in Proceedings of the 13th International Conference on Web Search and Data Mining, WSDM ’20 (Association for Computing Machinery, New York, NY, USA, 2020), pp. 115–123.
[3] Michael Maes and Lukas Bischofberger, “Will the personalization of online social networks foster opinion polarization?” Available at SSRN 2553436 (2015).
[4] Jonathan L. Herlocker, Joseph A. Konstan, and John Riedl, “Explaining collaborative filtering recommendations,” in Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work (2000), pp. 241–250.
[5] Xiaoyuan Su and Taghi M. Khoshgoftaar, “A survey of collaborative filtering techniques,” Advances in Artificial Intelligence 2009 (2009), 10.1155/2009/421425.
[6] Matteo Cinelli, Gianmarco De Francisci Morales, Alessandro Galeazzi, Walter Quattrociocchi, and Michele Starnini, “The echo chamber effect on social media,” Proceedings of the National Academy of Sciences 118 (2021), 10.1073/pnas.2023301118.
[7] Wesley Cota, Silvio C. Ferreira, Romualdo Pastor-Satorras, and Michele Starnini, “Quantifying echo chamber effects in information spreading over political communication networks,” EPJ Data Science 8, 35 (2019).
[8] Pablo Barberá, John T. Jost, Jonathan Nagler, Joshua A. Tucker, and Richard Bonneau, “Tweeting from left to right: Is online political communication more than an echo chamber?” Psychological Science 26, 1531–1542 (2015), https://doi.org/10.1177/0956797615594620.
