Social media: a new paradigm for public opinion

Are online recommendation algorithms polarising users’ views?

with Giordano De Marzo, Researcher in the Physics Department at Sapienza University
January 24th, 2024
3 min reading time
Key takeaways
  • With the advent of online platforms, opinions are becoming polarised on many subjects.
  • Recommendation algorithms, which serve each user content likely to appeal to them, are one of the main causes of this.
  • Using analytical and numerical techniques, researchers have simulated the evolution of user preferences according to algorithmic recommendations.
  • By identifying these strategies, the study could help develop less-polarising algorithms in the future while still keeping users engaged.
  • This is another step towards creating a more balanced and inclusive online information ecosystem.

The number of people holding extreme views, on subjects such as politics, religion or climate change – to cite just three examples – has increased in recent years1,2,3. This “polarisation”, as it is called, is dangerous, as it could potentially weaken democracy itself if allowed to spread unhindered. Online platforms such as social media play an important role in this context, but the mechanisms by which they foster polarisation are not yet fully understood.

“Recommendation algorithms profoundly shape our digital experience today, determining the films we watch or the songs we listen to,” explains Giordano De Marzo. These algorithms are widely used by most of the websites we visit every day, the best-known examples being the “suggested for you” messages on Facebook, the “recommended items” on Amazon or Google’s PageRank system. They are designed to give us easy access to the content most likely to interest us, and maximise our engagement with the platform.

A team of researchers led by Giordano De Marzo, from the Department of Physics at Sapienza University in Rome, Italy, has studied how a collaborative user-to-user filtering algorithm affects the behaviour of a group of people repeatedly exposed to it. This type of recommendation algorithm is routinely used by online retail giants such as Amazon to identify new content, based on past activity, that will be of most interest to users. Using analytical and numerical techniques, the researchers were able to simulate how the users’ content preferences change in response to algorithmic recommendations. Their analyses revealed three distinct regimes, or ‘phases’, in the user base’s state, one of which traps people in so-called “filter bubbles”.

These states depend on key factors such as the “strength” with which the algorithm recommends items that are liked by similar users, or that are popular overall. The study also identified strategies that allow an algorithm to provide personalised recommendations without creating filter bubbles. This could contribute to the development of less polarising algorithms in the future.

Collaborative filtering

Collaborative filtering4,5 is one of the best-known and most widely used recommendation algorithms. It relies on the principle that the past behaviour of users can be exploited to identify new content that they will enjoy the most. The downside is that these algorithms can lead to a feedback loop. This loop naturally tends to bias future choices, reducing the diversity of content available. It is a loop of this kind that leads to filter bubble effects, where users are not exposed to new or differing perspectives, but simply to news and content aligned with their existing beliefs. In short, these loops contribute to “polarisation”. They are similar to “echo chambers”, which have been more widely studied6,7,8. However, the difference is that bubbles are produced by algorithmically-biased recommendations on online platforms, rather than by interaction between like-minded users.
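The mechanism described above can be sketched in a few lines. The code below is a minimal, illustrative user-to-user collaborative filter, not the system studied in the paper: the toy matrix, the cosine-similarity measure and the `recommend` helper are all assumptions for this sketch. It finds the users most similar to a target user and ranks items they liked that the target has not yet seen.

```python
import numpy as np

def recommend(ratings, user, k=2, n_items=2):
    """Recommend unseen items to `user` from their k most similar users.

    `ratings` is a (n_users, n_items) matrix with 0 meaning "not yet
    consumed". Function name, cosine similarity and the averaging rule
    are illustrative choices, not the paper's implementation.
    """
    norms = np.linalg.norm(ratings, axis=1, keepdims=True) + 1e-12
    unit = ratings / norms
    sims = unit @ unit[user]                   # cosine similarity to everyone
    sims[user] = -np.inf                       # exclude the user themselves
    neighbours = np.argsort(sims)[-k:]         # k most similar users
    scores = ratings[neighbours].mean(axis=0)  # neighbours' average rating
    scores[ratings[user] > 0] = -np.inf        # only rank unseen items
    return np.argsort(scores)[::-1][:n_items]

# Toy matrix: rows = users, columns = items (ratings 0-5).
R = np.array([[5, 4, 0, 0],
              [5, 5, 1, 0],
              [0, 0, 4, 5],
              [1, 0, 5, 4]])
print(recommend(R, user=0))   # unseen items ranked via similar users
```

Because every recommendation is derived from what similar users already consumed, each accepted suggestion makes the similarity structure stronger, which is precisely the feedback loop the paragraph describes.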

In this new study, published in Physical Review E, Giordano De Marzo and his colleagues found that, depending on two parameters, the strength of the similarity bias (a) and the strength of the popularity bias (b), a collaborative filtering system can exist in three different phases: disorder, consensus and polarisation. Furthermore, when both biases are sufficiently strong, the system forms polarised groups, leading to the “filter bubble” effect. Fortunately, this disadvantage can be avoided at the boundary between disorder and polarisation. Indeed, an algorithm at this boundary can provide meaningful recommendations without inducing opinion polarisation or trapping users in filter bubbles.
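To get an intuition for how two bias strengths can produce qualitatively different regimes, here is a toy Monte Carlo model, not the published one: at each step a random user consumes an item drawn with probability proportional to (counts of their most similar user + 1)^a times (global counts + 1)^b. The update rule, parameter values and the concentration measure are all illustrative assumptions.

```python
import numpy as np

def simulate(a, b, n_users=50, n_items=20, steps=4000, seed=0):
    """Toy reinforcement dynamic (illustrative, NOT the published model).

    Each step, a random user consumes one item drawn with probability
    proportional to (counts of their most similar user + 1)**a times
    (global item counts + 1)**b. Returns the share of all consumption
    captured by the single most popular item (a crude concentration proxy).
    """
    rng = np.random.default_rng(seed)
    counts = np.zeros((n_users, n_items))
    for _ in range(steps):
        u = rng.integers(n_users)
        sims = counts @ counts[u]          # similarity = history overlap
        sims[u] = -1.0                     # exclude the user themselves
        v = int(np.argmax(sims))           # most similar other user
        w = (counts[v] + 1) ** a * (counts.sum(axis=0) + 1) ** b
        item = rng.choice(n_items, p=w / w.sum())
        counts[u, item] += 1
    totals = counts.sum(axis=0)
    return totals.max() / totals.sum()

# Weak biases: consumption stays spread out (a disorder-like state).
# Strong biases: a few items dominate (a consensus/polarised-like state).
for a, b in [(0.0, 0.0), (1.5, 1.5)]:
    print(a, b, round(simulate(a, b), 2))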

“Our research provides a systematic approach to quantifying and analysing the impact of collaborative user-user filtering,” explains Giordano De Marzo. “By employing a statistical physics approach, we were able to simulate and analyse how users’ content preferences change in response to algorithmic recommendations.”

The new method relies on a combination of mathematical modelling and computer simulations. “In particular, we have exploited techniques such as the theory of stochastic processes, probability theory and Pólya urn models (a family of urn models that can be used to interpret many commonly-employed statistical models). On the computer side, we leveraged Monte Carlo simulations,” explains Giordano De Marzo.
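A Pólya urn, one of the tools named above, can be simulated in a few lines; the sketch below is the textbook urn, not the paper's model. Drawing a ball with probability proportional to its colour's count and returning it with a copy reproduces exactly the kind of self-reinforcing loop discussed in this article: different random seeds lock in very different final colour shares.

```python
import random

def polya_urn(steps, colours=2, seed=0):
    """Classic Pólya urn (textbook version, not the paper's exact model).

    Draw a ball with probability proportional to its colour's count and
    return it together with one extra ball of the same colour. Early
    draws get reinforced, the same lock-in mechanism behind
    recommendation feedback loops.
    """
    rng = random.Random(seed)
    counts = [1] * colours          # start with one ball of each colour
    for _ in range(steps):
        drawn = rng.choices(range(colours), weights=counts)[0]
        counts[drawn] += 1
    total = sum(counts)
    return [c / total for c in counts]

# Different seeds settle at different stable shares: the limit of a
# two-colour urn started with one ball of each is itself a random value.
for seed in range(3):
    print(polya_urn(10_000, seed=seed))
```

Monte Carlo study here simply means running many such random trajectories and measuring statistics of the outcomes, which is how the weak- and strong-bias regimes can be mapped numerically.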

Towards more effective recommendation algorithms

These analyses could contribute to the development of new methodologies for designing effective recommendation algorithms, he adds. “By understanding the mechanisms that lead to ‘filter bubbles’, we can develop systems that favour a wide range of content, thereby mitigating the risks of polarisation while enhancing user engagement and content diversity. This is a significant step forward in creating a more balanced and inclusive online information ecosystem.”

The researchers will now study the impact of interactions between users (as commonly observed in social networks) on recommendation algorithms. “Adding this parameter could considerably enrich our understanding of the interplay between social dynamics and algorithm-driven content distribution. This will provide a more holistic view of digital environments,” explains Giordano De Marzo.

They will also study the role of link recommendation algorithms, that is, those that suggest new people to connect with. Finally, they are currently using Large Language Models to power more realistic simulations. “These simulations will be the ideal starting point for a more detailed understanding of online dynamics and recommendation algorithms,” he concludes.

Isabelle Dumé
1. “The partisan divide on political values grows even wider,” https://www.pewresearch.org/politics/2017/10/05/the-partisan-divide-on-political-values-grows-even-wider/ (2017).
2. Uthsav Chitra and Christopher Musco, “Analyzing the impact of filter bubbles on social network polarization,” in Proceedings of the 13th International Conference on Web Search and Data Mining, WSDM ’20 (Association for Computing Machinery, New York, NY, USA, 2020), pp. 115–123.
3. Michael Maes and Lukas Bischofberger, “Will the personalization of online social networks foster opinion polarization?” Available at SSRN 2553436 (2015).
4. Jonathan L. Herlocker, Joseph A. Konstan, and John Riedl, “Explaining collaborative filtering recommendations,” in Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work (2000), pp. 241–250.
5. Xiaoyuan Su and Taghi M. Khoshgoftaar, “A survey of collaborative filtering techniques,” Advances in Artificial Intelligence 2009 (2009), 10.1155/2009/421425.
6. Matteo Cinelli, Gianmarco De Francisci Morales, Alessandro Galeazzi, Walter Quattrociocchi, and Michele Starnini, “The echo chamber effect on social media,” Proceedings of the National Academy of Sciences 118 (2021), 10.1073/pnas.2023301118.
7. Wesley Cota, Silvio C. Ferreira, Romualdo Pastor-Satorras, and Michele Starnini, “Quantifying echo chamber effects in information spreading over political communication networks,” EPJ Data Science 8, 35 (2019).
8. Pablo Barberá, John T. Jost, Jonathan Nagler, Joshua A. Tucker, and Richard Bonneau, “Tweeting from left to right: Is online political communication more than an echo chamber?” Psychological Science 26, 1531–1542 (2015), 10.1177/0956797615594620.
