The number of people holding extreme views, on subjects such as politics, religion or climate change – to cite just three examples – has increased in recent years [1–3]. This “polarisation”, as it is called, is dangerous: left unchecked, it could weaken democracy itself. Online platforms such as social media play an important role in this context, but the mechanisms by which they foster polarisation are not yet fully understood.
“Recommendation algorithms profoundly shape our digital experience today, determining the films we watch or the songs we listen to,” explains Giordano De Marzo. These algorithms are widely used by most of the websites we visit every day, the best-known examples being the “suggested for you” messages on Facebook, the “recommended items” on Amazon or Google’s PageRank system. They are designed to give us easy access to the content most likely to interest us, and maximise our engagement with the platform.
A team of researchers led by Giordano De Marzo, from the Department of Physics at Sapienza University of Rome, Italy, has studied how a collaborative user-to-user filtering algorithm affects the behaviour of a group of people repeatedly exposed to it. This type of recommendation algorithm is routinely used by online retail giants such as Amazon to identify, based on past activity, the new content that will most interest each user. Using analytical and numerical techniques, the researchers simulated how users’ content preferences change in response to algorithmic recommendations. Their analyses revealed three distinct regimes, or ‘phases’, of the user base’s collective state, one of which traps people in so-called “filter bubbles”.
These states depend on key factors such as the “strength” with which the algorithm recommends items that are liked by similar users, or that are popular overall. The study also identified strategies that allow an algorithm to provide personalised recommendations without creating filter bubbles. This could contribute to the development of less polarising algorithms in the future.
Collaborative filtering [4,5] is one of the best-known and most widely used recommendation algorithms. It relies on the principle that users’ past behaviour can be exploited to identify the new content they will enjoy most. The downside is that these algorithms can create a feedback loop: recommendations bias future choices, which in turn narrow the diversity of content on offer. Loops of this kind produce filter bubble effects, in which users are not exposed to new or differing perspectives, but simply to news and content aligned with their existing beliefs. In short, these loops contribute to “polarisation”. They are similar to the more widely studied “echo chambers” [6–8]; the difference is that filter bubbles arise from algorithmically biased recommendations on online platforms, rather than from interaction between like-minded users.
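To make the idea concrete, here is a minimal sketch of user-to-user collaborative filtering in Python. The toy rating matrix, the choice of cosine similarity and the function names are illustrative assumptions for this sketch, not details taken from the study:

```python
import numpy as np

# Toy user-item rating matrix: rows are users, columns are items.
# A 0 means the user has not yet interacted with that item.
ratings = np.array([
    [5, 4, 0, 1, 0],
    [4, 5, 0, 0, 5],
    [0, 1, 5, 4, 0],
    [1, 0, 4, 5, 0],
], dtype=float)

def cosine_similarity(u, v):
    """Cosine similarity between two rating vectors."""
    norm = np.linalg.norm(u) * np.linalg.norm(v)
    return u @ v / norm if norm > 0 else 0.0

def recommend(user, ratings, k=1):
    """Score each unseen item for `user` by the ratings of similar users."""
    n_users, _ = ratings.shape
    sims = np.array([cosine_similarity(ratings[user], ratings[other])
                     if other != user else 0.0
                     for other in range(n_users)])
    # Average the other users' ratings, weighted by similarity to `user`.
    scores = sims @ ratings / (sims.sum() + 1e-12)
    scores[ratings[user] > 0] = -np.inf   # exclude items already seen
    return np.argsort(scores)[::-1][:k]

# User 0 is most similar to user 1, so the top suggestion is item 4,
# the unseen item that user 1 rated highly.
print(recommend(0, ratings))
```

Because the top recommendation comes from the user's most similar neighbour, repeatedly applying such a scheme feeds each user more of what similar users already like – exactly the feedback loop described above.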
In this new study, published in Physical Review E, Giordano De Marzo and his colleagues found that, depending on two parameters – the strength of the similarity bias (a) and the strength of the popularity bias (b) – a collaborative filtering system can exist in three different phases: disorder, consensus and polarisation. When both biases are sufficiently strong, the system splits into polarised groups, producing the “filter bubble” effect. Fortunately, this can be avoided at the boundary between the disorder and polarisation phases: an algorithm operating at this boundary can provide meaningful recommendations without polarising opinions or trapping users in filter bubbles.
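The effect of bias strength can be illustrated with a deliberately simplified toy model – an assumption for illustration only, including just a popularity bias rather than the study's full two-parameter model. Each user repeatedly adopts an item with probability proportional to its current popularity raised to a bias exponent:

```python
import random
from collections import Counter

def simulate(n_users=200, n_items=20, bias=2.0, steps=5000, seed=0):
    """Toy popularity-biased recommender: at each step a random user
    adopts an item with probability proportional to popularity**bias.
    Returns the fraction of users holding the most popular item."""
    rng = random.Random(seed)
    choices = [rng.randrange(n_items) for _ in range(n_users)]
    for _ in range(steps):
        user = rng.randrange(n_users)
        counts = Counter(choices)
        weights = [counts.get(i, 0) ** bias + 1e-6 for i in range(n_items)]
        choices[user] = rng.choices(range(n_items), weights=weights)[0]
    top = Counter(choices).most_common(1)[0][1]
    return top / n_users

# A weak bias keeps the user base spread across items (disorder);
# a strong bias drives it toward a single dominant item (consensus).
print(simulate(bias=0.5), simulate(bias=3.0))
```

With a weak bias the population stays disordered, while a strong bias collapses everyone onto one item – a crude analogue of the phase behaviour described above, minus the similarity bias that generates the polarised phase.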
“Our research provides a systematic approach to quantifying and analysing the impact of collaborative user-user filtering,” explains Giordano De Marzo. “By employing a statistical physics approach, we were able to simulate and analyse how users’ content preferences change in response to algorithmic recommendations.”
The new method relies on a combination of mathematical modelling and computer simulations. “In particular, we have exploited techniques such as the theory of stochastic processes, probability theory and Pólya urn models (a family of urn models that can be used to interpret many commonly employed statistical models). On the computer side, we leveraged Monte Carlo simulations,” explains Giordano De Marzo.
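As an illustration of the urn models mentioned here, the classic textbook Pólya urn (not the specific model used in the paper) shows how pure reinforcement locks in early random fluctuations, much like a recommendation feedback loop:

```python
import random

def polya_urn(steps=10000, seed=1):
    """Classic Pólya urn: start with one red and one blue ball, draw one
    uniformly at random, then return it together with an extra ball of
    the same colour.  Returns the final share of red balls."""
    rng = random.Random(seed)
    red, blue = 1, 1
    for _ in range(steps):
        if rng.random() < red / (red + blue):
            red += 1
        else:
            blue += 1
    return red / (red + blue)

# Different random seeds converge to different, essentially arbitrary
# shares: the final mix is fixed by early chance events, not by any
# intrinsic quality difference between the colours.
print([round(polya_urn(seed=s), 2) for s in range(5)])
```

Each run settles on a stable but unpredictable share, which is why such urns are a natural mathematical language for recommendation loops that amplify early, accidental preferences.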
Towards more effective recommendation algorithms
These analyses could contribute to the development of new methodologies for designing effective recommendation algorithms, he adds. “By understanding the mechanisms that lead to ‘filter bubbles’, we can develop systems that favour a wide range of content, thereby mitigating the risks of polarisation while enhancing user engagement and content diversity. This is a significant step forward in creating a more balanced and inclusive online information ecosystem.”
The researchers will now study the impact of interactions between users (as commonly observed in social networks) on recommendation algorithms. “Adding this parameter could considerably enrich our understanding of the interplay between social dynamics and algorithm-driven content distribution. This will provide a more holistic view of digital environments,” explains Giordano De Marzo.
They will also study the role of link recommendation algorithms, that is, those that suggest new people to connect with. Finally, they are currently exploring Large Language Models as a way to power more realistic simulations. “These simulations will be the ideal starting point for a more detailed understanding of online dynamics and recommendation algorithms,” he concludes.