Do recommender systems shape our preferences?
This week we’re highlighting some work exploring whether recommender systems shape our preferences - by making us more predictable, by polarizing or radicalizing us, or by sorting us into filter bubbles.
Evidence from simulations
Hidden Incentives for Auto-Induced Distributional Shift
Recommender systems are most often optimized for predictive accuracy on observable user behaviors like clicks or likes. This could create a dangerous incentive for the algorithm to change user behavior, in particular by making users more predictable.
These concerns are often dismissed for the common case of supervised offline training (in contrast to online RL training, where the incentive to change user behavior is undeniable). When a system is trained offline on behavioral data, with no access to real users, it can’t learn to change user behavior - in the language of this paper, the incentive to modify users is hidden from the system. However, it turns out these hidden incentives can suddenly become revealed (i.e. the system starts shaping user preferences to be more predictable) when we introduce a more powerful learning algorithm (in this case an evolutionary meta-learning algorithm).
The paper demonstrates this effect in three simulated environments, but does it ever occur in practice? It will certainly become more of a problem as meta-learning becomes widely used. Even now, you could think of the ML engineers tuning supervised learning systems as implementing a meta-learning feedback loop: if a change makes users more predictable, metrics go up, which encourages similar changes in the future.
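To make the worry concrete, here is a minimal Python sketch of that outer loop - the user model and all numbers are entirely hypothetical, not from the paper. The point is just that a selection process comparing variants on predictive accuracy alone can end up favoring variants that make users more predictable:

```python
import numpy as np

rng = np.random.default_rng(0)

def predictive_accuracy(influence):
    # Hypothetical user model: the more a system variant nudges users
    # toward a narrow set of items, the more predictable their clicks,
    # so a fixed predictor scores higher accuracy.
    return float(np.clip(0.5 + influence, 0.5, 0.99))

# Outer "meta" loop (engineers A/B-testing changes, or an evolutionary
# meta-learner): keep whichever variant scores better on accuracy.
# No variant is ever instructed to change users; selection pressure
# rewards the ones that do anyway.
best_influence = 0.0
for step in range(50):
    candidate = float(np.clip(best_influence + rng.normal(0, 0.05), 0.0, 0.49))
    if predictive_accuracy(candidate) > predictive_accuracy(best_influence):
        best_influence = candidate

print(f"selected user-influence level: {best_influence:.2f}")
```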
Degenerate Feedback Loops in Recommender Systems
This paper is the most rigorous attempt I know of to formalize the notions of “echo chamber” and “filter bubble” mathematically. It introduces a mathematical model of user interests as a dynamical system and formalizes the more general notion of “degeneracy”: user interests diverging arbitrarily far from their initial state. When degeneracy occurs due to user choice, we call it an echo chamber; when it occurs due to recommender system intervention, we call it a filter bubble. The authors run experiments in a simulated recommendation environment to study which techniques can slow the rate of degeneracy. The ones that help are adding random exploration (unsurprisingly), growing the candidate pool of items over time, and using bandit algorithms like Upper Confidence Bound that are forced to actually try recommending new items (unlike e.g. Thompson Sampling).
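As a rough illustration of the dynamics involved, here is a toy simulation - my own drastic simplification, not the paper’s model - where user interest in an item becomes more extreme each time it is recommended, and policies that spread their recommendations more evenly produce less drift from the initial interest vector:

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, horizon, drift = 20, 500, 0.05

def degeneracy(policy):
    # User interest in an item grows more extreme each time the item
    # is recommended ("rich get richer"); we measure how far interests
    # end up from where they started.
    interest = rng.normal(0, 1, n_items)
    initial = interest.copy()
    counts = np.ones(n_items)
    for t in range(1, horizon + 1):
        if policy == "greedy":
            item = int(np.argmax(interest))
        elif policy == "ucb":  # optimism bonus forces trying new items
            item = int(np.argmax(interest + np.sqrt(2 * np.log(t) / counts)))
        else:  # epsilon-greedy random exploration
            item = int(rng.integers(n_items)) if rng.random() < 0.1 else int(np.argmax(interest))
        counts[item] += 1
        interest[item] += drift * np.sign(interest[item])
    return np.linalg.norm(interest - initial)

for policy in ("greedy", "epsilon-greedy", "ucb"):
    print(policy, round(degeneracy(policy), 2))
```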
Evidence from real recommenders
Exploring the filter bubble - A longitudinal study of MovieLens, a collaborative filtering system for movie recommendations, showing that users receive increasingly narrow recommendations over time. This might be one of the earliest demonstrations of the filter bubble effect in practice.
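Narrowing of this kind is typically measured as the content diversity of a user’s recommendation lists over time. Below is one simple way to compute such a metric - a generic sketch with placeholder embeddings, not necessarily the metric used in the study:

```python
import numpy as np

def list_diversity(item_vectors):
    # Mean pairwise Euclidean distance between item feature vectors
    # (e.g. genre or tag embeddings). If this shrinks across a user's
    # successive recommendation lists, their recommendations are narrowing.
    v = np.asarray(item_vectors, dtype=float)
    diffs = v[:, None, :] - v[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    n = len(v)
    return dists.sum() / (n * (n - 1))  # average over pairs, excluding self-distances

# Hypothetical usage: a diverse early list vs. a nearly-identical late list.
rng = np.random.default_rng(3)
early = rng.random((10, 16))                       # placeholder item embeddings
late = early[:1] + 0.05 * rng.random((10, 16))     # ten near-copies of one item
print(round(list_diversity(early), 3), round(list_diversity(late), 3))
```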
Facebook news and (de)polarization: reinforcing spirals in the 2016 US election
This paper applies a Reinforcing Spirals Model (RSM) to users’ engagement with political content on Facebook and its relationship to affective polarization. The authors conclude that engagement with political content on Facebook, surprisingly, had a de-polarizing effect. However, this effect was observed only for a specific sub-group: voters who consumed relatively little Facebook news early in the campaign and held weaker partisan positions to begin with. The authors note that these users may in fact have been seeking out less news on Facebook relative to other social content, so relatively more of their interactions with news content may have been surfaced by recommender systems. Because the paper applies several social science frameworks that were previously unfamiliar to me, I would want to research the methods used before drawing any conclusions about the size of this effect. Notably, though, Algorithmic Extremism: Examining YouTube's Rabbit Hole of Radicalization finds evidence for a similar de-polarizing effect in YouTube recommendations.
Related Reading
Deconvolving Feedback Loops in Recommender Systems
This paper attempts to “undo” the effect of recommender system influence by predicting which entries of a collaborative filtering matrix - that is, which ratings given by users to items on a content platform (movies on Netflix, products on Amazon, songs on Spotify) - were observed because of the recommender system’s intervention, as opposed to being generated organically by users.
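I won’t reproduce the paper’s actual deconvolution here, but a crude heuristic in the same spirit might look like this - my own illustration, not the authors’ algorithm. It scores each observed rating by how well an item-item similarity recommender would have predicted it, on the theory that highly predictable ratings are more likely to be feedback-induced:

```python
import numpy as np

def recommender_induced_scores(R):
    # Illustrative heuristic only, not the paper's method: build an
    # item-item cosine-similarity recommender from the ratings matrix,
    # then flag observed ratings that the recommender reconstructs
    # well as more likely to be feedback-induced.
    R = np.asarray(R, dtype=float)
    norms = np.linalg.norm(R, axis=0, keepdims=True) + 1e-9
    S = (R / norms).T @ (R / norms)    # item-item cosine similarity
    np.fill_diagonal(S, 0.0)           # don't let an item predict itself
    pred = R @ S / (np.abs(S).sum(axis=0, keepdims=True) + 1e-9)
    observed = R != 0
    return np.where(observed, 1.0 / (1.0 + np.abs(R - pred)), 0.0)

# Toy usage: 4 users x 3 items, 0 = unrated. Higher score = more
# "predictable" and hence more suspect of being recommender-induced.
R = np.array([[5, 4, 0],
              [4, 5, 1],
              [5, 5, 0],
              [1, 0, 5]])
print(recommender_induced_scores(R).round(2))
```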
Polarization on Social Media: When Group Dynamics Leads to Societal Divides describes an agent-based model of how the design of social media systems can lead to polarization and radicalization.
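In the same vein, here is a minimal agent-based sketch - a hypothetical toy, not the paper’s model - of how an engagement-ranked feed that surfaces extreme same-side content can pull opinions toward the poles, while a random feed lets them converge:

```python
import numpy as np

rng = np.random.default_rng(2)
n_agents, steps = 100, 5000

def extremity(engagement_feed):
    # Opinions live in [-1, 1]; each step one agent sees one post and
    # moves 5% of the way toward its author's opinion.
    opinions = rng.uniform(-1, 1, n_agents)
    for _ in range(steps):
        i = rng.integers(n_agents)
        same_side = np.flatnonzero(np.sign(opinions) == np.sign(opinions[i]))
        if engagement_feed and len(same_side) > 1:
            # engagement-ranked feed: extreme same-side posts win the ranking
            j = same_side[np.argmax(np.abs(opinions[same_side]))]
        else:
            j = rng.integers(n_agents)  # random feed
        opinions[i] += 0.05 * (opinions[j] - opinions[i])
    return np.abs(opinions).mean()  # mean extremity as a polarization proxy

print("random feed:", round(extremity(False), 2))
print("engagement feed:", round(extremity(True), 2))
```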
In Other News
Credible Neutrality As A Guiding Principle
In a recent essay, Vitalik Buterin describes “Credible Neutrality”, a set of principles for system design that enhance user trust and lend legitimacy to a mechanism’s outputs (where a recommender system can be considered a mechanism converting user inputs into recommendations). I found his four pillars of credible neutrality a really useful framework for thinking about what makes the output of systems legitimate in the eyes of the humans they help.
Written by Ivan Vendrov and Chris Painter. Please send us any interesting links related to recommender alignment, and see you next week!