Optimizing for What? Algorithmic Amplification and Society
On April 28-29, 2023, the Knight Institute will host a symposium to explore how online amplification works and to consider interventions that would mitigate some of the harms caused by amplification, or allow us to take fuller advantage of its benefits. The symposium, “Optimizing for What? Algorithmic Amplification and Society,” is a collaboration between the Knight Institute and the Institute’s Visiting Senior Research Scientist Arvind Narayanan. It will take place in person at Columbia University and online.
In-person guests should be prepared to show proof of vaccination upon entry.
Event Materials
Program
Algorithmic Amplification and Society blog posts
Understanding Social Media Recommendation Algorithms
Understanding Social Media Recommendation Algorithms: A discussion guide
Visualizing Virality
Schedule
Friday April 28
Alfred Lerner Hall, Roone Arledge Cinema, or Online
2920 Broadway, New York, NY 10027
9:00AM – 9:10AM EDT
Welcome
Jameel Jaffer, Knight First Amendment Institute at Columbia University
9:10AM – 9:40AM EDT
Keynote and conversation
Alondra Nelson, Institute for Advanced Study
Jameel Jaffer, Knight First Amendment Institute at Columbia University
9:40AM – 10:50AM EDT
Panel 1: Level setting
This panel will set the stage by discussing how platforms and platform algorithms work, laying out the issues at stake, reviewing recent developments, and looking at the legal questions relevant to possible reform options.
Panelists
Tarleton Gillespie, Microsoft Research New England
Daphne Keller, Stanford University
Tomo Lazovich, Northeastern University
Moderator
Arvind Narayanan, Princeton University and the Knight First Amendment Institute at Columbia University
Do Not Recommend? Reduction as a Form of Content Moderation
Tarleton Gillespie
Public debate about content moderation has overwhelmingly focused on removal: social media platforms deleting content and suspending users, or opting not to do so. However, removal is not the only available remedy. Reducing the visibility of problematic content is becoming a commonplace element of platform governance. Platforms use machine learning classifiers to identify content they judge misleading enough, risky enough, or offensive enough that, while it does not warrant removal under the site guidelines, it warrants demotion in algorithmic rankings and recommendations. In this essay, I document this shift and explain how reduction works. I then raise questions about what it means to use recommendation as a means of content moderation.
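To make the mechanism concrete, the sketch below shows how a classifier-driven demotion rule might plug into feed ranking. It is a minimal illustration, not a description of any platform's actual system; the function names, threshold, and penalty values are invented.

```python
# Minimal sketch of "reduction": demoting, rather than removing, content
# a classifier flags as borderline. All names and numbers are hypothetical.

def rank_with_reduction(candidates, borderline_score, threshold=0.7, penalty=0.5):
    """Re-rank candidate posts, down-weighting borderline content.

    candidates: list of (post_id, engagement_score) pairs.
    borderline_score: function mapping post_id to a classifier probability
        in [0, 1] that the post is misleading, risky, or offensive.
    """
    def adjusted(post):
        post_id, engagement = post
        if borderline_score(post_id) >= threshold:
            return engagement * penalty  # demote instead of delete
        return engagement

    return sorted(candidates, key=adjusted, reverse=True)
```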
The Myth of “The Algorithm”: A system-level view of algorithmic amplification
Kristian Lum and Tomo Lazovich
As people consume more content delivered by recommender systems, it has become increasingly important to understand how content is amplified by these recommendations. Much of the recent work to study algorithmic amplification implicitly assumes that “the algorithm” is a single machine learning model acting on an immutable corpus of content to be recommended. Additionally, there is an inherent assumption of a neutral “nonalgorithmic” baseline against which to compare. In actuality, there are several other components of the system that are not traditionally considered part of “the algorithm” that influence what ends up on a user’s content feed and potentially corrupt the neutrality of any baseline measurement: upstream editorial policies or decisions that determine what content is eligible to be ranked by the algorithmic recommender system, including NSFW and toxicity filtering; peripheral models that shape the evolution of the social graph, such as account recommendation models; and explicit user preferences and behaviors. All of these components affect what ultimately gets amplified and can confound how we measure amplification.
Our proposed paper has three aims. First, we will enumerate some of these components that influence algorithmic amplification. Second, we will explore how the assumption of a “neutral” baseline that was not shaped by prior behavior of these components, particularly the “reverse chronological” content feed, can lead to poor measurement of amplification. Third, we will suggest some paths forward for measurement and mitigation that address the same concerns that underlie the recent discourse around algorithmic amplification but do not rely on the existence of a neutral baseline. We hope this work will be a call to action to the research community to consider otherwise overlooked areas that greatly influence how content is amplified on social platforms, and we see this workshop as an opportunity to gather input from the community on these areas.
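The pipeline sketch below illustrates this system-level view: the ranking model is only one of several stages that shape what a user sees. The stage names and data structures are hypothetical, not a description of any real platform's architecture.

```python
# Hypothetical feed pipeline: "the algorithm" (the ranking model) is only
# one stage among several that shape amplification.

def build_feed(user_id, corpus, follows, muted, is_eligible, rank):
    """corpus: list of dicts like {"id": ..., "author": ...}.
    follows / muted: sets of author ids, themselves shaped by peripheral
        models such as account recommendation.
    is_eligible: upstream editorial policy (e.g., NSFW/toxicity filtering).
    rank: the ranking model usually called "the algorithm".
    """
    eligible = [p for p in corpus if is_eligible(p)]                  # editorial policy
    candidates = [p for p in eligible if p["author"] in follows]      # graph-shaped sourcing
    candidates = [p for p in candidates if p["author"] not in muted]  # user preferences
    return rank(user_id, candidates)                                  # only now, "the algorithm"
```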
Amplification and Its Discontents
Daphne Keller
There is a popular line of reasoning in platform regulation discussions today that argues, “Platforms aren’t responsible for what their users say, but they are responsible for what the platforms themselves choose to amplify.” This provides a seemingly simple hook for regulating algorithmic amplification. However, for lawyers or policymakers trying to set rules for disinformation, hate speech, and other harmful or illegal content online, focusing on amplification won’t make life any easier. It may increase, rather than decrease, the number of problems to be solved before arriving at well-crafted regulation. Models for regulating amplification have a great deal in common with the more familiar models from intermediary liability law, which defines platforms’ responsibility for content posted by users. As with ordinary intermediary liability laws, the biggest questions may be practical: Who defines the rules for online speech, who enforces them, what incentives do they have, and what outcomes should we expect as a result? And as with those laws, some of the most important considerations—and, ultimately, limits on Congress’s power—come from the First Amendment.
In this essay, I will lay out why “regulating amplification” to restrict distribution of harmful or illegal content is hard. My goal in doing so is to keep smart people from wasting their time devising bad laws, and speed the day when we can figure out good ones. I will draw in part on novel regulatory models that are more developed in Europe. My analysis, though, will primarily use U.S. First Amendment law. I will conclude that many models for regulating amplification face serious constitutional hurdles, but that a few—grounded in content-neutral goals, including privacy or competition—may offer paths forward.
10:50AM – 11:05AM EDT
Break
11:05AM – 12:35PM EDT
Panel 2: Audits
The panel will consider how algorithmic recommendations affect what real users see on social media, with deep dives into Twitter and YouTube. Panelists will discuss how platform design affects content creators and talk about research methods and ways to enable more audit research.
Panelists
Fabian Baumann, Max Planck Institute for Human Development
William J. Brady, Northwestern University
Smitha Milli, Cornell Tech
Inioluwa Deborah Raji, University of California, Berkeley
Moderator
Laura Edelson, New York University
Field Experiments on the Impact of Algorithmic Curation on Content Consumption Behavior
Fabian Baumann and Philipp Lorenz-Spreen
Algorithms like search engines or recommender systems have the potential to decisively influence the drivers of cultural evolution, e.g., in information search and sharing. This influence is particularly evident on social media platforms, where direct peer-to-peer communication is typically mediated by algorithmically curated feeds that are optimized for engagement and that provide users with personalized content. Previously, algorithmic personalization on social media has been studied through a political lens focusing on the political content that users get recommended. Here we will take a more general perspective and focus on how recommended content (i.e., exposure) and user behavior (i.e., engagement) interplay across the broader cultural spectrum. The empirical study of the impact of algorithmic recommendations on culture poses fundamental challenges that connect to the classical trade-off between ecological validity and experimental control: When studied with observational data, the inherent coupling of algorithms and human behavior is impossible to disentangle, and when studied in the lab, the encountered content is often artificial and does not mimic realistic exposure.
Here, we will outline an approach to strike a good balance between the two, namely with field experiments on existing social media platforms. As one example, we present an experimental paradigm that uses Twitter’s feature for switching between the algorithmic (“For You”) and chronological feeds as an experimental manipulation. Screen recordings can be used to measure the resulting exposure to content, and Twitter’s Application Programming Interface (API) can be used to monitor subsequent user behavior. While users in “reverse-chron” mode only experience their friends’ activity, the “For You” feed adds personalized and presumably more engaging content from outside their social circle. After incentivizing participants to switch to one or the other feed and record their screen while browsing Twitter, we quantify the change in their behavior along various cultural dimensions. For instance, we can examine how users’ content exposure changes, whether users’ engagement becomes more passive or active, and whether the distribution of content becomes more heterogeneous. Our empirical results will help provide an ecologically valid measure of the causal effect of algorithmic curation on the statistical distribution of consumed and produced content.
Algorithm-Mediated Social Learning in Online Social Networks
William J. Brady, Joshua Conrad Jackson, Björn Lindström, and M.J. Crockett
Humans rely heavily on social learning to navigate the social and physical world. For the first time in history, we are interacting in online social networks where content algorithms filter social information, yet little is known about how these algorithms influence our social learning. In this review, we synthesize emerging insights into this “algorithm-mediated social learning” and propose a framework that examines its consequences in terms of functional misalignment. We argue that the functions of human social learning and the goals of content algorithms are misaligned in practice. Algorithms exploit basic human social learning biases (i.e., a bias toward prestigious, in-group, moral, and emotional information, or PRIME information) as a side effect of their goals to sustain attention and maximize engagement on platforms. Social learning biases function to promote adaptive behaviors that foster cooperation and collective problem-solving. However, when social learning biases are exploited by algorithms, PRIME information becomes amplified in the digital social environment in ways that can stimulate conflict and spread misinformation. We show how this problem is ultimately driven by human-algorithm interactions where observational and reinforcement learning exacerbate algorithmic amplification, and how it may even escalate to impact cultural evolution. Finally, we discuss practical solutions for reducing functional misalignment in human-algorithm interactions via strategies that help algorithms promote more diverse and contextually sensitive information environments.
Twitter’s Algorithm: Amplifying Anger, Animosity, and Affective Polarization
Smitha Milli, Micah Carroll, Sashrika Pandey, Yike Wang, and Anca Dragan
As social media continues to have a significant influence on public opinion, understanding the impact of the machine learning algorithms that filter and curate content is crucial. However, existing studies have yielded inconsistent results, potentially due to limitations such as reliance on observational methods, use of simulated rather than real users, restriction to specific types of content, or internal access requirements that may create conflicts of interest. To overcome these issues, we conducted a pre-registered controlled experiment on Twitter's algorithm without internal access. The key to our design was to simultaneously collect, for a large group of active Twitter users, (a) the tweets the personalized algorithm shows, and (b) the tweets the user would have seen if they were just shown the latest tweets from people they follow; we then surveyed users about both sets of tweets in a random order. Our results indicate that the algorithm amplifies emotional content, especially content expressing anger and out-group animosity. Furthermore, reading political tweets from the algorithm leads readers to perceive their political in-group more positively and their political out-group more negatively. Interestingly, while readers say they prefer tweets curated by the algorithm in general, they are less likely to prefer algorithm-selected political tweets. Overall, our study provides important insights into the impact of social media ranking algorithms, with implications for shaping public discourse and democratic engagement.
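The comparison at the heart of this design can be summarized in a few lines. The sketch below is illustrative only: it assumes feed contents have already been collected for each user and that a classifier or survey supplies the property of interest, such as whether a tweet expresses out-group animosity.

```python
# Illustrative paired comparison: for each user, contrast the personalized
# feed with the chronological counterfactual on some measured property.
# Data structures and the has_property function are hypothetical.

def amplification_effect(paired_feeds, has_property):
    """paired_feeds: list of (personalized_tweets, chronological_tweets)
    pairs, one pair per user. Returns the mean per-user difference in the
    proportion of tweets exhibiting the property."""
    def proportion(tweets):
        return sum(has_property(t) for t in tweets) / max(len(tweets), 1)

    diffs = [proportion(personalized) - proportion(chronological)
             for personalized, chronological in paired_feeds]
    return sum(diffs) / len(diffs)
```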
Cycles of Symbol Production on Online Platforms
Inioluwa Deborah Raji, Fernando Diaz, and Irene Lo
While much of the emphasis in the current literature on algorithmic amplification focuses on how online platforms might distort content consumption patterns, less attention has been paid to how such platforms influence content creator actions and outputs.
Existing theoretical models of content producer dynamics on online platforms tend to be limited, anchored to similar narrow assumptions. Notably, past work assumes that online platforms operate mainly as content distributors and that content providers do not interact while competing for the finite attention of consumers. However, on many social media platforms in particular, this is not practically the case—producers effectively operate as just another class of users, interacting and influencing each other directly and indirectly, as well as adaptively updating their content at varying rates of production in response. In this work, we theorize that content creators on social media platforms do not just unilaterally compete but can in fact collude in various instances in order to amplify each other and minimize (rather than maximize) the diversity of the content available on an online platform, disrupting individualized amplification schemes.
We go further to explore the influence of the design space of various online platforms on the cycle of content homogenization and diversification. We find that platform characteristics such as user experience features, content discovery heuristics, and monetization schemes factor heavily into the degree of content creator interactions and collusion; the adaptability and rate of content production; and other creator norms, which in turn determine the length and nature of content homogenization cycles. We hope to support this finding with empirical evidence from TikTok, Facebook and YouTube.
12:35PM – 2:00PM EDT
Lunch
2:00PM – 3:10PM EDT
Panel 3: Normative questions
How do algorithmic platforms distribute attention and shape social relations? How have they influenced the arts? The public square? What makes algorithmic amplification wrongful? What are the moral and political responsibilities of platforms?
Panelists
Annie Dorsen, Independent Artist
Benjamin Laufer, Cornell Tech
Seth Lazar, Australian National University
Moderator
Katy Glenn Bass, Knight First Amendment Institute at Columbia University
The Work of Art in the Age of Digital Commodification: An analysis of the emerging digital political economy of the performing arts
Sam Gill and Annie Dorsen
This paper critically analyzes how digital technology may influence the performing arts. It does this by examining in some detail the conceptual foundations of the digital commodity form, and explores how the unique features and exigencies of digital commodification may influence a range of forms of creative expression—drawing in particular on the discourse surrounding the emergent “creator economy.” The piece argues specifically that the digital commodity form is already having five impacts on the performing arts: (1) eroding the boundaries of art as a professional practice, (2) obliterating the line between creative producer and audience, (3) sublimating the aesthetic productive constraints and choices immanent in preset technologies, (4) replacing legacy gatekeepers with digital operators, and (5) deinstitutionalizing creative labor. As a result of these changes, the paper theorizes that the absorption of the performing arts into commercially driven digital systems will begin to reduce artistic and creative expression to homogenized, interchangeable content that has been shaped before inception by economic imperatives focused on human attention and engagement. The piece further worries that these shifts will corrode and overtake legacy cultural institutions, thereby eliminating any notion of authoritative aesthetic discernment and unleashing an intensification of creative labor similar to that seen in other sectors of the now digitized economy. It concludes with some reflections on generative artificial intelligence and a review of critical questions facing artists and cultural institutions as well as scholars and critics analyzing the rise of digital technology.
What Makes Algorithmic Amplification Wrongful?
Benjamin Laufer and Helen Nissenbaum
Increasingly concerned about the way in which content spreads on the internet, scholars reach for the concept of algorithmic amplification (AA) as both an explanation and a warning. Although these researchers frequently acknowledge the metaphorical and conceptual haziness around the term, they continue to rely on it to carry both descriptive and normative intent. In itself, haziness need not disqualify a concept, except when it hides substantive assumptions with decisive normative implications.
This paper offers foundational work to give AA conceptual precision and normative teeth. First, it resuscitates the historical context around the meaning of “amplify,” a transitive concept from signal processing and system dynamics. It then turns to the normative question: When is AA wrongful? A sound account of what makes amplification problematic is a necessary precondition for discussing what to do about it.
Research has found that AA may bring about negative social impacts including disinformation, bias, and extremism. These problems, tied to content rather than process, are harmful consequences of AA, but they are not constitutive. At the root of wrongful AA is the deterioration of existing trustworthy processes for justification and legitimation. Algorithmic decision-making can disrupt or distort these processes. By shattering long-standing norms crucial for maintaining a common stock of knowledge, AA can undermine democracy. Therefore, we contend that AA is problematic when information is distributed according to processes that were not arrived at through legitimate social deliberation.
Platform-mediated internet communications are particularly prone to wrongful forms of AA, for which platforms ought to be held responsible. Where we believe AA to be wrongful, we will demonstrate the mechanisms causing harm in the cases of climate science communication and vaccination campaigns.
Communicative Justice and the Distribution of Attention
Seth Lazar
I argue, first, that algorithmic intermediaries govern the digital public sphere through their architecture, amplification algorithms, and moderation practices, and that they have a responsibility to do so better. This means more than just enumerating and responding to pathologies such as misinformation, radicalization, and abuse. We also need a positive ideal to aim at. Political philosophy should offer such an ideal, but it tells us only when not to interfere in free speech, not how to shape public communication and distribute attention. In response, I introduce a new theory of communicative justice: an account of the communicative interests that those who govern the digital public sphere should promote, and the democratic egalitarian norms by which their doing so should be constrained. This can guide us in shaping public communication and distributing attention, in balancing the governing responsibilities of private and public actors, and in striving for procedural legitimacy in governance of the digital public sphere.
3:10PM – 3:30PM EDT
Break
3:30PM – 4:50PM EDT
Panel 4: Reform part 1
Panelists will discuss various ideas for reform, including nutrition labels, friction, algorithmic interventions, and decentralized alternatives, with a deep dive into one particular area: how to dampen conflict feedback loops.
Panelists
Luca Belli, Sator Labs and University of California, Berkeley
Brett Frischmann, Villanova University
Ravi Iyer, Psychology of Technology Institute
Yoel Roth, University of California, Berkeley
Moderator
Camille François, Columbia University
What's in an Algorithm? Empowering users through nutrition labels for recommender systems
Luca Belli and Marlena Wisniak
Concerns around algorithmic amplification rightly focus on how these systems operate, particularly the harms they produce when optimized for engagement. Yet to effectively address the adverse impacts of amplification on human rights and civic space, we need a nuanced and commonly agreed upon definition of it.
Amplification is often treated as an innate property of an algorithmic system, perhaps hard-coded by the designers and developers to reflect their own values and (usually profit-driven) objectives. While human input certainly shapes what such models are optimized for, the reality is much more complex, as these sociotechnical systems respond to users’ behavior itself.
A corollary to defining amplification is measuring it. We argue that algorithmic amplification cannot be measured along one dimension only; rather, it is a complex phenomenon that is better understood by bringing multiple metrics together. Meaningful engagement with external stakeholders, especially marginalized groups and those living in the Global South, is urgently needed to map and understand diverse metrics and their limitations. Furthermore, amplification is relative to a baseline: Defining this common baseline is a prerequisite that is often overlooked.
We propose to introduce “nutrition labels” for recommender models. Such a collection of agreed upon metrics could be useful for understanding how these systems operate and ultimately for ensuring that their use protects and promotes human rights. Aiming to spark an inclusive conversation on how to measure amplification—with participation from civil society, academia, policymakers, international organizations, and the private sector—we offer a few suggestions for how such measures could look.
Our research focuses on recommender systems for social media timelines, exploring metrics that would not only be relevant for today’s dominant platforms, but also for alternative and emerging models such as the “Fediverse” and web3 technologies.
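As a purely hypothetical illustration of what such a label might report, the sketch below computes two candidate metrics for a recommended feed against an explicitly stated baseline; the authors argue the actual metric set should be agreed upon with stakeholders.

```python
# Hypothetical "nutrition label" for a recommender: a handful of metrics,
# each reported relative to an explicit baseline feed. Metric choices and
# data structures are illustrative, not a proposed standard.

from collections import Counter

def top_source_share(items):
    """Share of the feed coming from its single most frequent source,
    a rough proxy for concentration of exposure."""
    counts = Counter(item["source"] for item in items)
    return max(counts.values()) / len(items) if items else 0.0

def nutrition_label(recommended, baseline):
    overlap = {i["id"] for i in recommended} & {i["id"] for i in baseline}
    return {
        "top_source_share_recommended": top_source_share(recommended),
        "top_source_share_baseline": top_source_share(baseline),
        "overlap_with_baseline": len(overlap) / max(len(recommended), 1),
    }
```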
How Friction-in-Design Moderates, Amplifies, and Dictates Speech and Conduct
Brett Frischmann and Paul Ohm
Besides algorithmic determinations and human decisions to emphasize one speaker, message, product, or service over another, amplification is often the product of decisions to remove or inject friction in the design of digital interfaces and platforms. Thus, amplification (and prioritization and optimization) should be understood and evaluated in terms of countervailing design decisions regarding different types and degrees of friction. We are interested in optimization and amplification not only for speech and content but also as it increasingly shapes and dictates behavior and conduct.
Our project connects to an emerging literature that considers the roles friction plays in the design of platforms, software, and other technological systems, as a means to protect values such as security, privacy, competition, and consumer protection. We have written some foundational works on friction-in-design.
We consider case studies from social media platform design that highlight the roles friction plays in amplification and optimization. TikTok’s infinite scroll removes friction to increase engagement. WhatsApp’s limits on frequently forwarded messages use friction to reduce virality. Twitter warns users to reconsider retweeting links to unread articles.
Drawing on these case studies, we explore ways to inject friction into techno-social systems to address some of amplification’s potential harms. Regulators might mandate limits on message forwarding or impose “rest stops” in infinite scrolls. Our earlier work analyzes how these approaches comport with the First Amendment. Engineers might be trained to better understand the risks of frictionless design and how, when, and where to inject purposeful friction to address these risks.
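The sketch below renders two of these friction mechanisms in code. The specific limits are invented for illustration; WhatsApp's actual rules, and any regulatory "rest stop" mandate, would differ.

```python
# Illustrative friction-in-design mechanisms. Limits are hypothetical.

FORWARD_LIMIT = 5     # forwards after which a message counts as "frequently forwarded"
REST_STOP_EVERY = 50  # posts scrolled before the feed pauses

def can_forward(times_forwarded, target_chat_count):
    """Friction on virality: heavily forwarded messages can only be sent
    to one chat at a time."""
    if times_forwarded >= FORWARD_LIMIT:
        return target_chat_count <= 1
    return True

def needs_rest_stop(posts_scrolled):
    """Friction on infinite scroll: interrupt the feed periodically."""
    return posts_scrolled > 0 and posts_scrolled % REST_STOP_EVERY == 0
```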
The Algorithmic Management of Polarization and Violence on Social Media
Jonathan Stray, Ravi Iyer, and Helena Puig Larrauri
Social media platforms are involved in all aspects of social life—including in conflict settings. These platforms are not equipped to make complex judgments about conflicts, but their incidental design choices can have profound effects on people within conflict settings. At a minimum, they should not incentivize conflict actors toward more hateful and potentially violence-inducing speech, and they should not enable mass harassment and manipulation. They should provide reasonable affordances for empowering individuals within a conflict setting to keep themselves safe and informed. Evidence suggests these minimum conditions have not been met, though steps have been taken in the right direction. Platforms could be designed to dampen conflict feedback loops and the resulting destructive escalation to polarization and violence. While content moderation has received considerable attention, it will never affect more than a small amount of objectively policy-violating content, and expanding those efforts will only lead to more backtracking, unfair over-enforcement, and controversy. In contrast, every experience of content consumed on social media platforms is influenced by the design of the platform’s user interface and algorithms. Platforms designed for business outcomes are not neutral with regard to conflict-relevant behavior. In this paper, we will discuss evidence for how platforms and their algorithms are currently affecting polarization and violence. We will then make evidence-based suggestions for reforming platform design and suggest next steps for the many things that remain unknown.
4:50PM – 5:00PM EDT
Visualizing Virality
Presenters
Samia Menon, Columbia University
Sahil Patel, Columbia University
Saturday April 29
Faculty House, Presidential Room 2, or Online
64 Morningside Dr, New York, NY 10027
9:30AM – 11:00AM EDT
Panel 5: Empirical look at user behavior
Algorithms learn from users’ behavior, and users rely on algorithm-mediated social learning. What is the nature of the resulting feedback loop? How can platforms empower users to make better informed decisions about potential disinformation? Conversely, what design interfaces can allow users to actively teach platforms their preferences?
Panelists
Jason Burton, Copenhagen Business School and Max Planck Institute for Human Development
Kevin Feng, University of Washington
Benjamin Kaiser, Princeton University
Angela Lai, New York University
Moderator
Mor Naaman, Cornell Tech
Algorithmic Amplification for Collective Intelligence
Jason Burton
The algorithmic amplification of online content is often framed as a danger to be mitigated, with the dominant “engagement-based ranking” approach frequently cited as a cause of divisiveness and sensationalism in public discourse. In recent proof-of-concept studies, however, we show that algorithmic amplification can be designed to promote collective intelligence. Specifically, these studies show—through agent-based simulations and online multiplayer experiments—how systematic relationships between belief distributions and collective accuracy can be leveraged to algorithmically mediate online interactions and reduce error in collective estimations, even when the ground truth is unknown. In this work-in-progress, I expand on these studies by drawing from the literature on “wisdom of the crowd” effects and argumentation theory to design, deploy, and evaluate algorithms that curate content to support deliberation and improve the accuracy of people’s beliefs. In doing so, this work targets both theoretical and practical implications: First, it provides further experimental evidence reaffirming the position that algorithmic amplification can influence the beliefs people form. Second, our findings aim to inform the design of new civic technologies in which algorithmic amplification plays a key role—for example, by proposing new features of deliberation and online behavior to be mined and amplified. Third, and most broadly, this work contributes to the ongoing conceptual discussion of how the design of recommendation systems and online platforms can be modified to better align with the democratic values that the internet once promised.
Teachable Agents for End-User Empowerment in Personalized Feed Curation
Kevin Feng, David McDonald, and Amy X. Zhang
As a small handful of platforms act as social architects—shaping user norms and behaviors through algorithmically curated feeds and interface affordances alike—the risks of curatorial centralization begin to emerge: Top-down, platformwide policies rob users of their sense of agency, fail to account for nuanced experiences, and overall marginalize users and communities with unique values and customs. Prior work has shown that users have attempted to reclaim their agency by deriving “algorithmic folk theories” to probe black-box feed curation algorithms and by “teaching” algorithms to yield more satisfactory content through strategic interactions with their feed. Given this, we ask: How can users’ inherent teaching abilities be more explicitly employed to empower personalized curation and transparent algorithmic customization in online social settings? We draw inspiration from the paradigm of interactive machine teaching and explore user-teachable agents for feed curation. To do this, we first conducted a formative study to understand how users would approach explicitly teaching an algorithmic agent about preferences in their social media feeds, as opposed to the agent implicitly learning them. Based on our findings, we propose in-feed affordances that allow users to execute a teaching loop by 1) explaining content preferences to a learnable agent via example posts, 2) evaluating how effectively the agent has learned, and 3) iteratively formulating a curriculum of teaching goals and examples. We conclude with a discussion of challenges and next steps, with an eye towards how our approach may be used to better align the incentives of users and platforms in sociotechnical systems.
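To ground the teaching loop, here is a deliberately trivial sketch in which a keyword-based learner stands in for the authors' agent. The affordance design, not this learner, is the paper's contribution; every detail below is hypothetical.

```python
# Toy sketch of the three-step teaching loop. The keyword learner is a
# hypothetical stand-in for whatever model a real teachable agent uses.

class TeachableFeedAgent:
    def __init__(self):
        self.liked_words, self.disliked_words = set(), set()

    def teach(self, post_text, liked):
        """Step 1: the user explains a preference via an example post."""
        words = set(post_text.lower().split())
        (self.liked_words if liked else self.disliked_words).update(words)

    def score(self, post_text):
        words = set(post_text.lower().split())
        return len(words & self.liked_words) - len(words & self.disliked_words)

    def evaluate(self, labeled_posts):
        """Step 2: the user checks how well the agent has learned.
        labeled_posts: list of (post_text, liked) pairs."""
        correct = sum((self.score(text) > 0) == liked
                      for text, liked in labeled_posts)
        return correct / len(labeled_posts)

# Step 3: based on the evaluation, the user curates new teaching examples
# and repeats the loop.
```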
It’s the Algorithm: A large-scale comparative field study of news quality interventions
Benjamin Kaiser and Jonathan Mayer
There is a widespread belief, and growing anecdotal evidence, that platforms’ recommendation algorithms can contribute significantly to the spread or suppression of misinformation. But work by platforms and researchers to develop interventions to counter the spread of misinformation has overwhelmingly focused on user-facing, informative interventions like fact checks and content labels. There is little rigorous evidence to answer the question of whether algorithmic interventions may be more effective than informative interventions.
We conducted the first study analyzing both informative and algorithmic misinformation interventions deployed in the ordinary functionality of a major online platform. At large scale and across multiple countries, we compared the effects of informative and algorithmic interventions on user engagement with misinformation. We found that an algorithmic deamplification intervention reduced engagement with misinformation by over half, while informative interventions had statistically insignificant effects on engagement.
Based on our findings, we argue that research priorities should shift from informative interventions to algorithmic interventions, that platforms must be more transparent about what content their algorithms amplify and deamplify, and that research collaborations between platforms and academics—not just data sharing initiatives—are essential to learn how to effectively counter misinformation online.
Echo Chambers, Rabbit Holes, and Algorithmic Bias: How YouTube recommends content to real users
Megan Brown, James Bisbee, Angela Lai, Richard Bonneau, Jonathan Nagler, and Joshua A. Tucker
To what extent does the YouTube recommendation algorithm push users into echo chambers, ideologically biased content, or rabbit holes? Using a novel method to estimate the ideology of YouTube videos and an original experimental design to isolate the effect of the algorithm from user choice, we demonstrate that the YouTube recommendation algorithm does, in fact, push real users into mild ideological echo chambers where, by the end of the data collection task, liberals and conservatives received different distributions of recommendations from each other, though this difference is small. While we find evidence that this difference increases the longer the user follows the recommendation algorithm, we do not find evidence that many users go down rabbit holes leading to ideologically extreme content. Finally, we find that YouTube pushes all users, regardless of ideology, toward moderately conservative content and toward an increasingly narrow range of ideological content the longer they follow its recommendations.
11:00AM – 11:20AM EDT
Break
11:20AM – 12:30PM EDT
Panel 6: Reform part 2
How can platforms go beyond engagement optimization? For example, how can they design recommender systems to bridge political divides? What can we learn from public service media on how to design recommendation engines that reflect cultural values and responsibly curate cultural content?
Panelists
Georgina Born, University College London
Aviv Ovadya, Harvard University
Alessandro Piscopo, BBC Product Group
Moderator
Joe B. Bak-Coleman, Columbia University
A Public Service Media Perspective on the Algorithmic Amplification of Cultural Content
Fernando Diaz and Georgina Born
Streaming entertainment platforms curate cultural content such as music, film, and literature and significantly influence the nature of individual cultural experience. Recommender systems play an important role in this process, basing curatorial decisions on algorithms optimized for objectives such as engagement, retention, and advertising revenue. As a result, multiple studies have demonstrated that some genres or groups of content creators are amplified while others are overlooked. Although these studies describe distortions in the content people consume, they do not provide guidance on what appropriate curation of cultural content might look like. Considering this, we analyze algorithmic amplification specifically in the curation of cultural content, focusing on the disparity between engagement and retention as the goals of recommender systems, on the one hand, and, on the other, normative concerns about what kinds of algorithmic curation of cultural content can be developed to promote cultural experiences oriented to social justice and the public good. For guidance on such normative concerns, we turn to principles underlying public service media (PSM) systems in democratic societies. These principles, refined over decades in the programming of cultural content, expand the desiderata of recommender systems—both commercial and noncommercial—to include values furthering democratic well-being and the cultural and social development of contemporary societies. Building on our recent work developing a metric to measure two PSM principles, commonality and diversity, in recommender systems, we propose a more comprehensive research program toward incorporating such principles into the design of recommender systems for cultural content, inviting the workshop to address how normative goals might transform processes of algorithmic amplification. Our proposed paper is a substantial expansion of our published work on public service media principles and the algorithmic curation of cultural goods (e.g., music, film, and literature). We are eager to share this collaboration with the symposium attendees and receive feedback.
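As a rough illustration of the two PSM principles named above, the sketch below gives one plausible operationalization of each. These formulas are simplified stand-ins, not the metric from the authors' published work.

```python
# Illustrative operationalizations of two PSM principles. Both functions
# and their data structures are hypothetical simplifications.

def commonality(feeds, item_id):
    """Commonality: the fraction of the audience whose recommended feed
    exposed them to a given item (shared cultural experience)."""
    return sum(item_id in feed for feed in feeds) / len(feeds)

def diversity(feed, category_of):
    """Diversity: the number of distinct cultural categories (e.g., genres)
    represented in a single user's feed."""
    return len({category_of(item) for item in feed})
```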
Bridging Systems: Open problems for countering destructive divisiveness in ranking, recommenders, and governance
Aviv Ovadya and Luke Thorburn
Divisiveness appears to be increasing in much of the world, leading to concern about political violence and a decreasing capacity to collaboratively address large-scale societal challenges. In this working paper, we aim to articulate an interdisciplinary research and practice area focused on what we call bridging systems: systems which increase mutual understanding and trust across divides, creating space for productive conflict, deliberation, or cooperation. We give examples of bridging systems across three domains: recommender systems on social media, software for conducting civic forums, and human-facilitated group deliberation. We argue that these examples can be more meaningfully understood as processes for attention-allocation (as opposed to “content distribution” or “amplification”) and develop a corresponding framework to explore similarities—and opportunities for bridging—across these seemingly disparate domains. We focus particularly on the potential of bridging-based ranking to bring the benefits of offline bridging into spaces which are already governed by algorithms. Throughout, we suggest research directions that could improve our capacity to incorporate bridging into a world increasingly mediated by algorithms and artificial intelligence.
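To convey how bridging-based ranking differs from engagement-based ranking, here is a deliberately simple sketch that scores items by their weakest per-group approval rather than by total engagement. It is one illustrative bridging signal, not the authors' proposal.

```python
# Toy bridging-based ranking: reward content approved across a divide,
# not content with the most total engagement. All structures are hypothetical.

def bridging_score(ratings_by_group):
    """ratings_by_group: dict mapping a group label (e.g., a side of a
    political divide) to a list of 0/1 approvals from that group's raters.
    Taking the minimum per-group approval rewards cross-divide appeal."""
    rates = [sum(r) / len(r) for r in ratings_by_group.values() if r]
    return min(rates) if rates else 0.0

def bridging_rank(items):
    """items: list of (item_id, ratings_by_group) pairs."""
    return sorted(items, key=lambda item: bridging_score(item[1]), reverse=True)
```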
Recommenders with Values: Developing recommendation engines in a public service organization
Alessandro Piscopo, Lianne Kerlin, North Kuras, James Fletcher, Calum Wiggins, Anna McGovern, and Megan Stamper
The BBC is the world’s largest public service broadcaster. Every week it reaches more than 80 percent of the U.K.’s adult population, and it reaches 279 million people worldwide. In order to ensure that our audiences get the most engaging experience, our team develops recommender systems which aim to provide users with the most relevant pieces of content among the thousands the BBC publishes every day. All BBC output should serve the organization’s mission to “act in the public interest, serving all audiences through the provision of impartial, high-quality and distinctive output and services which inform, educate, and entertain.” Recommendations are no exception and, since they determine what our audiences see, they are in effect editorial choices at scale. How can we ensure that our recommendations are consistent with our mission and public service values, avoiding some of the harmful effects which might be associated with recommenders? In addressing this question, we identified two main challenges: (i) methodological challenges: Public service values are hard to pin down into a specific metric, so we have no clearly defined optimization function for our recommenders; (ii) cultural/operational challenges: Domain knowledge around public service values sits with our editorial staff, whereas data scientists are the recommendations specialists. We need to create a shared understanding of the problem and a common language to describe objectives and solutions across data science and editorial. Our paper describes the approach we devised to tackle these challenges, presenting a use case from our work on a BBC product and reporting the lessons learned.
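One pragmatic response to challenge (i), sketched below purely as a hypothetical, is to score candidates with a weighted combination of model-estimated relevance and editorially defined value signals. The signal names and weights are invented and are not the BBC's implementation.

```python
# Hypothetical multi-objective scoring for a values-aware recommender.
# Signals and weights are invented for illustration.

WEIGHTS = {"relevance": 0.6, "distinctiveness": 0.2, "informational_value": 0.2}

def public_service_score(item):
    """item: dict with per-signal scores in [0, 1]; relevance might come
    from a model, the other signals from editorial metadata."""
    return sum(weight * item.get(signal, 0.0) for signal, weight in WEIGHTS.items())

def recommend(candidates, k=10):
    """Return the top-k candidates under the combined score."""
    return sorted(candidates, key=public_service_score, reverse=True)[:k]
```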
12:30PM – 1:30PM EDT
Lunch
Speakers
Alondra Nelson
Institute for Advanced Study
Joe B. Bak-Coleman
Columbia University
Katy Glenn Bass
Research Director, Knight Institute
Fabian Baumann
Max Planck Institute for Human Development
Luca Belli
University of California, Berkeley
Georgina Born
University College London
William J. Brady
Northwestern University
Jason W. Burton
Copenhagen Business School and the Max Planck Institute for Human Development
Annie Dorsen
Independent Artist
Laura Edelson
New York University
Kevin Feng
University of Washington
Camille François
Columbia University
Brett Frischmann
Villanova University
Tarleton Gillespie
Microsoft Research New England
Ravi Iyer
Psychology of Technology Institute
Jameel Jaffer
Executive Director, Knight Institute
Benjamin Kaiser
Princeton University
Daphne Keller
Stanford University
Angela Lai
New York University
Benjamin Laufer
Cornell Tech
Seth Lazar
Senior AI Advisor 2024-2025; Australian National University
Kristian Lum
University of Chicago
Samia Menon
Columbia University
Smitha Milli
Cornell Tech
Mor Naaman
Cornell Tech
Arvind Narayanan
Princeton University; Knight Institute Visiting Senior Research Scientist 2022-2023
Aviv Ovadya
Harvard University
Sahil Patel
Columbia University
Alessandro Piscopo
BBC Product Group
Inioluwa Deborah Raji
University of California, Berkeley; Mozilla
Yoel Roth
University of California, Berkeley
Institute Update
An Introduction to My Project: Algorithmic amplification and society
By Arvind Narayanan, November 2, 2022
Institute Update
Knight Institute Symposium on “Algorithmic Amplification” to Feature Leading Scholars and Technologists
February 22, 2023