I.
“Our algorithms exploit the human brain’s attraction to divisiveness,” read a slide from a 2018 presentation. “If left unchecked,” it warned, Facebook would feed users “more and more divisive content in an effort to gain user attention & increase time on the platform.” […]
Fixing the polarization problem would be difficult, requiring Facebook to rethink some of its core products. Most notably, the project forced Facebook to consider how it prioritized “user engagement”—a metric involving time spent, likes, shares and comments that for years had been the lodestar of its system.
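To make that metric concrete, here is a deliberately toy sketch of what ranking a feed by engagement looks like. The field names and weights below are invented for illustration; they are not Facebook's actual model or formula.

```python
# A deliberately toy example: these field names and weights are invented
# for illustration and are not Facebook's actual model or formula.
from dataclasses import dataclass

@dataclass
class Post:
    predicted_seconds_viewed: float
    predicted_likes: float
    predicted_shares: float
    predicted_comments: float

def engagement_score(post: Post) -> float:
    """Combine time spent, likes, shares, and comments into one number."""
    return (
        0.2 * post.predicted_seconds_viewed
        + 1.0 * post.predicted_likes
        + 3.0 * post.predicted_shares      # shares weighted heavily: they drive reach
        + 2.0 * post.predicted_comments
    )

def rank_feed(posts: list[Post]) -> list[Post]:
    # Surface whatever scores highest, with no regard for why it engages.
    return sorted(posts, key=engagement_score, reverse=True)
```

The point is only that a ranker like this is indifferent to why a post earns comments and shares; outrage and warmth count the same, which is the dynamic the 2018 slide describes.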
The first thing to say is that “polarization” can mean a lot of things, and that can make the discussion about Facebook’s contribution to the problem difficult. You can use it in a narrow sense to talk about the way that a news feed full of partisan sentiment could divide the country. But you could also use it as an umbrella term to talk about initiatives related to what Facebook and other social networks have lately taken to calling “platform integrity” — removing hate speech, for example, or labeling misinformation.
The second thing to say about “polarization” is that while it has a lot of negative effects, it’s worth thinking about what your proposed alternative to it would be. Is it national unity? One-party rule? Or just everyone being more polite to one another? The question gets at the challenge of “fighting” polarization if you’re a tech company CEO: even if you see it as an enemy, it’s not clear what metric you would rally your company around to deal with it.
Anyway, Facebook reacted to the Journal report with significant frustration. Guy Rosen, who oversees these efforts, published a blog post on Wednesday laying out some of the steps the company has taken since 2016 to fight “polarization” — here used in that umbrella-term sense of the word. The steps include shifting the News Feed to include more posts from friends and family than publishers; starting a fact-checking program; more rapidly detecting hate speech and other malicious content using machine-learning systems and an expanded content moderation workforce; and removing groups that violate Facebook policies from algorithmic recommendations.
Rosen writes:
We’ve taken a number of important steps to reduce the amount of content that could drive polarization on our platform, sometimes at the expense of revenues. This job won’t ever be complete because at the end of the day, online discourse is an extension of society and ours is highly polarized. But it is our job to reduce polarization’s impact on how people experience our products. We are committed to doing just that.
Among the reasons the company was frustrated with the story, according to an internal Workplace post I saw, is that Facebook had spent “several months” talking with the Journal reporters about their findings. The company gave them a variety of executives to speak with on and off the record, including Joel Kaplan, its vice president of global public policy, who often pops up in stories like this to complain that some action might disproportionately hurt conservatives.
In any case, there are two things I think are worth mentioning about this story and Facebook’s response to it. One is an internal tension in the way Facebook thinks about polarization. And the other is my worry that asking Facebook to solve for divisiveness could distract from the related but distinct issues around the viral promotion of conspiracies, misinformation, and hate speech.
First, that internal tension. On one hand, the initiatives Rosen describes to fight polarization are all real. Facebook has invested significantly in platform integrity over the past several years. And, as some Facebook employees told me yesterday, there are good reasons not to implement every suggestion a team brings you. Some might be less effective than other efforts that were implemented, for example, or they might have unintended negative consequences. Clearly some employees on the team feel like most of their ideas weren’t used, or were watered down, including employees I’ve spoken with myself over the years. But that’s true of a lot of teams at a lot of companies, and it doesn’t mean that all their efforts were for nought.
On the other hand, Facebook executives largely reject the idea that the platform is polarizing in the tearing-the-country-apart sense of the word. The C-suite closely read a working paper, which my colleague Ezra Klein wrote about earlier this year, that casts doubt on social networks’ contribution to the problem. The paper by Levi Boxell, Matthew Gentzkow, and Jesse Shapiro studies what is known as “affective polarization,” which Klein defines as “the difference between how warmly people view the political party they favor and the political party they oppose.” They found that affective polarization had increased faster in the United States than anywhere else — but that in several large, modernized nations with high internet usage, polarization was actually decreasing. Klein wrote:
One theory this lets us reject is that polarization is a byproduct of internet penetration or digital media usage. Internet usage has risen fastest in countries with falling polarization, and much of the run-up in US polarization predates digital media and is concentrated among older populations with more analogue news habits.
So here you have a case where Facebook can be “right” in a platform integrity sense — look at all these anti-polarization initiatives! — while the Journal is right in a larger one: Facebook has been designed as a place for open discussion, and human nature ensures that those discussions will often be heated and polarizing, and the company has chosen to take a relatively light touch in managing the debates. And it does so because executives think the world benefits from raucous, few-holds-barred discussions, and because they aren’t persuaded that those discussions are tearing countries apart.
Where Facebook can’t wriggle off the hook, I think, is in the Journal’s revelation of just how important its algorithmic choices have been in the spread of polarizing speech. Again, the problem here isn’t “polarization” in the abstract; it’s the concrete harms related to anti-science, conspiracy, and hate groups, which grow using Facebook’s tools. The company often suggests that its embrace of free speech has created a neutral platform, when in fact its design choices often reward division with greater distribution.
This is the part of the Journal’s report that I found most compelling:
The high number of extremist groups was concerning, the presentation says. Worse was Facebook’s realization that its algorithms were responsible for their growth. The 2016 presentation states that “64% of all extremist group joins are due to our recommendation tools” and that most of the activity came from the platform’s “Groups You Should Join” and “Discover” algorithms: “Our recommendation systems grow the problem.”
Facebook says that extremist groups are no longer recommended. But just today, the disinformation researcher Nina Jankowicz joined an “alternative health” group on Facebook and immediately saw recommendations that she join other groups related to white supremacy, anti-vaccine activism, and QAnon.
Ultimately, despite its efforts so far, Facebook continues to unwittingly recruit followers for bad actors, who use it to spread hate speech and misinformation detrimental to public health. The good news is that the company has teams working on those problems, and surely will develop new solutions over time. The question raised by the Journal is, when that happens, how closely those teams’ bosses will listen to them.
II.
I could spend a lot of time here speculating about the coming battle between social networks and the Republican establishment, with Silicon Valley’s struggling efforts to moderate its unwieldy platforms going head-to-head with Republicans’ bad-faith attempts to portray them as politically biased. But the past few years have taught us that while Congress is happy to kick and scream about the failures of tech platforms, it remains loath to actually regulate them.
The president has never followed through on his threats and used his considerable powers to place legal limits on how these companies operate. His fights with the tech companies last just long enough to generate headlines, but flame out before they can make a meaningful policy impact. And despite the wave of conservative anger currently raining down on Twitter, there’s no reason to think this one will be any different.
Those flameouts are most tangible in the courts. On the same day as Trump’s tweets, the US Court of Appeals in Washington ruled against the nonprofit group Freedom Watch and fringe right figure Laura Loomer in a case alleging that Facebook, Google, and Twitter conspired to suppress conservative content online, according to Bloomberg. Whether it be Loomer or Rep. Tulsi Gabbard (D-HI) fighting the bias battle, the courts have yet to rule in their favor.
In fact, as former Twitter spokesman Nu Wexler noted, Trump has even less leverage over Twitter than he does over other tech companies. “Twitter don’t sell political ads, they’re not big enough for an antitrust threat, and he’s clearly hooked on the platform,” Wexler tweeted. And whatever Trump may think, as the law professor Kate Klonick noted, “The First Amendment protects Twitter from Trump. The First Amendment doesn’t protect Trump from Twitter.”
Facts and logic aside, get ready: you’re about to hear a lot more cries from people complaining that they have been censored by Twitter. And it will be all over Twitter.