
Facebook's product teams try to regulate themselves

April 11 · Issue #314
The Interface
Earlier this month, Facebook undertook an effort to recast the debate around the regulation of big tech companies on its own terms. Mark Zuckerberg wrote an op-ed; Sheryl Sandberg published a blog post; and their deputies gave interviews to outlets read closely by policymakers. The overall effect was of a company that, after two years on the defensive, is organizing around a core set of principles to advocate for: principles that would let it continue operating basically as is.
This week, we saw the second plank of Facebook’s strategy: self-regulation from its product teams. In a meeting with reporters in Menlo Park, myself included, the company announced a series of product updates organized around what the company calls “integrity.” The announcements touched most of Facebook’s biggest products: the News Feed, groups, stories, Messenger, and Instagram. (WhatsApp was a notable exception.) Collectively, the moves seek to strike a better balance between freedom of speech and the harms that come with it. And also, of course, to signal to lawmakers that the company is capable of regulating itself effectively.
Facebook says its strategy for problematic content has three parts: removing it, reducing it, and informing people about the actions that it’s taking. Its most interesting announcements on Wednesday were around reducing: moves that limit the viral promotion of some of the worst stuff on the platform.
“Click-Gap,” for example, is a new signal that attempts to identify sites that are popular on Facebook but not on the rest of the web — a sign that they may be gaming the system somehow. Sites with a large Click-Gap will be ranked much lower in the News Feed. As Emily Dreyfuss and Issie Lapowsky describe it in Wired:
Click-Gap could be bad news for fringe sites that optimize their content to go viral on Facebook. Some of the most popular stories on Facebook come not from mainstream sites that also get lots of traffic from search or directly, but rather from small domains specifically designed to appeal to Facebook’s algorithms.
Experts like Jonathan Albright, director of the Digital Forensics Initiative at Columbia University’s Tow Center for Digital Journalism, have mapped out how social networks, including Facebook and YouTube, acted as amplification services for websites that would otherwise receive little attention online, allowing them to spread propaganda during the 2016 election.
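The basic idea behind the signal is simple to sketch. Facebook has not published how Click-Gap is actually computed; the function names, the ratio, and the threshold below are invented purely to illustrate the concept of comparing a domain's Facebook traffic to its traffic from the rest of the web:

```python
# Purely illustrative sketch of a "click gap"-style signal.
# Facebook has not disclosed its formula; every name and
# threshold here is a hypothetical stand-in.

def click_gap_score(facebook_clicks: int, web_wide_clicks: int) -> float:
    """Ratio of a domain's Facebook clicks to its clicks from the
    rest of the web; a higher score means more Facebook-dependent."""
    return facebook_clicks / max(web_wide_clicks, 1)

def demote_in_feed(score: float, threshold: float = 10.0) -> bool:
    """Flag a domain for down-ranking when nearly all of its clicks
    come from Facebook rather than from search or direct visits."""
    return score > threshold

# A fringe site with 50,000 Facebook clicks but only 800 elsewhere
# would be flagged; a site with balanced traffic would not.
print(demote_in_feed(click_gap_score(50_000, 800)))   # True
print(demote_in_feed(click_gap_score(5_000, 4_000)))  # False
```

The appeal of a signal like this is that it requires no judgment about a site's content: it looks only at the shape of its traffic.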
Another move aimed at reducing harm on Facebook involves cracking down on groups that become hubs for misinformation. As Jake Kastrenakes writes in The Verge:
Groups that “repeatedly share misinformation” will now be distributed to fewer people in the News Feed. That’s an important change, as it was frequently group pages that were used to distribute propaganda and misinformation around the 2016 US elections.
Facebook will also soon give moderators a better view of the bad posts in their groups. “In the coming weeks,” it said, it will introduce a feature called Group Quality which collects all of the flagged and removed posts in a group in one place for moderators to look at. It will also have a section for false news, Facebook said, and the company plans to take into account moderator actions on these posts when determining whether to remove a group.
I like these moves: they take away “freedom of reach” from anti-vaccine zealots and other folks looking to cultivate troll armies by hijacking Facebook’s viral machinery. There are a lot of other common-sense changes in yesterday’s fine print: allowing moderators to turn posting permissions on and off for individual group members, for example; and bringing Facebook verified badges to Messenger, which should cut down on the number of fake Mark Zuckerbergs scamming poor rubes out of their money.
Still, I can’t shake the feeling that all these moves are a bit … incremental. They’re fine, so far as they go. But how will we know that they’re working? What does “working” even mean in this context?
As Facebook has worked to right its ship since 2016, it has frequently fallen back on the line that while it’s “making progress,” it “still has a long way to go.” You can accept these statements as being true and still wonder what they mean in practice. When it comes to reducing the growth of anti-vaccine groups, for example, or groups that harass the survivors of the Sandy Hook shooting, how much more “progress” is needed? How far along are we? What is the goal line we’re expecting Facebook and the other tech platforms to move past?
Elsewhere, Mark Bergen and Lucas Shaw report that YouTube is wrangling with a similar set of questions. Would the company’s own problems with promoting harmful videos diminish if it focused on a different set of metrics? YouTube is actively exploring the idea:
The Google division introduced two new internal metrics in the past two years for gauging how well videos are performing, according to people familiar with the company’s plans. One tracks the total time people spend on YouTube, including comments they post and read (not just the clips they watch). The other is a measurement called “quality watch time,” a squishier statistic with a noble goal: To spot content that achieves something more constructive than just keeping users glued to their phones. 
The changes are supposed to reward videos that are more palatable to advertisers and the broader public, and help YouTube ward off criticism that its service is addictive and socially corrosive.
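The two metrics Bloomberg describes can be sketched as follows. YouTube has not disclosed how either one is actually calculated; the session fields and the per-video quality weight below are invented for illustration only:

```python
# Hypothetical sketch of the two internal metrics Bloomberg describes.
# YouTube has not published how either is computed; these field names
# and weights are assumptions made purely for illustration.

from dataclasses import dataclass

@dataclass
class Session:
    watch_seconds: float    # time spent watching clips
    comment_seconds: float  # time reading and posting comments
    quality_weight: float   # hypothetical 0.0-1.0 "quality" rating

def total_time_on_youtube(sessions: list[Session]) -> float:
    """Total time on the platform, not just the clips watched."""
    return sum(s.watch_seconds + s.comment_seconds for s in sessions)

def quality_watch_time(sessions: list[Session]) -> float:
    """Watch time discounted by a per-video quality rating, so
    'constructive' content counts for more than mere stickiness."""
    return sum(s.watch_seconds * s.quality_weight for s in sessions)
```

Note how the second metric depends entirely on the squishy `quality_weight` input: however the rating is produced, the hard part is defining quality, not doing the arithmetic.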
But two years on, it’s unclear that the new metrics have been of much help in that regard. When platforms reach planetary scale, individual changes like these have a limited effect. And as long as Facebook and YouTube struggle to articulate the destination they’re aiming for, there’s continuing reason to doubt that they’ll get there.

Apple Faces Dutch Antitrust Probe Over Favoring Its Own Apps
A Day After Facebook Banned Canadian White Nationalists, Some Found Their Way Back
Jeff Bezos to meet with federal prosecutors on extortion and hacking claims
More than 3,500 Amazon Employees Urge Bold Action on Climate Change
Why does Twitter link The New York Times to the phrase ‘Enemy of the People’?
P&G Is Putting Ad Platforms Like Facebook and Google on Notice
Is Anyone Listening to You on Alexa? A Global Team Reviews Audio
The next generation of photo booths have their sights set on you
Texting Is Out. Spontaneous FaceTime Is In.
Introducing LinkedIn Reactions: More Ways to Express Yourself
Twitter shakes up its experimental twttr app with new swipe gestures for engaging with tweets
The Privacy Project
Bob Iger, Disney CEO, slams 'vile' public discourse: 'Hitler would have loved social media'
Why Won’t Twitter Help Us Delete Our Tweets?
And finally ...
The New KFC Colonel Is a Computer-Generated Instagram Influencer
Talk to me
Send me tips, comments, questions, and your incremental Facebook improvements:
If you don't want these updates anymore, please unsubscribe here.
If you were forwarded this newsletter and you like it, you can subscribe here.
Powered by Revue