Thanks to everyone who came out for last night’s sold-out event with Anna Wiener! We can’t wait to meet even more of you at our next Interface Live. Expect details soon!
The institutions that prop up our democracy — that uphold the rule of law — are eroding. That’s one reason why the spread of misinformation, hate speech, and other malicious content on social platforms has felt like such a crisis in the past three years. An enormous chunk of our political discourse takes place on, or is informed by, what we see on Facebook, YouTube, and Twitter. As they’ve grown in size and influence, they’ve all become institutions in their own right. An open question is whether they can have a positive effect in upholding democratic values and the rule of law — or whether they will accelerate the polarization of the electorate until it reaches some awful breaking point.
For that reason, around here we pay attention when the platforms take action to fight disinformation. To be sure, even a perfect information environment doesn’t guarantee a good outcome in a democracy — the case against Trump included multiple smoking guns, and Republican senators simply chose to ignore them — but governance benefits from a shared set of facts. So let’s see what they’re up to.
On Monday, YouTube laid out its policies for handling disinformation. None of the policies are new, but the announcement served as a kind of statement of purpose ahead of the (disastrous!) Iowa caucus. Here's Julia Alexander in The Verge:
When it comes to manipulated videos, YouTube will remove “content that has been technically manipulated or doctored in a way that misleads users (beyond clips taken out of context) and may pose a serious risk of egregious harm.” The company previously took down a video of House Speaker Nancy Pelosi that was manipulated to make her appear intoxicated. […]
Videos that tell people incorrect information about voting, including trying to mislead them by using an incorrect voting date, are also not allowed. Neither are videos that advance “false claims related to the technical eligibility requirements for current political candidates and sitting elected government officials to serve in office.” YouTube will further terminate channels that attempt to impersonate another person or channel, or artificially increase the number of views, likes, and comments on a video.
Tech platforms are generally loath to evaluate claims of truth, particularly those involving politicians, but the big three are all dead set on removing anything that gives the wrong date for an election. Confidence in content moderation! You love to see it.
On Tuesday, Twitter followed suit with some fresh new policies to counter disinformation — specifically, altered photos and videos, or as they are increasingly being called, synthetic media. Beginning in March, Twitter said, it will add labels to or outright remove deepfaked tweets. Here are Davey Alba and Kate Conger in the New York Times:
To determine whether a tweet should be removed or labeled, Twitter said in a blog post, it will apply several tests: Is the media included with a tweet significantly altered or fabricated to mislead? Is it shared in a deceptive manner? In those cases, the tweet will probably get a label.
But if a tweet is “likely to impact public safety or cause serious harm,” it will be taken down. Twitter said it might also show a warning to people before they engaged with a tweet carrying manipulated content, or limit that tweet’s reach.
This is much more difficult than removing tweets that misstate the date of the election. For one thing, it leaves open the question of how Twitter will handle parody and satire. Still, I liked this quote from Yoel Roth, the company’s head of site integrity: “Whether you’re using advanced machine learning tools or just slowing down a video using a 99-cent app on your phone, our focus under this policy is to look at the outcome, not how it was achieved.”
Looking at the outcome is a useful frame for making individual policy decisions. There are lots of terrible pieces of social content that are essentially harmless, because no one sees them. And then there are the small few that go viral and do lots of damage. It makes sense that Twitter would focus its moderation efforts at that level. Promising to intervene in cases where there is serious harm isn’t just sensible — it’s also scalable.
Elsewhere, disinformation researcher Aviv Ovadya has some good suggestions for how tech platforms can respond to the threat of synthetic media. My favorite: they could use their monopoly powers to require app developers to insert watermarks — which could then be easily detected by other tech monopolies. Ovadya writes:
A further lever that could make these controls more ubiquitous would be if the Apple and Google app stores required all synthetic media tools to implement them. This would then have those companies impacting creation and partially governing how synthetic media can be created on their platforms. Finally, a company like Facebook could also take advantage of the existence of hidden watermarks to treat synthesized content differently, impacting distribution (and governing their own influence; though they may be able to offer some of that governing power to independent bodies, as they do with third party fact checkers).
All of these restrictions are limited in impact — for example, malicious actors might still find tools that don’t have any controls. But with the right incentives, those tools are likely to be harder to access and inferior in quality, as they may be more difficult to monetize if they are not available on popular platforms. No mitigation to this challenge is a silver bullet. We need defense-in-depth.
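For the curious, the mechanic Ovadya describes — a creation tool embedding a hidden marker, and a platform checking for it later — can be sketched in a few lines. This is a toy illustration, not how any real tool or platform does it (production watermarks are far more robust and imperceptible); here the "watermark" is just a fixed bit pattern hidden in the least significant bits of pixel values:

```python
# Hypothetical 8-bit "this is synthetic" signature a tool might agree to embed.
MARK = [1, 0, 1, 1, 0, 0, 1, 0]

def embed_watermark(pixels):
    """Return a copy of `pixels` (0-255 values) with MARK written into the
    least significant bits of the first len(MARK) values."""
    out = list(pixels)
    for i, bit in enumerate(MARK):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the mark bit
    return out

def is_watermarked(pixels):
    """True if the LSBs of the first len(MARK) values spell out MARK —
    the check a distribution platform could run on upload."""
    return [p & 1 for p in pixels[:len(MARK)]] == MARK

original = [200, 13, 77, 54, 90, 128, 255, 3]
marked = embed_watermark(original)
print(is_watermarked(marked))    # True  — platform can label or limit reach
print(is_watermarked(original))  # False — unmarked media passes through
```

Each pixel changes by at most 1, so the mark is invisible to a viewer — which is also why a scheme this naive is trivially stripped by re-encoding, and why Ovadya's point about defense-in-depth (app-store requirements plus platform-side checks) matters more than any single watermark.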
Many of the concerns I have about Big Tech are rooted in the sheer size of the companies, and all the unintended consequences that come with scale. If the big guys want to show off the benefits that come with scale, insisting on an ecosystem that watermarks synthetic media could be an excellent place to start.