October 14 · Issue #399
Welcome new subscribers! Today we celebrate The Interface’s second birthday. For two years now, we’ve done our best to bring you the day’s most important developments in technology and democracy. The next year, which will bring us a US presidential election, promises to be the most consequential yet. Thank you for reading the newsletter and sharing it with your friends and coworkers. And if you’re in San Francisco and would like to meet us in person, there are still some tickets available for our first-ever live event on Tuesday, October 22nd: my conversation with the brilliant disinformation researcher Renee DiResta. I hope to see you there!
Last week, Facebook said it had changed its advertising policies to exempt politicians and political parties from rules banning misinformation. As a result, candidates are now free to lie in their ads, and some of them are already doing so, and I bet you can guess who one of them is! Here, briefly, was my rationale for why the policy might be defensible:
- There is a long tradition of lying in American politics, much of which has taken place in advertising. See, for example, the history of direct mail campaigns, or of robocalls.
- Lying is bad, but it’s good to know which politicians are liars.
- A robust (if struggling!) media apparatus aggressively documents and describes these lies as part of its campaign coverage.
- The debate about candidates’ positions and their relative truthfulness is an important part of the campaign and of a healthy democracy.
- On balance, I would rather have these discussions taking place in public than deputize a for-profit corporation to preempt them.
I regret to say that my rationale satisfied no one, 24 people unsubscribed from this newsletter, and the debate continued raging into the weekend.

In a series of tweets on Saturday, Ms. Warren, a senator from Massachusetts, said she had deliberately made an ad with lies because Facebook had previously allowed politicians to place ads with false claims. “We decided to see just how far it goes,” Ms. Warren wrote, calling Facebook a “disinformation-for-profit machine” and adding that Mr. Zuckerberg should be held accountable. Ms. Warren’s actions follow a brouhaha over Facebook and political ads in recent weeks. Mr. Trump’s campaign recently bought ads across social media that accused another Democratic presidential candidate, Joseph R. Biden Jr., of corruption in Ukraine. That ad, viewed more than five million times on Facebook, falsely said that Mr. Biden offered $1 billion to Ukrainian officials to remove a prosecutor who was overseeing an investigation of a company associated with Mr. Biden’s son Hunter Biden.

And then Warren asked what I thought was a pretty good question. She tweeted: “You’re making my point here. It’s up to you whether you take money to promote lies. You can be in the disinformation-for-profit business, or you can hold yourself to some standards. In fact, those standards were in your policy. Why the change?”

I continue to think Facebook can make a good business case for accepting political ads with misinformation. And I think there’s a case that our politics are better when candidates have wide latitude to speak freely, without intervention from private businesses. At the same time, though — and the events of the past few days have driven this home for me — there might not be much of a moral case for Facebook’s policy here. Here’s why.

One, if Facebook accepts that politicians will lie in their ads on the site, then the company also has to accept that it will be a partner in spreading misinformation. (This is not a theoretical worry; the Trump-Biden ad was viewed more than 5 million times.) Given how much Facebook has invested in what it calls “platform integrity” — a coordinated effort to rid the site of misinformation — this policy is counterproductive and (for those who work on platform integrity) demoralizing.

Two, the platform has historically incentivized inflammatory speech, and permitting lies in ads could mean that Facebook once again plays a key role in the outcome of the 2020 election. Charlie Warzel argues in the New York Times that, given the Trump campaign’s propensity for telling outrageous lies, Facebook’s policy is a de facto thumb on the scale for Republicans. This is notable for lots of reasons, starting with the fact that the stated intent of the policy is to ensure that Facebook has less influence over political outcomes.

Three — and this is what Warren noted so sharply — Facebook’s policy puts it in the uncomfortable position of profiting from politicians’ lies. It doesn’t matter that political ads make up less than 5 percent of the company’s revenues — Facebook can now expect to take a public-relations hit every time a politician’s lie goes viral.

Finally, Josh Constine added a fourth dimension to consider here, which is that Facebook’s sophisticated ad-targeting capabilities could make an untruthful political ad even more pernicious there than, say, in a broadcast TV ad. Reach the right low-information voter with the right lie at scale, the argument goes, and you just might tip the country into full-blown idiocracy.

I find the collective arguments in this case … persuasive? I would still far rather have citizens sort fact from fiction on their own, using the information that they gather from a free press. But I acknowledge that, for the most part, they don’t. Time after time on tech platforms, we have seen how a posture of neutrality winds up benefiting the worst actors at the expense of everyone else. And there’s a real risk of that happening again here.

In the meantime, Facebook’s effort to avoid one trap has landed it in another. It may have sidestepped lots of tricky questions about what is true and what is false in the political arena. But there are few ways in which we demonstrate our values more clearly than in what we will accept money to do. Facebook has now opened itself up to the legitimate criticism that it is spreading misinformation for profit. And with each new viral lie, I expect that criticism will only grow louder.
Today in news that could affect public perception of the big tech platforms.
In early 2018, as development of Apple’s slate of exclusive Apple TV+ programming was underway, the company’s leadership gave guidance to the creators of some of those shows to avoid portraying China in a poor light, BuzzFeed News has learned. Sources in a position to know said the instruction was communicated by Eddy Cue, Apple’s SVP of internet software and services, and Morgan Wandell, its head of international content development. It was part of Apple’s ongoing efforts to remain in China’s good graces after a 2016 incident in which Beijing shut down Apple’s iBooks Store and iTunes Movies six months after they debuted in the country.
The Chinese government built a back door into a propaganda app, allowing it to see users’ messages and photos, browse their contacts and Internet history, and activate an audio recorder inside the devices. The app is reportedly the most widely downloaded app in China, with more than 100 million users. (Anna Fifield / The Washington Post)
GitHub CEO Nat Friedman had a tense meeting with employees after a leaked email revealed the company is renewing its contract with Immigration and Customs Enforcement. The news made some employees question how GitHub plans to interact with non-democratic countries like China. Friedman said the company’s position on China is “evolving.” (Colin Lecher / The Verge)
It’s a feature that Pinterest expects will reduce complaints and raise satisfaction among a small subset of power users. But it will do little to help the site expand, and could even reduce engagement for those who use it by limiting the information available to the algorithm. It’s the kind of trade-off the company says it’s willing to make, especially since early tests showed no significant drop-off in user activity. Other trade-offs are proving trickier, however, like how to understand users deeply enough to keep them coming back for more, without boring them, boxing them in, or creeping them out. “Users don’t want to be pigeonholed,” says Candice Morgan, the company’s head of inclusion and diversity. She commissioned a study earlier this year to understand how Pinterest could better serve users from backgrounds that the platform underrepresents. “They don’t want us to guess what they’re going to like based on their demography,” she adds.

Here’s a sharp PewDiePie profile that examines the YouTuber’s rise to fame, why he doesn’t consider himself a white nationalist (or even a conservative), and what happened to the $50,000 he pledged to give the Anti-Defamation League and then retracted. See also its useful description of the insular culture of “inner YouTube.” (Kevin Roose / The New York Times)
Here’s your hashtag of the year.