If you see an ad on Facebook, should the contents of that ad be true? Historically, the answer has been yes. The company’s
posted advertising guidelines prohibit “misinformation,” defined here as “ads that include claims debunked by third-party fact checkers or, in certain circumstances, claims debunked by organizations with particular expertise.”
But that policy comes with an asterisk. As Judd Legum reported this week in his newsletter, Popular Information, Facebook is now exempting political figures from this policy. If a political candidate or party wants to run a Facebook ad announcing that their rival is a lizard person, they now have an open lane to do so.
Legum has already found several examples of the Trump campaign appearing to lie in its Facebook ads:
A false ad targeting seniors that claimed Trump was still considering closing the southern border “next week” when he had already publicly announced he would not close the border for at least a year.
An ad scamming its supporters by claiming there was a midnight deadline to enter a contest to win the “1,000,000th red MAGA hat signed by President Trump.” The ad was run every day for weeks.
An ad that falsely claimed Democrats are trying to repeal the Second Amendment.
So how should we think about this change in Facebook’s policy? Is this a crippling blow to the company’s efforts to prevent abuses on the platform? A principled stand for the freedom of speech? A pragmatic decision intended to avoid conflict with the company’s most dangerous regulators?
To some degree, it’s all of these things. But it’s also probably the right call.
Not that it has been generally received that way. News of Facebook’s policy change led to much fulminating over the past week. “Social media platforms have a responsibility to protect our democracy and counter disinformation online,” Seema Nanda, CEO of the Democratic National Committee, told CNN. “This is a serious missed opportunity by Facebook.”
Senator Elizabeth Warren, alluding to Russian interference in the 2016 election, went further. “There’s no indication that Zuckerberg or Facebook executives have come to terms with the role their unpreparedness played in that successful attack, nor have they shown that they understand what needs to be done to prevent another attack in the 2020 election,” she said. “In fact, this time they’re going further by taking deliberate steps to help one candidate intentionally mislead the American people, while painting the candidacy of others (specifically: mine) as an ‘existential’ threat. This is a serious concern for our democratic process.”
Certainly it would be nice if politicians stuck to the truth in their campaign advertising. And we live in a nation that has truth-in-advertising laws, enforced by the Federal Trade Commission. But like Facebook, the FTC declines to weigh in on the truth of political advertising. And in a case earlier this decade where a state attempted to mandate truth in political advertising, the law was struck down by a federal judge.
The state was Ohio, which had a law that declared it illegal to publish or broadcast “a false statement concerning the voting record” of a candidate. The law also gave the power to decide truth and falsehood to the state elections commission.
Then an anti-abortion group tried to put up billboards accusing a congressman of voting for abortion funding because he had voted for the Affordable Care Act. The congressman protested, arguing that Obamacare did not fund abortions.
But federal District Court Judge Timothy S. Black struck down the law:
“We do not want the government (i.e., the Ohio Elections Commission) deciding what is political truth — for fear that the government might persecute those who criticize it,” Judge Black wrote in his opinion. “Instead, in a democracy, the voters should decide.”
Facebook’s decision not to determine the merits of political speech in advertising seems to me to come from the same sensible place. If you don’t want the state making calls on political speech, you probably don’t want a quasi-state with more than 2 billion users making those calls, either.
On one hand, I get why people are angry. Viral misinformation remains a significant and disturbing problem. And so when Facebook shrugs off any responsibility for evaluating the content of political advertising, it can look like cowardice. Particularly when the company continues to face bipartisan criticism that its content moderation decisions are “biased” — your decisions can’t be biased if you refuse to make them in the first place. Problem solved!
And yet it strikes me that many of the people mad at Facebook for failing to police the claims in political ads are the same people complaining that the company is too big, too powerful, and lacks any real accountability to the public or its shareholders. To worry about Facebook’s vast size and influence — and I do! — while also demanding that it referee political speech seems like an odd contradiction.
Facebook’s approach to this problem has been to make political ads public so that researchers, journalists like Legum, and curious citizens can investigate the content of those ads themselves — and then have a free debate over their merits on and off the platform. It’s not a perfect solution, but it is a democratic one.
Hovering around this debate is a larger, unspoken concern about our current moment, which is that there is increasingly little penalty in public life for telling any lie at all. Pressing as that issue is, though, it’s unclear what a tech platform ought to do about it.