A few years ago, Facebook became aware that Russia kept posting misinformation all over the network. The misinformation was designed to rile people up and make them share it with their friends, and because people are generally pretty easy to rile up, Russia’s strategy was very successful. Some prominent political scientists believe that the country’s election interference, both on and off Facebook, ushered Donald Trump into office. And we’ve spent a good portion of the past three and a half years arguing about it.
About a year after the election, Facebook introduced a tool to let people know if they had unwittingly interacted with the Russian troll army. If you liked the page of a troll in disguise, you could visit an obscure part of Facebook and it would tell you. The tool would not tell you if you had viewed any of the page’s posts, or even if you had shared them. Alex Hern wrote about this flaw at the time in The Guardian:
Facebook will not tell those users about their exposure to misinformation, although the company has not said whether it is unable, or simply unwilling, to provide that information. A source close to the company described it as “challenging” to reliably identify and notify everyone who had been incidentally exposed to foreign propaganda.
Fast-forward to today, when the misinformation we’re worried about primarily has to do with COVID-19. Over the past few weeks, we’ve talked about hoaxes attempting to link the coronavirus to new 5G networks, dangerous fake “cures” based on drinking bleach, and so on. Reporting has consistently found these sorts of articles racking up thousands of shares on Facebook. Even more than Russian misinformation, the COVID-19 hoaxes pose clear public health risks. So what should Facebook do about it?
On Thursday, the company said it would invite people who had shared a hoax to visit a page created by the World Health Organization debunking popular COVID-19 myths. Here’s Guy Rosen, Facebook’s vice president of integrity, in a blog post:
We’re going to start showing messages in News Feed to people who have liked, reacted or commented on harmful misinformation about COVID-19 that we have since removed. These messages will connect people to COVID-19 myths debunked by the WHO, including ones we’ve removed from our platform for leading to imminent physical harm. We want to connect people who may have interacted with harmful misinformation about the virus with the truth from authoritative sources in case they see or hear these claims again off of Facebook. People will start seeing these messages in the coming weeks.
If you didn’t read the above paragraph closely, you might assume Facebook’s system would work something like this: You share an article that says something like, “huffing macaroni and cheese fumes cures coronavirus,” that article gets debunked by an independent fact checker, and then Facebook links you to the WHO’s page about macaroni and cheese myths. Maybe there would even be a message that said something like, “Just so you know, huffing macaroni and cheese does not cure coronavirus. Click here for more.”
But we have learned that people are hard-headed and do not enjoy being told that they have been duped. There was a famous moment after the 2016 election when Facebook began labeling false posts as “disputed” and discovered that doing so made people share them more. And so the company has taken a different approach here.
A few weeks from now, people who have shared mac-and-cheese-cured-my-COVID type posts will see a big story in the News Feed. It is not labeled “Hey, you have been duped.” Rather, it says: “Help friends and family avoid false information about COVID-19.” It then invites them to share a link to the WHO’s myth-busting site, and includes a button that takes the user to the site directly.
The goal of this approach is to make people less defensive about the fact that they may have been wrong, and to smuggle some good information into their brains without making them feel dumb about it. The appeal to helping friends and family is also a nice touch. Who doesn’t want to help their friends and family? And Facebook is putting the information directly into the News Feed — no need to visit some arcane help center buried beneath layers of taps.
But this approach also has downsides. If you do want to know if you’ve accidentally shared a lie to all your friends, this tool won’t help you. And the WHO myth-busting page currently debunks 19 different hoaxes — what are the odds you’re going to scroll all the way down to the one you accidentally shared and read it? What about next month, when that list has grown to 40?
This is not a small problem. Avaaz, a human rights group that tracks misinformation closely, published an in-depth report this week that examined 100 pieces of misinformation, written in six languages, that were shared on Facebook. It found that those posts were shared more than 1.7 million times and seen an estimated 117 million times. (Vice talks to the authors of the report.)
The authors of the Avaaz report argue that Facebook should inform each person who has viewed coronavirus misinformation about exactly what they got wrong. The group even tested such a system, and says the results show it can work:
In order to test the effectiveness of corrections, a hyper-realistic visual model of Facebook was designed to mimic the user experience on the platform. Then a representative sample of the American population, consisting of 2,000 anonymous participants, chosen and surveyed independently by YouGov’s Academic, Political, & Public Affairs Research branch, were randomly shown up to 5 pieces of false news that were based on real, independently fact-checked examples of false or misleading content being shared on Facebook.
Through a randomized model, some of the users, after seeing the false news, were shown corrections. Some users saw only the false or misleading content, and some saw neither. Then the surveyed participants answered questions designed to test whether they believed the false news.
Avaaz said its study showed that belief in misinformation declined by at least 50 percent among study participants.
Rosen told me that calling out these hoaxes with a special message might give them more visibility than they originally had, amplifying the misinformation. Maybe you scrolled by a piece of misinformation without internalizing its contents; if Facebook puts a big red box in the News Feed that says “by the way, this is false,” the effects could be counterproductive.
Still, he said, Facebook is testing the use of language that more explicitly says that a person is seeing the WHO messages because they saw misinformation. The goal is to provide the most effective messaging possible, he said.
One possibility I see is to offer different interventions based on whether someone simply saw a hoax, or actively commented on or shared it. People who share hoaxes arguably deserve a stronger response than someone who simply saw something — or maybe even just thumbed past it — in their feed.
Compared to its early work on the Russian troll problem, Facebook has taken a refreshingly interventionist approach to stopping the spread of COVID-19 misinformation. But it also remains unclear which of those interventions actually work. Given the risks to public health, here’s hoping that Facebook learns quickly.