You hate to accuse our big tech platforms of being responsible during a crisis. For one thing, they benefit from low expectations, having historically ignored much of the misinformation that they unwittingly promoted with their recommendation algorithms. And for another, the American part of the COVID-19 crisis is likely just beginning: each day brings with it a fresh raft of headlines about new diagnoses, new conference cancellations, new restrictions on employee travel, and so on. With the novel coronavirus, as with so much else, it appears that things really are going to get worse before they get better.
Still, as I look at the past few days of news, I can’t help but notice hopeful signs. The big tech companies and social platforms are taking meaningful action to direct people to timely, accurate information about the virus. And some of those steps they’re even taking proactively. Here are a few.
More than a week ago, Facebook began inserting a box into the News Feed directing users to the Centers for Disease Control’s page about COVID-19. Minor though this may seem, it represents a meaningful departure from the company’s usual approach to putting things in the feed. The essence of the feed, after all, is personalization — Facebook wants to show you only things it has some reason to believe you will care about, whether because you’re friends with a person or have liked a page. With the virus box, Facebook put a rare algorithmic thumb on the scale, presumably driving many millions of users to reliable, vetted information from an authoritative source.
On Tuesday night, the company took further steps to address the virus’ spread. In a Facebook post, CEO Mark Zuckerberg said the company would grant unlimited free ad credits to the World Health Organization to promote accurate information about the crisis. The company will also remove “false claims and conspiracy theories that have been flagged by leading global health organizations,” and will block people from running ads that “try to exploit the situation,” such as by falsely advertising a cure. These are also good steps — although, as ever, policy is what you actually enforce. We’ll see!
Twitter has implemented similar measures, the company said Wednesday. Searching for COVID-19 will take you to a page featuring recent stories from public health organizations and credible mainstream news sources. The search also accounts for common misspellings, the company said.
Twitter also said that while it had not yet seen Russian-style efforts to sow discord via large-scale information operations, it would take a “zero-tolerance approach to platform manipulation and any other attempts to abuse our service at this critical juncture.” Easier said than done, of course, but it’s clear that the problem has the company’s attention. It’s also giving away ad credits to public health organizations and other nonprofits.
Google announced this morning that it would be rolling out free access to “advanced” features for Hangouts Meet to all G Suite and G Suite for Education customers globally through July 1st. That means organizations can host meetings with up to 250 participants, live stream to up to 100,000 viewers within a single domain, and record and save meetings to Google Drive. Normally, Google charges an extra $13 per user per month for these features on top of standard G Suite access under its “enterprise” tier, for a total of $25 per user per month.
There’s obviously an element of self-interest in this. Tech companies give away their products for free during times of crisis for the same reason that newspapers lower their paywalls: it’s good for attracting new paying customers. But it’s also a good and helpful and pro-social thing to do, and I suspect many organizations will find it useful.
That’s not to say that misinformation isn’t spreading on tech platforms — just as it’s spreading on the larger internet, and among friends and family in conversation. If there’s a platform that seems to be under-performing in the current crisis, it’s Facebook-owned WhatsApp, where the Washington Post found “a flood of misinformation” in countries including Nigeria, Singapore, Brazil, Pakistan, and Ireland. Given the encrypted nature of the app, it’s difficult to quantify the scale of the issue. (The Post doesn’t really offer a guess.) Misinformation is frequently shared in WhatsApp groups, where membership is limited to 250 people. Information in one group can easily be forwarded to another, but there’s a meaningful amount of friction in spinning up multiple groups to peddle phony miracle cures or spread malicious rumors.
Still, people are doing it. It’s a price we pay for having tools that enable conversations that the government can’t listen in on. My hope is that companies building encryption do so in a way that minimizes the harm from the hoaxes that messaging apps will invariably contain. But that’s far from a given.
Many of the measures described above are relatively minor in the scheme of things. Ultimately, the responsibility to coordinate the response to the spread of the virus belongs to the US government. Still, it’s worth noting that after a years-long pressure campaign from academics, journalists, and elected officials, tech platforms are beginning to accept responsibility for the material they host. Not in a legal sense — Section 230 is still the law of the land — but in a moral sense.
That’s progress, and I’ll take it.