Everything in Moderation
🆕 Facebook is very pro thresholds
Hello everyone. I spent the start of the week in the eye of Storm Dennis, next to a full-to-bursting river, and I think the experience helped me prepare for the onslaught of Facebook-inspired content moderation news this week.
Thank you to new subscribers from Ofcom, Yiibu, Technology Review and, erm, my mum (no idea what took her so long…). I’m now over the 200 subscriber mark 🎉— do hit reply and say hi or send me a celebratory note. 
Thanks for reading, 
Ben

📃 A 22-page-long line in the sand
It’s an age-old policy trick: ahead of a meeting with a senior EU Commission representative, produce a white paper that sets out your stall and then put your CEO up on stage at a conference.
That’s what Facebook did earlier this week when it published a 22-page white paper, written by VP of Content Policy Monika Bickert, that outlines four key questions about what a regulatory framework might look like. Meanwhile, over in Munich, Mark Zuckerberg was in front of an audience at the Munich Security Conference explaining his preference for something ‘between a publisher and a telco’. He also penned a punchy piece in the FT.
One thing stood out about the paper and the surrounding noise: Facebook’s admission that it does not believe perfect content moderation is possible (see EiM #50). On page 7, it states:
Given the dynamic nature and scale of online speech, the limits of enforcement technology, and the different expectations that people have over their privacy and their experiences online, internet companies’ enforcement of content standards will always be imperfect.
This conciliatory tone feels new. In the recent past, there’s been little mention of Facebook failing to meet the challenge of moderation at scale, only that the work is ongoing. There was no allusion to imperfection in any of the Oversight Board documentation (EiM #44) and it certainly didn’t come up in Zuckerberg’s Georgetown speech in October 2019, which is still the most comprehensive account of his views on free speech that we have. ‘Imperfect’ marks a definite change.
Bickert at SXSW in 2017 (Flickr / nrkbeta / CC BY-SA)
Why does it matter? Admitting that a flawless system of moderation is not possible allows Facebook to argue for ‘thresholds’ (which appears five times in Bickert’s paper) and opens the door to ‘performance targets’ (six mentions). Staying below an agreed line on, say, the prevalence of hate speech is, according to Facebook, the best that both it and the EU Commission can hope for. Eradicating it completely is not possible, Bickert argues, and shouldn’t be the goal. Any regulation should be about being good enough, not about being great.
Sadly for Zuck et al, that’s not going to wash: Thierry Breton, the French commissioner, called the proposals ‘too slow… too low in terms of responsibility and regulation’. Nevertheless, admitting fallibility seems to be Facebook’s latest tactic for securing the least bad form of regulation in Europe.
🎤 DoJ on the mic
Back on Zuckerberg’s home soil, the US Department of Justice held a workshop on Section 230 with legal and policy experts, ominously entitled ’Nurturing innovation or fostering unaccountability?’ and featuring the US Attorney General and the FBI director. Axios has a good read on what it could mean in the long run and how it had Donald Trump’s chubby fingerprints all over it.
And if you’re at a loose end this evening, you might decide to watch all four hours of the workshop here on YouTube. Or, you know, maybe not.
🚩 The UK presses on with regulation
After last week’s EiM (#51) touched on the outcomes of the UK’s Online Harms white paper consultation, I was struck by this tweet from the Guardian’s media correspondent. 
Jim Waterson
The lengthy campaign by some print newspaper groups for stricter controls on Facebook/Google is now turning into a bit of a shitshow. Everyone’s realising it’s hard to create a “legal but harmful” category for websites that doesn’t drag in news stories/comment sections. https://t.co/QsRdOuc3YV
It’s a reminder that, when you call for systems and frameworks and regulation, organisations you perhaps didn’t expect can get caught up in them.
⏰ Not forgetting...
The 100th edition of the Social Media and Politics podcast features Dr Tarleton Gillespie, author of Custodians of the Internet. Listen here.
Content Moderation and the Politics of Social Media Platforms, with Dr. Tarleton Gillespie
Twitter will test what it is calling ‘community labelling’, a way for users to flag misleading information from public figures. It arrives soon (March 5th, according to Reuters) and might involve badges or points.
Twitter tests labels, community moderation for lies by public figures - Reuters
Remember Alison Parker, the US TV journalist who was shot and killed during a live interview for her station back in 2015? Her father cannot get YouTube to remove the videos of her death.
YouTube refuses father's request to remove video of daughter's killing
Kickstarter’s new union, the first of its kind at a tech company, will have a say in how content is moderated, as well as in company pay and benefits. Amazing stuff. Well done to them.
Everything in Moderation is a weekly newsletter about content moderation on the web and the policies, people and platforms that make it happen.
If you like what you’re reading and want to help me cover the costs of Everything in Moderation (£7 a month for Revue plus my time), why not get me a Ko-Fi :) (16 and counting). Thanks! - @benwhitelaw