🆕 Public vs non-public (and why it's important)

Everything in Moderation
Hello everyone, and happy Friday from rainy London.
If you’re interested in exploring the theme of the last EiM (#43, Human rights, applied online), check out this new episode of the Lawfare podcast with David Kaye, UN special rapporteur, on policing free speech online.
This week’s edition is 711 words, with an estimated reading time of 3 minutes, 33 seconds (is this useful? Maybe? Let me know).
Thanks for reading,
Ben

What does it mean to be public online?
When I worked at a UK national newspaper leading a team of comment moderators, one of the regular complaints from banned commenters was the age-old claim of being ‘censored’. They would argue their right to free speech was being violated, and the team would have to explain that the site (which you had to pay to access) had clear terms of service that allowed us to moderate comments against our community guidelines (essentially, don’t be offensive).
There were occasions when the team and I allowed controversial posts to stay up because they were being rationally debated. Our (perhaps misplaced) thinking was as follows: if you can’t attempt to have a discussion about these topics on a newspaper website (even if they are behind a paywall), then where can you?
For that reason, I’m very interested in the public/non-public debate that sits behind the question of how we regulate speech online. What does it mean to be a ‘public forum’? Is it enough to be published on the web? Is it about the number of users on the platform? How do users’ perceptions of that forum play into it?
Facebook's view of itself
In a recent article on the Lawfare blog, Jed Rubenfeld, a law professor at Yale, sets out the challenges of viewing the likes of Facebook and Google as ‘public forums’, as well as the implications of them being deemed nonpublic. Go down one road and the mega-platforms ‘end up as new Silk Roads on a scale never seen before’. Down the other, there are hate speech policies applied inconsistently, arbitrarily and, in some cases, disproportionately.
All of this goes some way to explain Mark Zuckerberg’s main thrust in his much-reported speech at Georgetown University back in October (full text) in which he was at pains to reiterate that Facebook was about ‘regular people having more of a voice’. Most commentators saw that as further support for the idea of Facebook as a public town square (see this 2018 New Yorker piece for more) and, subsequently, for reduced moderation liabilities. In Zuck’s defence, Jack Dorsey made the same claim of Twitter in January this year. 
The good news is that smart folk are working on finding an answer to this public vs non-public debate. Eli Pariser, who co-founded Upworthy and Avaaz, is working with Professor Talia Stroud on Civic Signals, a new non-profit set up to help create new rules for digital spaces by taking inspiration from town planning and city-building (ideas that are referenced in Jed’s piece).
It’s an odd debate in many ways — US-centric, legalistic for the most part and detached from the day-to-day discussion of content moderation — but the simple question of ‘what is public?’ may hold the answer to a lot more than we realise.
Taxonomies don't have to be taxing
Jan Gerlach
At Network of Centers annual meeting: Rob Faris from @BKCHarvard presenting a study on content moderation within English Wikipedia: "we tried to simplify what harmful speech means and make it easy to understand. It's neither easy to understand nor simple, as you can see." https://t.co/S1bxUkV0ET
I don’t know what I like here more: the diagram showing the taxonomy of harmful speech or Rob’s seeming adoration of it. (The link to his research is in the subsequent Twitter thread).
The writ is on the wall
This has been a big week for the former CPL Resources moderators bringing a case against Facebook for the psychological effects of moderating content (EiM #40, The content moderation capital of the world).
The writ has been filed for the lead plaintiffs, meaning Facebook will be asked to provide information relating to the case in the coming weeks. Vice News also reports that other moderators are set to join as the lawsuit ‘snowballs’. Last week, The Times reported these new claimants were from Latvia, Germany and Portugal.
This case is already shaping up to be one of Facebook’s toughest challenges of 2020. I’ll keep returning to it.
Not forgetting...
Pinterest and The Knot Worldwide have cracked down on the language used to describe former slave plantations on their sites, saying venues will not be allowed to use terms like ‘charming’ and ‘elegant’
Pinterest And The Knot Will Stop Promoting Wedding Content That Romanticizes Former Slave Plantations
Netzpolitik has done some strong reporting about TikTok’s moderation policies, with lots of good detail about how shifts work from Berlin, Barcelona and Beijing. This week, it uncovered ‘exclusionary’ practices towards people with disabilities and members of the LGBTQ community, including limiting their reach.
The FT has stepped back to evaluate Facebook’s oversight board and found a bunch of academics and policy people who, frankly, aren’t exactly effusive about it
Content moderation dilemma tests Big Tech’s reach | Financial Times
If you like what you’re reading and want to help me cover the costs of Everything in Moderation (£5 a month), why not get me a Ko-Fi :) Thanks! - @benwhitelaw
Ben from Everything in Moderation

A weekly newsletter about the policies, products, platforms and people shaping content moderation, now and in the future.

In order to unsubscribe, click here.
If you were forwarded this newsletter and you like it, you can subscribe here.