
Proposal: A Few Simple Reforms to Section 230

The Soren Review
By The Soren Review • Issue #4
Let’s Make it Clear that When Platforms Promote Content, They Become a Party to that Content

There has been a flurry of discussion about the need to reform Section 230, a US law shielding internet companies from liability for content posted by users (third-party content). The intent of Section 230 was to encourage content moderation by ensuring that users couldn’t sue companies over their moderation decisions. In practice, however, it has become a blanket shield for companies to avoid taking responsibility for the content on their platforms.

After the 2016 election, in which foreign interference largely took the form of misinformation spread via social media, Democrats in the US began discussing outright repeal of Section 230 as a way to force companies to police the content on their platforms. This missed the point of the law (to encourage moderation) and would likely result in companies making fewer moderation decisions, not more. Meanwhile, Republicans began crying foul over “censorship” by the platforms, cries that only increased after the ban of President Trump following the January 6th insurrection and the much more active hand most of the platforms took against COVID-19 misinformation (which skewed to the right, politically). While their complaints were largely unfounded, their remedy stuck closer to the original intent of the law: they wanted to punish platforms for their moderation decisions through reforms and/or repeal of Section 230.

New angles on the discussion are beginning to emerge as well: China is more actively entering the “misinformation” game alongside Russia, and traditional media companies are increasingly seeing their business model (advertising) eaten alive by social media giants like Facebook, which, thanks to Section 230, don’t have the same obligations around controlling the content on their platforms (or the expenses that go along with them).

I’ll be sharing links to much of the analysis I’ve been seeing of these issues tomorrow. Today, I want to share a few reform proposals of my own.
"Congressional Committee Hearing on Women and Technology" by IREX
Clarifying What It Means for Content to Be “Third Party”
Section 230 shields internet companies from liability regarding third-party content. The original idea here was around two scenarios:
  1. Someone posts something absolutely heinous on a social media site like Facebook. Facebook can’t be sued for that guy’s post (it’s not their content).
  2. Someone posts something absolutely heinous in the comment section of a blog. The blog owner removes the comment. The blog owner can’t be sued by the poster: it’s their right to allow/disallow what they want.
The intent, as we’ve said, was to encourage moderation by shielding companies (in the second scenario), but also by providing them some wiggle room to set their own standards (allowing the first scenario). That wiggle room creates much of the debate: what should platforms be responsible for fixing? And by responsible, do we mean legally responsible or just ethically so?
A few gray areas have emerged since Section 230 was passed that I think deserve clarification. There are cases in which, either due to the exchange of money or through the actions of a platform, content should no longer be considered strictly “third-party.” In other words, the platform itself becomes a party to the content in certain ways, and once that happens, it should no longer enjoy the full benefits of Section 230. In these cases, platforms should be liable for their failure to moderate content but protected when they do in fact make moderation decisions (in other words, Scenario 2 stands; Scenario 1 does not). The cases around which I think we should make this clarification are the following:
  1. Paid Content: If you accept money to promote content, you become a party to that content. You may decline to run the ad (a protected content moderation decision), but if you do run the ad you are no longer shielded from liability around the content of the ad. You can be sued alongside the advertiser.
  2. Promoted Content: If the platform (whether by human choice or automatic algorithmic decision making) promotes a piece of content (increases its circulation), the platform becomes a party to that content and shares liability for its contents. Demoting content (a content moderation decision) is still fully protected by Section 230.
  3. Promoted Platform Constructs: Every platform has its own “landscape” of constructs: on Facebook you have friends, can join groups, and follow pages. On Twitter, you have followers. On Pinterest you can follow both people and “boards.” Those platform constructs themselves can be “recommended” or “promoted”: here’s who we think you should follow, here’s a group we think you should join, etc. When a platform makes recommendations around those constructs (even though the specifics of the “group” or “profile” or “board” or what-have-you may be user generated), the platform becomes a party to that content by virtue of its recommendation of it. As such, it shares some of the liability for that content.
Clarification Around Legal Process
The essence of Section 230 is that platforms are not liable for content they did not produce. While I’ve proposed changing that somewhat when platforms either accept money for promoting content or decide (often based on an algorithm) to themselves promote or recommend certain content or constructs, on the whole this principle remains. If you posted it on Facebook, you are responsible for it; Facebook faces no liability for that content unless Facebook “decides” to join you in spreading or amplifying what you posted. So for content that is either (a) demoted or (b) simply left alone (not promoted or amplified), Section 230 remains the same as before.

However, a recurring question around Section 230 has been how platforms must respond to legal actions pertaining to content. The Digital Millennium Copyright Act requires that platforms have a process for responding to takedown requests regarding copyright infringement. But what about cases where a court rules that content is defamatory? Or where someone obtains a restraining order against an abuser or stalker who has shared private/intimate content? Platforms have not always been responsive to such processes, so I propose a reform requiring them to police content that has been so adjudicated. Though the platform is not itself liable for the posted content (assuming it hasn’t promoted it in some way), it can be liable for failing to moderate that content once it has been presented with a legal decision about it.
The intended effect of these reforms is to inject a dose of responsibility into the platforms without changing the essence of the underlying principle that they are not liable for what is truly, strictly third-party content. It does so not by creating regulation around what categories must be moderated (which would almost certainly violate the First Amendment) but by clarifying that when a company makes itself a party to content, it can’t expect to keep passing the buck. The courts will have a busy time deciding what counts as true “damage” to a party from content that was promoted by a platform. But as standards for that emerge, companies will make risk-adjusted decisions about what degree of liability they are willing to accept and adapt their algorithms, advertising reviews, and content moderation practices accordingly.
The Soren Review

News, analysis, and opinion on tech policy, governance, security, and economics.
