Section 230 shields internet companies from liability for third-party content. The original idea was built around two scenarios:
- Someone posts something absolutely heinous on a social media site like Facebook. Facebook can’t be sued for that guy’s post (it’s not their content).
- Someone posts something absolutely heinous in the comment section of a blog. The blog owner removes the comment. The blog owner can’t be sued by the poster: it’s their right to allow/disallow what they want.
The intent, as we’ve said, was to encourage moderation by shielding companies (in the second scenario), but also by providing them some wiggle room to set their own standards (allowing the first scenario). That wiggle room creates much of the debate: what should platforms be responsible for fixing? And by responsible, do we mean legally responsible or just ethically so?
A few gray areas have emerged since Section 230 was passed that I think deserve clarification. There are cases in which, either through the exchange of money or through the actions of a platform, content should no longer be considered strictly “third-party.” In other words, the platform itself becomes a party to the content in certain ways, and once that happens, they should no longer enjoy the full benefits of Section 230. In these cases, platforms should be liable for their failure to moderate content but still protected when they do make moderation decisions (in other words, Scenario 2 stands; Scenario 1 does not). The cases around which I think we should make this clarification are the following:
- Paid Content: If you accept money to promote content, you become a party to that content. You may decline to run the ad (a protected content moderation decision), but if you do run the ad, you are no longer shielded from liability for the content of the ad. You can be sued alongside the advertiser.
- Promoted Content: If the platform (whether by human choice or by automated algorithmic decision-making) promotes a piece of content (increases its circulation), the platform becomes a party to that content and shares liability for it. Demoting content (a content moderation decision) is still fully protected by Section 230.
- Promoted Platform Constructs: Every platform has its own “landscape” of constructs: on Facebook you have friends, can join groups, and follow pages. On Twitter, you have followers. On Pinterest you can follow both people and “boards.” Those platform constructs can themselves be “recommended” or “promoted”: here’s who we think you should follow, here’s a group we think you should join, etc. When a platform makes recommendations around those constructs (even though the specifics of the “group” or “profile” or “board” or what-have-you may be user-generated), the platform becomes a party to that content by virtue of its recommendation. As such, they share some of the liability for that content.