Today is Ash Wednesday, a day on which we cultivate penitence and contrition for our sins. It is perhaps appropriate, then, to reflect upon our participation in the world Facebook has made—and upon whether we are complicit in the enormous, hidden psychological and emotional costs individuals are paying to maintain it. Many people will leave Facebook for Lent anyway; perhaps we would all do well to never come back.
That would be, I think, a reasonable practical conclusion to draw from The Verge’s recent exposé on the suffering Facebook’s content moderators endure.
Facebook outsources the front lines of moderation to independent companies, presumably to keep costs down by ensuring it does not have to give stock options to workers performing such (putatively) menial tasks. The description of the corporate environment in which these individuals work is troubling, to be sure, but the description of the emotional and mental trauma they face is disturbing in the extreme. It is enough to awaken a strong sense of penitential remorse for ever feeding the Beast. (Whether such a disturbance will lead to True Repentance, though, remains to be seen. Probably not.)
The psychological burdens on the moderators stem, it seems, from two sources: the ambiguity of Facebook’s guidelines, and the horrendous images they are faced with on a regular basis. The former seems to enact a kind of mental attrition, as moderators are tasked with making decisions whose results satisfy no one. The latter brings about trauma, pure and simple; some of the posts they see are unspeakable in their grossness.
The fact that these moderators are faced with such a demanding job, though, arises in part from choices Facebook made in designing the current platform. Facebook opened up possibilities for users that, if turned off, could seriously reduce the need for any moderators. Stay with me for a second: suppose that users only saw posts from those whom they voluntarily followed and conversations among their mutual friends, and were not exposed to any third parties they had not followed directly. This was how Facebook once was, not so long ago. Such a limitation would seriously impede posts from going ‘viral,’ because for someone to discover ‘content’ from a person they had not followed, they would have to find and follow that person. In other words, suppose that the whole notion of the ‘public’ post disappeared in favor of a large network of overlapping, but still exclusive, relationships.
Within such a digital environment, it’s likely that horrendous content would still be posted and seen by others. We could not escape it altogether. But it seems to me that when such content arose, it would do so within the very mechanisms of accountability that are likely to work: not the accountability of a faceless, nameless corporate drone, but the public shame enacted by one’s real community. And if that did not work, everyone would still have the option of unfollowing and blocking the person, such that they would never be exposed to them again.
This, of course, requires some kind of mental and emotional fortitude on the part of Facebook users (hello!). Living in a world without being destroyed by the inherent vagueness of ‘offensive speech’ demands the ability to tell our neighbors the truth, and to learn to hear it. Our own mental and emotional fragility is the real source of the burdens being placed upon the poor moderators. They face in undiluted form what we would otherwise encounter in less frequent, even if equally potent, doses of evil; they face it on our behalf.
The natural response is, of course, that relying on the accountability of individuals’ real communities will never work. But it’s that collective sense of impotence in the face of evil that has to be broken for virtue to be learned. The word of the Gospel is: Thou mayest. We may learn to hold one another accountable, because we must.
In other words, Facebook should try to become more like the world, not less; its mistake is in attempting to create an idealized community, one which shields us from some of the horrendous realities of what human beings are willing to say and do to one another. Were it to take this approach, its population would no doubt decrease, as the chance of ‘virality’ was closed off and local communities were tasked with dealing with local problems (or collectively blocking the offenders).
But it seems plausible that the platform might become better for it. It is not because I am optimistic that I think Facebook should allow its users to see what others are capable of, but because I am radically and undeniably pessimistic: it is because humans are terrible to one another that we cannot look away. The light shines in the darkness—not the dull, lifeless glow of Facebook with its content restrictions and its moderators, but the light which encounters the darkness in its horrific fury and yet does not look away.
But then, that is what Lent is about after all: not looking away. If Facebook allows us to do so, we must not: the burdens our use of Facebook imposes upon a small class of people are many. Facebook is the world that Facebook made—and that we made with it.
For this, and for all our sins, O Lord, we ask Thy mercy and Thy grace.