How tech companies should address their workers’ PTSD

January 27 · Issue #446
The Interface
On Friday I published a report confirming what had been obvious to anyone who spends much time talking to people who work in content moderation: the job causes post-traumatic stress disorder. The report was based largely on an extraordinary document in which Accenture, which sells its content moderation services to Facebook, YouTube, and Twitter, among others, requires employees to acknowledge that their work can lead to PTSD — and to tell their managers about any negative changes to their mental health. Labor law experts told me the document could be construed as an illegal requirement to disclose a disability.
At the time, I had managed to confirm only that the document was distributed to workers in Austin, TX, as part of Accenture’s contract with YouTube. A few hours after my report, the Financial Times reported that it had been distributed to moderators for Facebook in Europe as well. Around that time, I confirmed that workers on the Facebook project in Texas had also been asked to sign it. Facebook told me it was unaware of any documents that Accenture made its workers sign, but declined to comment.
Throughout my reporting, I attempted to pin down Accenture on which workers, exactly, it had warned about PTSD. The company’s PR team told me it regularly asked workers to sign “these types of documents,” but wouldn’t speak to the specific risk of PTSD. Indeed, no Accenture flack would ever use the word “PTSD” in an email to me.
But with the confirmation that the document was distributed to both YouTube and Facebook workers, it seems clear that the company has acknowledged that its workplace is unsafe for some portion of its workforce. As to how many people are affected, and which roles are most likely to result in long-term mental health issues, Accenture has refused all comment.
Whenever I write about these issues, people write to me to ask what the solution is. We will clearly need human moderators for the foreseeable future. How do we create jobs that are safe for the maximum number of workers? After speaking with more than 100 moderators, academics, labor experts, and company executives, here are five things I wish companies would do.
First, invest in research. We know that content moderation leads to PTSD, but we don’t know the frequency with which the condition occurs, or the roles most at risk for debilitating mental health issues. Nor have companies investigated what level of exposure to disturbing content might be considered “safe.” It seems likely that those with sustained exposure to the most disturbing kind of photos and videos — violence and child exploitation — would be at the highest risk for PTSD. But companies ought to fund research into the issue and publish it. They’ve already confirmed that these jobs make the workforce ill — they owe it to their workforce to understand how and why that happens.
Second, properly disclose the risk. Whenever I speak to a content moderator, I ask what the recruiter told them about the job. The results are all over the map. Some recruiters are quite straightforward in their explanations of how difficult the work is. Others actively lie to their recruits, telling them that they’re going to be working on marketing or some other more benign job. It’s my view that PTSD risk should be disclosed to workers in the job description. Companies should also explore suggesting that these jobs are not suitable for workers with existing mental health conditions that could be exacerbated by the work. Taking the approach that Accenture has — asking workers to acknowledge the risk only after they start the job — strikes me as completely backwards.
Third, set a lifetime cap for exposure to disturbing content. Companies should limit the amount of disturbing content a worker can view during a career in content moderation, using research-based guidelines to dictate safe levels of exposure. Determining those levels is likely going to be difficult — but companies owe it to their workforces to try.
Fourth, develop true career paths for content moderators. If you’re a police officer, you can be promoted from beat cop to detective to police chief. But if you’re policing the internet, you might be surprised to learn that content moderation is often a dead-end career. Maybe you’ll be promoted to “subject matter expert” and be paid a dollar more an hour. But workers rarely make the leap to other jobs they might be qualified for — particularly staff jobs at Facebook, Google, and Twitter, where they could make valuable contributions in policy, content analysis, trust and safety, customer support, and more.
If content moderation felt like the entry point to a career rather than a cul-de-sac, it would be a much better bargain for workers putting their health on the line. And every tech company would benefit from having workers at every level who have spent time on the front lines of user-generated content.
Fifth, offer mental health support to workers after they leave the job. One reason content moderation jobs offer a bad bargain to workers is that you never know when PTSD might strike. I’ve met workers who first developed symptoms after a year, and others who had their first panic attacks during training. Naturally, these employees are among the most likely to leave their jobs — either because they found other work, or because their job performance suffered and they were fired. But their symptoms will persist indefinitely — in December I profiled a former Google moderator who still had panic attacks two years after quitting. Tech companies need to treat these workers like the US government treats veterans, and offer them free (or heavily subsidized) mental health care for some extended period after they leave the job.
Not all will need or take advantage of it. But by offering post-employment support, these companies will send a powerful signal that they take the health of all their employees seriously. And given that these companies only function — and make billions — on the backs of their outsourced content moderators, taking good care of them during and after their tours of duty strikes me as the very least that their employers can do.

The Ratio
Governing
State attorneys general are meeting with US Justice Department attorneys next week to share information on their respective probes of Google. This move could lead to both groups joining forces on the investigation. John D. McKinnon, Ryan Tracy and Brent Kendall report:
The state and federal investigations have given considerable focus to Google’s powerful position in the lucrative market for online advertising. The company’s dominant position in online search and possible anticompetitive behavior by Google in its Android mobile operating system have also drawn scrutiny, according to the people familiar with the matter.
The planned meeting is likely to include discussions on those issues, the scope of the probes and the best division of labor as the investigations move forward, some of the people said.
In an extraordinary back-and-forth between a president and a congressman on Twitter, President Trump called Representative Adam Schiff, the lead House impeachment manager, “a CORRUPT POLITICIAN, and probably a very sick man,” warning, “He has not paid the price, yet, for what he has done to our Country!” The comments sparked controversy around what Twitter considers a threat. (Sheryl Gay Stolberg / The New York Times)
Hillary Clinton said Facebook has traded moral accountability for commercial gain. She added that Zuckerberg has been persuaded “that it’s to his and Facebook’s advantage not to cross Trump. That’s what I believe. And it just gives me a pit in my stomach.” (Adrienne LaFrance / The Atlantic)
Bernie Sanders supporters are mass-posting angry memes about his Democratic rivals on Facebook. The volume and viciousness of the attacks reflect how Facebook rewards emotionally charged content to generate reactions from its users. (Craig Timberg and Isaac Stanley-Becker / The Washington Post)
Here’s where all the US presidential candidates currently stand on breaking up Big Tech. A useful guide if you’re just catching up. (Elizabeth Culliford / Reuters)
Coordinated disinformation campaigns and deepfakes are making it harder to deal with existential threats like nuclear war and climate change, according to the Bulletin of the Atomic Scientists. These concerns prompted the group to move the Doomsday Clock to 100 seconds to midnight, a metaphor for the global apocalypse. (Joseph Marks / The Washington Post)
Teens are using TikTok to post memes and comedy about the Australian bushfires. The phenomenon shows how even a platform determined to avoid politics can find itself in the center of debate. (Rebecca Jennings / Vox.com)
More than 350 Amazon employees violated the company’s communications policy to talk about climate policy, Amazon’s work with federal agencies, and its attempts to stifle dissent. They published their remarks on Medium. It’s the latest sign of worker unrest at tech giants spilling over into public view. (Jay Greene / The Washington Post)
The Jeff Bezos phone hacking scandal has cast an unflattering light on the swiftly growing and highly secretive cottage industry of software developers specializing in digital surveillance. NSO Group, the surveillance firm implicated in the recent WhatsApp hacks, is one of the more notorious companies that operate in this space. (Ryan Gallagher / Bloomberg)
Related: federal prosecutors have evidence indicating that Jeff Bezos’s girlfriend, Lauren Sanchez, sent text messages to her brother that he subsequently sold to the National Enquirer. The tabloid then published a story about the Amazon founder’s affair with Sanchez. (Joe Palazzolo and Corinne Ramey / The Wall Street Journal)
Fake news stories originally published as political satire are being copied and reposted as genuine news, then shared to large audiences on Facebook. The articles published by AJUAnews.com and similar websites include death hoaxes about celebrities and made-up stories about Democratic congresswomen wanting to slash entitlement programs. (Daniel Funke / PolitiFact)
A brigade of paratroopers deployed to the Middle East in the wake of mounting tensions with Iran have been asked to use Signal and Wickr on government-issued cell phones. The use of these commercially available encrypted messaging apps raises questions as to whether the Department of Defense is scrambling to patch potential security vulnerabilities. (Shawn Snow, Kyle Rempfer and Meghann Myers / Military Times)
Tech CEOs in Davos are dodging tough questions about election interference and misinformation by warning about artificial intelligence. They’re calling for standardized rules to govern the technology. (Amy Thomson and Natalia Drozdiak / Bloomberg)
A lawsuit challenging the constitutionality of FOSTA, a federal law that has driven marginalized communities and speech about sex and sex work offline, was reinstated. A federal judge had previously dismissed the case. Now, an appeals court has reversed that decision, signaling that the statute could pose a substantial threat to free speech. This is a good thing. (Electronic Frontier Foundation)
The World Economic Forum is flooding the internet with bad videos designed to convey the impression that billionaires care about inequality and climate change. “The videos feature a few boxes of text slapped across the screen, a few close-ups of people, a few wide-shots of landscapes of crowds, state the problem then offer solutions then a call to urgency — we get it.” (Edward Ongweso Jr / Vice)
Industry
The documents, from a subsidiary of the antivirus giant Avast called Jumpshot, shine new light on the secretive sale and supply chain of peoples’ internet browsing histories. They show that the Avast antivirus program installed on a person’s computer collects data, and that Jumpshot repackages it into various different products that are then sold to many of the largest companies in the world. Some past, present, and potential clients include Google, Yelp, Microsoft, McKinsey, Pepsi, Sephora, Home Depot, Condé Nast, Intuit, and many others. Some clients paid millions of dollars for products that include a so-called “All Clicks Feed,” which can track user behavior, clicks, and movement across websites in highly precise detail.
Two years after Vine’s co-founder Dom Hofmann announced he was building a successor to the short-form video app, Byte made its debut on iOS and Android. The new app lets users shoot or upload and then share six-second videos. The tiny time limit necessitates no-filler content that’s denser than the maximum 1-minute clips on TikTok. (Josh Constine / TechCrunch)
Google suggests “husband” after searches for women’s names more often than it suggests “wife” for men. The company says results reflect what people are actually searching for. (Katie Notopoulos / BuzzFeed)
This photographer has become an influencer by shooting social media stars, including many of the most famous people on TikTok. Landing a photoshoot with him has now become one of the markers of viral fame. (Taylor Lorenz / The New York Times)
And finally ...
Alan Dershowitz Argues for Trump Cold Open - SNL
There’s a familiar face at the end of this week’s cold open on Saturday Night Live. It seems that hell has a new I.T. guy …
Talk to us
Send us tips, comments, questions, and your suggestions for helping content moderators: casey@theverge.com and zoe@theverge.com.
Powered by Revue