December 2 · Issue #422
On Sunday evening, 60 Minutes correspondent Lesley Stahl sat down with YouTube CEO Susan Wojcicki to voice a now-familiar critique: YouTube allows too many dangerous and disturbing videos to remain on the site. She brings up a distorted video of Rep. Nancy Pelosi that falsely depicts her as drunk; altered copies of the Christchurch shooting video; quack science; and misleading political ads, among other questionable videos found on the site. It leads to the following exchange:

Lesley Stahl: The struggle for Wojcicki is policing the site while keeping YouTube an open platform.

Susan Wojcicki: You can go too far and that can become censorship. And so we have been working really hard to figure out what’s the right way to balance responsibility with freedom of speech.

Stahl: But the private sector is not legally beholden to the First Amendment.

As it so happens, some countries are trying to make tech platforms legally beholden to police speech according to national laws. One of them is Singapore, where in October a new law went into effect with the stated purpose of fighting “fake news.” James Griffiths wrote about the law for CNN:

Under the Protection from Online Falsehoods and Manipulation Bill, it is now illegal to spread “false statements of fact” under circumstances in which that information is deemed “prejudicial” to Singapore’s security, public safety, “public tranquility,” or to the “friendly relations of Singapore with other countries,” among numerous other topics.

Government ministers can decide whether to order something deemed fake news to be taken down, or for a correction to be put up alongside it. They can also order technology companies such as Facebook and Google — both of which opposed the bill during its fast-tracked process through parliament — to block accounts or sites spreading false information.

Those government ministers wasted little time in enforcing that law, taking action twice in the past week. And if you had to guess, what type of social media post would spur them into action the fastest? Would it be a post that spread hate speech or promoted violence? Would it be a post that spread harmful misinformation, such as a false election date intended to mislead voters? Or would it be a post that criticized the government?

If you guessed No. 3, then you’ve been paying attention to the arguments that every critic of this law has made since it was first proposed. Here’s Griffiths again, from Saturday:

One offending item was a Facebook post by an opposition politician that questioned the governance of the city-state’s sovereign wealth funds and some of their investment decisions. The other post was published by an Australia-based blog that claimed police had arrested a “whistleblower” who “exposed” a political candidate’s religious affiliations.

In both cases, Singapore officials ordered the accused to include the government’s rebuttal at the top of their posts. The government announcements were accompanied by screenshots of the original posts with the word “FALSE” stamped in giant letters across them.

Facebook said on Saturday it had issued a correction notice on a user’s post at the request of the Singapore government, but called for a measured approach to the implementation of a new “fake news” law in the city-state. “Facebook is legally required to tell you that the Singapore government says this post has false information,” said the notice, which is visible only to Singapore users.
It’s hard to think of a more dismissive way of phrasing that, short of maybe describing the Singapore government as a sniveling mosh pit of baby clowns. But that description would also presumably be in violation of the Protection from Online Falsehoods and Manipulation Bill.

Last week, Sacha Baron Cohen made the case — although not in so many words — that the United States needs its own version of Singapore’s law. Like Stahl, he questioned the value of Section 230 of the Communications Decency Act. And he suggested that tech platforms should be held liable for what their users post. He did so out of legitimate concern over the dangerous misinformation and hate speech that really does spread on these platforms — and out of frustration that they are currently not held accountable for any of it.

But the lesson of Singapore is that the fake-news law you want probably won’t be used in the way that you want. In fact, it may be used in ways that you don’t want at all! Granted, just because one country implemented a law this way doesn’t mean that Western democracies will. But if you think that they won’t … why, exactly?

In the United States, the First Amendment may offer some protections to average citizens who want to criticize their government online. Others won’t be as lucky. And as the FOSTA-SESTA debacle showed, even the United States is not immune to terrible consequences from noble-sounding speech regulation. As the debate over Section 230 rages on, that’s something we ought to keep in mind.
In our last edition, I wrote somewhat flippantly that “YouTube has had such a rough year that I struggled to come up with a major product or policy win.” YouTube wrote in to say, not unfairly, that it has indeed had some wins this year. Among them:

Just a few examples: our updated hate speech policy, which resulted in not just thousands of accounts coming down at launch, but 5x spikes in video removals and channel terminations; a reduction in our violative view rate by 80% over the past 18 months; changes to the way recommendations work, resulting in a 50% drop in watchtime on borderline content in the US (and that # is about to go up); a suite of tools that is helping creators successfully diversify their revenue streams; and improvements to the way copyright claims work, solving a top pain point for creators.

One reason I think that some of these moves haven’t resonated is that they feel so abstract. If YouTube has taken down five times as many videos this year, how much of the problem is solved? How much is left to go? It all still feels quite mysterious. Still, incremental progress is the actual way that most big tech problems get solved. So: point taken.
Today in news that could affect public perception of the big tech platforms.
TikTok, however, said it had penalized her not for her comments about China, but rather for a video she had shared earlier — a short clip, posted to a different account, that included a photo of Osama bin Laden. Aziz’s video violated the company’s policies against terrorist content, TikTok said, so the company took action against her device, making any of her other accounts unavailable on that device. TikTok said her videos about China did not violate its rules, had not been removed and had been viewed more than a million times.

But the video in question — a copy of which she shared with The Post — actually was a comedic video about dating that the company had misinterpreted as terrorism, Aziz said.

By Wednesday evening, TikTok had reversed course: The company said it restored her ability to access her account on her personal device. TikTok also acknowledged that her video about China had been removed for 50 minutes on Wednesday morning, which it attributed to a “human moderation error.”
For nearly a decade, its flagship website, Match, has issued statements and signed agreements promising to protect users from sexual predators. The site has a policy of screening customers against government sex offender registries. But over this same period, as Match evolved into the publicly traded Match Group and bought its competitors, the company hasn’t extended this practice across its platforms — including Plenty of Fish, its second most popular dating app. The lack of a uniform policy allows convicted and accused perpetrators to access Match Group apps and leaves users vulnerable to sexual assault, a 16-month investigation by Columbia Journalism Investigations found.

Match first agreed to screen for registered sex offenders in 2011, after Carole Markin made it her mission to improve its safety practices. The site had connected her with a six-time convicted rapist who, she told police, had raped her on their second date. Markin sued the company to push for regular registry checks. The Harvard-educated entertainment executive held a high-profile press conference to unveil her lawsuit. Within months, Match’s lawyers told the judge that “a screening process has been initiated,” records show. After the settlement, the company’s attorneys declared the site was “checking subscribers against state and national sex offender registries.”
New on Instagram: messy bedrooms. “People aren’t interested in seeing this perfectly curated grid. It’s about giving yourself permission to be a little bit more human,” said one writer quoted in this article. (Julie Vadnal / Elle)
Jack Dorsey (@jack): Sad to be leaving the continent…for now. Africa will define the future (especially the bitcoin one!). Not sure where yet, but I’ll be living here for 3-6 months mid 2020. Grateful I was able to experience a small part. 🌍 https://t.co/9VqgbhCXWd

11:39 AM - 27 Nov 2019