February 12 · Issue #81
|
The piece suggested that Facebook’s Trending team worked like a Fox News fever dream, with a bunch of biased curators “injecting” liberal stories and “blacklisting” conservative ones. Within a few hours the piece popped onto half a dozen highly trafficked tech and politics websites, including Drudge Report and Breitbart News. The post went viral, but the ensuing battle over Trending Topics did more than just dominate a few news cycles. In ways that are only fully visible now, it set the stage for the most tumultuous two years of Facebook’s existence—triggering a chain of events that would distract and confuse the company while larger disasters began to engulf it.

Wired’s story dominated discussion online today, and for good reason. If the basic arc of the story is familiar — particularly to readers of The Interface — it’s stuffed with new details about how external events have played out inside Facebook. (The best is an account of how Facebook set up a meeting of conservative blowhards in the wake of Gizmodo’s story in hopes that they would turn on each other and decide that Facebook should exert as little editorial control over the platform as possible; the blowhards proceeded to do just that.)

And yet I’m struck by how, in retrospect, the story that helped to trigger our current anxieties had the problem exactly wrong. The story offered a dire warning that Facebook exerted too much editorial control, in the one narrow section of the site where it actually employed human editors, when in fact the problem underlying our global misinformation crisis is that it exerted too little. Gizmodo’s story further declared that Facebook had become hostile to conservative viewpoints when in fact conservative viewpoints — and conservative hoaxes — were thriving across the platform. Last month, NewsWhip published a list of the most-engaged publishers on Facebook. The no. 1 publisher posted more than 49,000 times in December alone, earning 21 million likes, comments, and shares.
That publisher was Fox News. And the idea that Facebook suppresses the sharing of conservative news now seems very quaint indeed.
|
|
Inside Facebook's Hellish Two Years—and Mark Zuckerberg's Struggle to Fix It All
Seriously, read the whole thing! Among other things, it furthers the idea that “time well spent” is the next big battle in tech, which we told you about last month.
|
|
Facebook broke German privacy laws, court rules
This seems like a big deal. A German court ruled last month that Facebook’s default privacy settings don’t do enough to protect users. This includes requiring users to use their real names on Facebook — a foundational (and controversial) aspect of the service. If Facebook can’t require real names, that could have implications for everything from fake news (easier to create a dummy account?) to advertising (targeting becomes less accurate?).

The court agreed that Facebook had not done enough to alert people to the fact that it had pre-ticked several privacy settings. These included an option to share their location with the person they were chatting to, and agreement that Google and other sites could show links to their profiles in search results. In addition, the court ruled that a requirement that users provide their real names was unlawful. It also decided that the social network needed to gain more explicit consent before it could use members’ names and profile pictures in commercial and sponsored materials.
|
He Predicted The 2016 Fake News Crisis. Now He's Worried About An Information Apocalypse.
Charlie Warzel takes the current anxiety around deepfakes and projects it into the future, where he finds lots of good reasons to worry. I’m also worried, but not panicked: the rise of Photoshop could have triggered an “Infocalypse,” and yet we’ve been able to discern fact from fiction in the images we see. Still:

We’re closer than one might think to a potential “Infocalypse.” Already available tools for audio and video manipulation have begun to look like a potential fake news Manhattan Project. In the murky corners of the internet, people have begun using machine learning algorithms and open-source software to easily create pornographic videos that realistically superimpose the faces of celebrities — or anyone, for that matter — on the adult actors’ bodies. At institutions like Stanford, technologists have built programs that combine and mix recorded video footage with real-time face tracking to manipulate video. Similarly, at the University of Washington computer scientists successfully built a program capable of “turning audio clips into a realistic, lip-synced video of the person speaking those words.” As proof of concept, both teams manipulated broadcast video to make world leaders appear to say things they never actually said.
|
Liberals And Conservatives Are Being Fooled By Conspiracy Theories About The Russian Plane Crash
And speaking of the Infocalypse, here’s today’s example of a calamity that immediately generates hoaxes and conspiracy theories of every political stripe. Harvard Prof. Laurence Tribe is sorry for what he tweeted! “What that teaches me is that no matter how plausible something sounds, anything that one does that might give credence to a false and unverified report is dangerous,” he said. “It’s an object lesson about how one can get taken in.”
|
Google Autocomplete Suggestions Are Still Racist, Sexist, and Science-Denying
Not good: Indeed, almost a year after removing the “are jews evil?” prompt, Google search still drags up a range of awful autocomplete suggestions for queries related to gender, race, religion, and Adolf Hitler. Google appears still unable to effectively police results that are offensive, and potentially dangerous—especially on a platform that two billion people rely on for information.
|
Trump Supporters Spread the Majority of Phony News on Social Media
This is one reason why I was always so skeptical of the idea that Facebook suppressed conservative news; right-wing publishers have always thrived on social media. Here’s some new research on the subject from Oxford University’s Computational Propaganda Research Project:

The group’s new findings are based on a study of more than 13,000 Twitter accounts representing politically diverse viewpoints, including just under 2,000 pro-Trump accounts — which were identified by terms like #MAGA on their Twitter profiles and explicitly pro-Trump content they had shared. The Oxford researchers found that those pro-Trump accounts, though comprising less than a sixth of the total, were responsible for 55 percent of the “junk news” tweeted out from all 13,000 accounts during the period from October 20, 2017 to January 18, 2018. The researchers also studied content from more than 47,000 public Facebook pages during the same 90-day period; they determined that about 60 percent of the total “junk news” links were posted by users who appeared to be aligned with the political far right. (The research doesn’t address whether any of these Twitter and Facebook accounts may be controlled by bots or other deceptive online operators.)
|
Brazil's biggest newspaper pulls content from Facebook after algorithm change
Folha de Sao Paulo is Brazil’s biggest newspaper, with more than 6 million followers on the network. Last week it announced it would no longer regularly publish articles there, saying it wasn’t worth the trouble. The paper’s editor, Dávila, blames the recent algorithmic shift to “meaningful content”:

“In effectively banning professional journalism from its pages in favour of personal content and opening space for ‘fake news’ to proliferate, Facebook became inhospitable terrain for those who want to offer quality content like ours,” he said. Dávila said the paper’s move reflected “the declining importance of Facebook to our readers,” but added that the algorithm change had been the deciding factor.
|
|
|
Unilever Threatens to Reduce Ad Spending on Tech Platforms That Don’t Combat Divisive Content
The maker of Lipton Ice Tea and Dove soap has had it with fake news, and now it is threatening to pull back its ad dollars. Even if you assume this is just posturing to extract some sort of concessions from a powerful vendor, as I do, this is still wild!

Unilever is threatening to pull back its advertising from popular tech platforms, including YouTube and Facebook Inc., if they don’t do more to combat the spread of fake news, hate speech and divisive content. “Unilever will not invest in platforms or environments that do not protect our children or which create division in society, and promote anger or hate,” Unilever Chief Marketing Officer Keith Weed is expected to say Monday during the Interactive Advertising Bureau’s annual leadership meeting in Palm Desert, Calif. “We will prioritize investing only in responsible platforms that are committed to creating a positive impact in society,” he will say, according to prepared remarks.
|
YouTube exec addresses Logan Paul controversy and rising creator frustrations
My fellow Casey N., Casey Neistat, scores an interview with YouTube’s Robert Kyncl to talk about the company’s response to Logan Paul and other platform hooligans:

Kyncl — who is responsible for creators and content partners — says YouTube is focused on finding ways for advertisers and content creators to succeed together. That explains some of YouTube’s decision to make its partner program requirements stricter, which now include 4,000 hours of watch time over the past 12 months and at least 1,000 subscribers. “We think that this level is high enough for us to learn about the partner so that we can turn the ads on them and not disappoint advertisers, and at the same time it’s not so far out that it will be untenable and unreachable for [YouTubers],” Kyncl says. “We want as many creators monetizing, but we also want to make sure that everybody who’s monetizing is doing the right thing, and is protected. Because if somebody doesn’t do the right thing in there, and advertisers react in a certain way, then all of you get punished. And that’s not a good outcome.”
|
Facebook lost around 2.8 million U.S. users under 25 last year. 2018 won’t be much better.
The good news is that Facebook is losing users to a platform that Facebook already owns! Despite the expected decline in younger users, eMarketer believes Facebook’s overall U.S. audience will continue to grow for the next few years. More importantly, perhaps, eMarketer expects Facebook-owned Instagram to grow significantly. The research firm believes Instagram’s U.S. user base will grow by 13 percent this year, to almost 105 million people.
|
Facebook patents tech to determine social class
This seems like something Facebook’s advertising tools do in lots of other ways already. But: Facebook’s patent plan for “Socioeconomic Group Classification Based on User Features” uses different data sources and qualifiers to determine whether a user is “working class,” “middle class,” or “upper class.” It uses things like a user’s home ownership status, education, number of gadgets owned, and how much they use the internet, among other factors. If you have one gadget and don’t use the internet much, in Facebook’s eyes you’re probably a poor person. Facebook’s application says the algorithm is intended for use by “third parties to increase awareness about products or services to online system users.” Examples given include corporations and charities.
|
Snap VP of Sales Leaves the Company
Passing through the revolving door at Snap today is Mr. Jeff Lucas: Lucas became the seventh executive to leave the company since its IPO in March of last year. His departure comes less than a month after Tom Conrad, former VP of Product, announced his plans to leave Snap and the tech industry altogether.
|
Snapchat’s New Update Triggers Revolt by Millions of Teens
Lots of vocal users say they hate the Snapchat redesign. One of them tweeted a fake DM interaction with the Snapchat Twitter account in which “Snapchat” said the company would revert to its original design if the poster could get 50,000 retweets. His tweet currently has more than 1.3 million retweets. God bless Taylor Lorenz for tracking this hooligan down and getting him to say that spreading fake news “sends a good message” to Snap. Millennials!

Isaac Svobodny, the 20-year-old in Minnesota who gave voice to the backlash with his viral tweet, said that even though most of the 1.3 million people who retweeted his message knew it was fake and that Snapchat never agreed to revert back for a certain number of retweets, he still views it as a successful way to get the message out and, hopefully, force the company to reconsider. “All we can do is hope,” he told The Daily Beast. “I think this sends a good message to Snapchat. They will see this and see that if there’s over 1.3 million people who aren’t happy with the update they should try to make some modifications.”

“Even though people know it’s fake I think it sends a good message that people are pissed off about it and want the old format back,” he added.
|
|
|
Facebook is pushing its data-tracking Onavo VPN within its main mobile app
Facebook acquired Onavo, which generates valuable data about which apps people are using and for how long, to serve as an early-warning system when a new social app was threatening to outflank it. (Onavo is the reason Facebook cloned Houseparty before most people had ever heard of it, for example.) Now Onavo is getting some limited promotion inside Facebook itself: Onavo Protect, the VPN client from the data-security app maker acquired by Facebook back in 2013, has now popped up in the Facebook app itself, under the banner “Protect” in the navigation menu. Clicking through on “Protect” will redirect Facebook users to the “Onavo Protect – VPN Security” app’s listing on the App Store. We’re currently seeing this option on iOS only, which may indicate it’s more of a test than a full rollout here in the U.S. It’s unclear what percentage of Facebook’s user base is seeing the option, or which markets may have had this listing before, as there’s been little reporting on the feature.
|
Instagram is testing screenshot alerts for stories
NARC: Instagram is testing a feature that will show users when someone else takes a screenshot of their story. Users included in the test are getting a warning.
|
You can now watch Snap Maps on the web
When news breaks, there’s often very good video of it on Snapchat — but it’s somewhat buried in the app. Bringing it to the web will make it easier for average people to find it, but it will be most useful in showcasing it to journalists, who can now link to individual snaps when covering events.
|
|
Facebook has a Big Tobacco Problem
Frederic Filloux picks up the tobacco industry analogy: The comparison seems exaggerated, but parallels do exist. Facebook’s management has a long track record of sheer cynicism. Behind the usual vanilla-coated mottos, “bringing people closer together” and “building community”, lies an implacable machine, built from day one to be addictive, thanks to millions of cleverly arranged filter bubbles. Facebook never sought to be the vector of in-depth knowledge for its users, or a mind-opener to a holistic view of the world. Quite the opposite. It encouraged everyone (news publishers for instance) to produce and distribute the shallowest possible content, loaded with cheap emotion, to stimulate sharing. It fostered the development of cognitive Petri dishes in which people are guarded against any adverse opinion or viewpoint, locking users in an endless feedback loop that has become harmful to democracy. Facebook knew precisely what it was building: a novel social system based on raw impulse, designed to feed an advertising monster that even took advantage of racism and social selectiveness.
|
|
Limiting Your Child's Fire Time: A Guide for Concerned Paleolithic Parents
Rachel Klein takes our smartphone anxieties back to the Paleolithic: According to the most recent cave drawings, children nowadays are using fire more than ever before. And it’s no wonder: fire has many wonderful applications, such as cooking meat, warming the home, and warding off wild animals in the night. We adult Homo erectus, with our enlarged brains and experience of pre-fire days, can moderate our use, but our children—some of whom never lived during a time when you couldn’t simply strike two rocks together for an hour over a pile of dried grass to eventually produce a spark that, with gentle coaxing, might grow into a roaring flame—can have difficulty self-monitoring their interactions with fire.
|
|
Questions? Comments? Huntington Beach recommendations?
|