Angwin: Facebook whistleblower Frances Haugen has brought to the public’s mind this idea that algorithmic amplification is dangerous. And so I wanted to just start with a very simple question: What is the danger of algorithmic amplification?
Kornbluh: Frances Haugen helped shift this debate from content—what content is bad, who decides, who takes it down—to the business model of the platforms, the processes of the platforms, and the fact that they are not neutral.
I think there has long been this idea that the internet or social media holds up a mirror and people aren’t all good. But her data showed that the algorithms, which are tuned to keep you online, are also often amplifying things that are agitating, and they are increasing that agitation.
The great anecdote she told was about European political leaders coming to talk to Facebook and saying, “We’ve noticed now, with the change in your algorithm in 2018, that if we don’t have incendiary content, it gets no distribution. So you’re forcing us to be more incendiary, to prey on people’s fears and anger.” That was really remarkable, and it has really shifted the debate in important ways.
Angwin: So how do we fix these algorithmic amplification problems?
Kornbluh: I think the first thing that the whistleblower has shown us is that we need transparency, transparency, transparency. And it’s not enough to make data available to researchers. That is necessary but not sufficient.
What other industry do we have that is this important where regulators do not have access to data? If a plane crashes, there is a black box flight data recorder that the [National] Transportation Safety Board gets to look at to see what went wrong and determine if we need to change safety protocols in airplanes. The EPA gets data on polluters. The FDA gets data on drug manufacturers.
But the only reason we know that in 2016 the Russians targeted African Americans on Facebook was because the Senate Intelligence Committee made the platform give up data to a bunch of researchers.
The Facebook Files showed that even though the platforms publish terms of service, 5.8 million people on Facebook, some with the largest footprints on the platform, aren’t required to follow those terms. Some of the research she revealed showed that, according to Facebook itself, it catches only 3 to 5 percent of the hate speech that violates its own terms of service.
We shouldn’t have to rely on leaks of their own analysis of what’s happening on their platform. The regulators themselves need to be able to get access to some data. It’s not acceptable that we get the numerator but not the denominator: we’re told how much harmful content that violates the terms of service is taken down, but not how much of it there is overall.
Angwin: Last week, House Democrats proposed legislation, the Justice Against Malicious Algorithms Act, that aims to regulate algorithms. The bill revokes the immunity given to social media companies under Section 230 of the Communications Decency Act of 1996 if they “recklessly made a personalized recommendation” and the recommendation “materially contributed to a physical or severe emotional injury to any person.” Do you think this proposal could help?
Kornbluh: After the whistleblower’s testimony, a bunch of people said, “Oh, it’s just like the tobacco industry: they had the research, they were negligent, let’s sue them.” But the solution people took with the tobacco industry can’t be taken here, because you can’t sue the platforms.
This bill just means they can be sued if there’s physical harm or emotional harm, and they knew about it or should have known about it. Because the First Amendment is so strong in the U.S., not that many people win libel cases. So it’s not obvious that these [lawsuits] will work. But it puts the platforms on notice that if their algorithms are involved, and they knew or should have known and it caused real harm, then they can be sued and held liable.
Angwin: Amending Section 230 isn’t the only way to address algorithmic amplification. You have proposed some other options. Can you explain your proposals?
Kornbluh: Ellen Goodman and I have written a roadmap for regulating the internet. The three buckets that we talk about are: What could regulators do through legislation or just using their own authority? What do you need the platforms to do through a code of conduct? And what do we do to amplify important civic content?
When it comes to the internet, we’ve sort of forgotten how to regulate, or what regulation even is, and self-regulation has come to mean that every platform just does what it wants, in secret, with no transparency.
But the agencies can exercise the authority that they have. The FTC could start an investigation. The Consumer Financial Protection Bureau just started an investigation into the social media companies and their financial products. The DOJ could be looking into some of the civil rights issues, the gender-based violence that happens online. There is a bunch of regulatory persuasion and direct regulatory authority that could be deployed.
Of course, the FTC is so outgunned. They don’t have enough money, they don’t have enough staff, they don’t have enough expertise, and the technology is moving really fast. That’s why we come down to the need for some kind of digital code of conduct, to get the platforms to do some of this work themselves under the regulatory supervision of the FTC. The industry could get together and spell out what steps would be good enough to not be considered negligent, and then that could be used as a safe harbor. The industry would work out the details, but the regulators would decide if it’s enough and audit to make sure they’re complying.
Originally we called for a new agency because we felt there needed to be something that wasn’t limited to focusing only on consumer harms, because these harms are broader. However, we’ve sort of moved away from that because it became a bit of a distraction. People were seeing it as a ministry of truth.
And finally, how do we amplify civic content? After the Second World War, PBS and NPR were created, and outlets like the AP operated, with this public interest mission, because everyone saw how fascism had been powered by propaganda on centralized media. We feel like there needs to be a platform, not to replace PBS, NPR, and AP, but to help those kinds of outlets that follow journalistic standards, and the frontline information providers, the scientists, figure out how to amplify their work on the internet.