Angwin: I wanted to start with a big picture question—right now there’s a narrative in D.C. that there’s a tech cold war with China. Do you agree with that story line?
Ahmed: I disagree, but I do see how the actions of the U.S. government, the Chinese government, and certainly some tech companies on both sides are creating a self-fulfilling prophecy of a tech cold war. I disagree with it as a framing of what we're going through right now, and I think it's never too late to back away from this narrative.
The United States and China are extremely interdependent economically and, to an extent, even politically. Proposals for things like economic decoupling from China would be extraordinarily difficult to carry out and harmful to both countries. Out of fear of China overtaking the United States and becoming economically and politically dominant, we are falling back on narratives that have been around for a long time to argue, to the world and domestically, that everyone will be worse off if China becomes the dominant global superpower. The Cold War analogy also benefits big tech companies by allowing them to deflect being regulated.
Angwin: Can you elaborate on how this narrative enables tech companies to avoid regulation?
Ahmed: To sink us right into the research, I’ve been working on an academic paper with my colleagues at the AI Now Institute, Lucy Suchman and Nantina Vgontzas, that is basically asking, “What outcomes does the tech cold war narrative produce out in the world?” We ended up focusing on three areas: tech policy, military AI, and labor.
For the tech policy piece, we look at things like all of the times tech executives such as Mark Zuckerberg were on the Hill and used what people are now calling the “China defense,” saying, “You know, if you regulate us, China wins the global ‘values war.’ So you really shouldn’t regulate Facebook and, by extension, other big tech companies.” And it’s easy for policymakers who are not following these extremely niche and complicated issues to sort of wonder, “Maybe he’s right, maybe if we do that, China wins,” without stepping back and asking what he means by a values war or if there is merit to his argument. We slip back into this Cold War mentality of communism versus capitalism, which is not the fight we’re in right now.
Others are using the threat of China’s military developments in AI and autonomous weapons as a diversion to avoid conversations about the ethics of parallel advancements in the U.S. The perception that the Chinese military will have autonomous weapons exceeding U.S. military capabilities very quickly is preventing any public deliberation about whether we even want autonomous warfare. Is there a way to create ethical warfighting systems?
Angwin: China has been enacting measures to rein in Big Tech, and some of its policies seem almost progressive. How would you describe what’s going on in China right now?
Ahmed: I always go back to the question of how this will legitimize the state's power and image. I can never really call it progressive. I've seen plenty of coverage about how newly proposed regulations for algorithms are so much more forward-thinking than anything coming out of the U.S. or Europe, but I think those analyses miss that this is still pretty experimental and reactive. We have yet to see whether any of it will be codified into law or how it might be enforced.
For example, the Chinese public has been angry about price discrimination there. Users on a variety of platforms—ride-hailing, hotel booking, and e-commerce—have done small experiments and realized, “Hey, my friend and I each ordered a car, and it turns out that I had to pay more for my taxi than they did. The companies might be using our data profiles to determine our willingness to pay and raise prices on us.” Despite this, you have companies denying that this is the case.
The first of two recent documents on regulating algorithms that came out of the Cyberspace Administration of China specifically said that companies shouldn’t be allowed to use data to profile people for differential pricing. But that’s different from an auditing mechanism, or a way of documenting that this really is happening on these platforms, and then prosecuting them for it. Companies might continue to have plausible deniability until they are required to hand over data, or essentially explain the processes through which they price their products and services.
Angwin: As you mentioned, powerful players in the tech industry have emphasized the need to “win” this supposed tech race against China. Has this influenced the policies our government has chosen to pursue?
Ahmed: It’s been interesting to follow self-styled policy entrepreneurs who came out of the tech business world, like [former Google CEO] Eric Schmidt. He was part of the National Security Commission on AI, which has since ended, and now his reincarnation is leading the Special Competitive Studies Project.
Schmidt, Henry Kissinger, and Daniel Huttenlocher just came out with a book, The Age of AI, which makes a sweeping argument about a future in which we need to be in control of AI. It’s the same talking points Schmidt has been peddling for years: that a small group of technocratic elites has to make decisions about AI for their governments if we are to avert the worst possible results. There’s no talk of any kind of public consultation. If anything, there’s disdain for collective-movement-led thinking about the technological future we want. One of the biggest contradictions here is that the book allegedly envisions a world where AI is democratically governed, even as it dismisses public input. There’s also an omission of so many years of research documenting the harms of AI technologies. All of that is swept under the rug in this book.
And it’s just unfortunate how much you can do with fear-mongering, given the general public’s and policymakers’ limited understanding of technology, to scare people into thinking, “Yes, we should just listen to this small, select set of technocratic elites, and our opinions about how technology should be used in our everyday lives don’t matter.”
I’m not saying there are no threats in any of these areas, or that the security concerns driving the tech cold war framing are completely made up, but the responses we have seen from the U.S. government so far have not been very careful. It’s kind of ironic how the “move fast and break things” mantra has been applied as a policy philosophy here. It’s a consequence of overemphasizing the threat and treating everything like it’s part of the threat, rather than thinking carefully about the long-term repercussions of the choices we are making now.