Georgetown’s Center for Security and Emerging Technology (CSET) has a new report on a favourite TiB topic - the framing of AI development as a geopolitical competition. CSET’s team looked at 4,000 English-language articles published since 2012 and examined how they use the “arms race” rhetorical framing to discuss countries’ approaches to investing in AI R&D.
The findings are interesting. The “arms race” framing accelerated rapidly from 2012, but peaked in 2015. Use of the metaphor varies by sector and country too. In the US, the tech and (perhaps unsurprisingly) defence sectors are most likely to talk of an arms race; among governments, it’s France and Russia who most commonly use this language.
Why does this matter? As CSET notes, arms race framing may make AI practitioners and regulators less likely to invest in collaboration and safety. More broadly, rhetorical framing affects behaviour, particularly among elites who lack familiarity with the underlying technical material, as politicians typically do with AI. The superb Jeff Ding has an excellent piece from a year ago on how this phenomenon shapes China analysis more generally. As he argues, the metaphors that “knowledge gatekeepers” use matter, so it’s worth understanding (and challenging) them.