
TiB 128: AI as an arms race; the depressing truth about organisations; scaling machine learning; and more...

August 18 · Issue #128
Matt's Thoughts In Between
This week: What happens if we can just keep scaling machine learning models; on AI development as a geopolitical arms race; the depressing truth about organisations; and more…

Welcome, new readers! Forwarded this email? Subscribe here. Enjoy this email? Forward it to a friend.
The second order effects of machine learning progress
What’s holding back progress in AI? One of the features of GPT-3, OpenAI’s natural language processing (NLP) model, that seems particularly important (and which we’ve discussed before) is how well this kind of model appears to scale. GPT-3 didn’t represent a technical breakthrough relative to GPT-2 so much as an engineering one: it’s a much larger model - 175 billion parameters versus GPT-2’s 1.5 billion - and the results are much better.
Pseudonymous investment analyst Mule has an excellent new post on this theme. One interesting question is, if all that’s required to reach new heights of achievement in NLP is more computational power (“compute”), why haven’t we done it already? Mule links to a fascinating post (the comments are excellent too) that argues that the answer may simply be that until GPT-3 was released no one realised it was possible. But now that they do, we might expect very rapid progress, especially from the tech giants, for whom the required capex is a rounding error.
Mule suggests this has important second-order implications. The rise of models like GPT-3 (and its successors) is an extraordinary demand shock for compute; demand may increase by orders of magnitude over the next decade. Mule sees a world where semiconductors are >1% of global GDP. Some companies are poised to benefit, but it will also make the geopolitics of semiconductors, which we’ve discussed recently, even more important.
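To make the scale jump concrete, here is a back-of-envelope sketch (not from the post itself) using the common ~6 × parameters × tokens rule of thumb for transformer training compute; the parameter counts are the published figures and the token count is the approximate figure from the GPT-3 paper:

```python
# Back-of-envelope: GPT-3 as an engineering leap rather than an algorithmic one.
# Rule of thumb: training FLOPs ~= 6 * parameters * training tokens.

gpt2_params = 1.5e9   # GPT-2 parameter count (published figure)
gpt3_params = 175e9   # GPT-3 parameter count (published figure)
gpt3_tokens = 300e9   # approximate tokens seen in training, per the GPT-3 paper

scale_up = gpt3_params / gpt2_params            # ~117x more parameters
train_flops = 6 * gpt3_params * gpt3_tokens     # ~3e23 FLOPs

print(f"Parameter scale-up: ~{scale_up:.0f}x")
print(f"Estimated training compute: ~{train_flops:.2e} FLOPs")
```

If demand for models at this scale grows by even one further order of magnitude, it is easy to see why Mule expects a structural demand shock for semiconductors.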
Is AI development a geopolitical "arms race"?
Georgetown’s Center for Security and Emerging Technology (CSET) has a new report on a favourite TiB topic - the framing of AI development as a geopolitical competition. CSET’s team looked at 4,000 English-language articles published since 2012 and examined how they use the “arms race” rhetorical framing to discuss countries’ approaches to investing in AI R&D.
The findings are interesting. The “arms race” framing accelerated rapidly from 2012, but peaked in 2015. Use of the metaphor varies by sector and country too. In the US, the tech and (perhaps unsurprisingly) defence sectors are most likely to talk of an arms race; among governments, it’s France and Russia who most commonly use this language.
Why does this matter? As CSET notes, arms race framing may make AI practitioners and regulators less likely to invest in collaboration and safety. More broadly, rhetorical framing affects behaviour, particularly among elites who lack familiarity with the underlying technical material, as politicians typically do with AI. The superb Jeff Ding has an excellent piece from a year ago on how this phenomenon shapes China analysis more generally. As he argues, the metaphors that “knowledge gatekeepers” use shape how we think, so it’s important to understand (and challenge) them.
Don't learn too much about your organisation!
Most of us work in organisations, and it’s a convenient belief - both for the organisation and its members - to think of them as well-ordered and legible. What happens when we’re forced to interrogate this belief and confront reality? Ruthanne Huising has a fascinating paper on this question (discovered via Ethan Mollick) - or see her accessible write-up in HBR.
Huising ran an experiment where she got high performers in general management roles to undertake projects to redesign operations for their organisation as a whole. She found that the participants quickly realised that their organisations, far from being planned and ordered, were more like emergent phenomena. As Huising says, this realisation turned out to be both empowering and alienating - and had long-lasting effects.
Half the participants later chose to move out of general management and into organisational change roles, citing the appeal of “inventing the board, not just playing the game”. But many also referred to the depressing realisation that no one was actually in charge and that their agency was limited. There is a connection here to David Graeber’s idea of “bullshit jobs”, but I wonder if this also contributes to the popularity of entrepreneurship? If you’ve had a negative reaction to large organisations, the (illusion of the?) chance to start again with a blank sheet is enticing.
Quick links
  1. Game, set, match? A machine learning-generated Wimbledon. Striking video.
  2. Bargain hunting. Excellent thread on which assets are cheap and expensive right now.
  3. Bargain hunting 2? The curious case of London’s low tech salaries.
  4. Vive la révolution? Interesting comment thread on the most “successful” coups in history.
  5. F*** the algorithm. A glimpse of the future of the politics of machine learning, captured in a short video.
Your feedback
Thanks for reading - and a special thank you to everyone who gave feedback on the murder mystery games I shared last week. I look forward to hearing more when you’ve had a chance to play them…
If you enjoy Thoughts in Between, it’s easy and free to support it: forward this to a friend or, even better, share it on Twitter or Facebook.
Lots of newsletters get stuck in Gmail’s Promotions tab. If you find it in there, please help train the algorithm by dragging it to Primary.
I’m always happy to hear from readers - feel free to hit reply or message me on Twitter.
Until next week,
Matt Clifford
Powered by Revue