Scott Alexander has
a fascinating blog post on teaching
GPT-2, OpenAI’s text-generating language model (which we’ve
discussed before), to play chess. The model doesn’t have a concept of a chess board or of the rules; it has learned to play entirely by reading millions of games written out in chess notation. It consequently makes a lot of mistakes, but it’s still a striking result.
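To make the setup concrete, here is a minimal sketch of chess as pure text prediction, assuming the Hugging Face transformers library. It uses the stock gpt2 checkpoint rather than whatever fine-tuned model the post describes, so its continuations will mostly be nonsense, but the mechanics are the same: the model only ever sees the game so far as a string of moves and is asked to continue the string.

```python
# Minimal sketch of "chess as text prediction" (not the exact setup from the post):
# the base gpt2 checkpoint has not been fine-tuned on games, so its output will
# mostly be nonsense, but the mechanics are identical.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The game so far, written in standard algebraic notation.
game_so_far = "1. e4 e5 2. Nf3 Nc6 3. Bb5 a6 4."

# Ask the model to continue the text; whatever tokens it emits next are read
# off as White's fourth move. There is no board state anywhere in this code.
completion = generator(game_so_far, max_new_tokens=8, do_sample=True)
print(completion[0]["generated_text"])
```

The point of the sketch is simply that nothing in this loop knows what a bishop is; any apparent chess knowledge has to come from patterns in the training text.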
How much should we be impressed by this? As Alexander notes, it’s hard to say. Gary Marcus, a cognitive scientist and noted deep-learning skeptic,
is, well, skeptical. But even if you don’t think this is a meaningful AI milestone, there’s lots to ponder here.
Matt Levine’s take on the implications for finance made me think. Levine notes that “black box” trading algorithms are criticised for their opacity, and facetiously suggests a new approach: train GPT-2 on written stock recommendations and have it write the human-legible investment memo for you.
The joke, of course, is that while such a memo might sound plausible, the reasons it gives for investing in a stock wouldn’t be the reasons the model actually picked that stock, in any causal sense. We’d have “explainable AI” with spurious explanations. But is that so different from the human mind? Dan Sperber and Hugo Mercier’s excellent book,
The Enigma of Reason, suggests the primary evolutionary purpose of human reason is not to make better decisions, but to convincingly justify decisions after the fact. Perhaps in the space of
possible minds, GPT-2 is closer to us than to something truly alien.