We talked back in February about the socialist calculation debate of the 1920s and whether “big data plus machine learning” would resolve it in the socialists’ favour. Byrne Hobart has an excellent post this week on this - and on whether Amazon is proof that “true communism has never been tried, because they didn’t have enough RAM”. I don’t think so, for the reasons discussed last time (above all this piece), but it’s a superb essay.
Hobart’s broader point about the incentives of companies like Amazon that are vast enough to approximate whole national economies is fascinating. He introduces a provocative idea - the “altruism quotient”, which he defines as the share of GDP growth that accrues to a company’s market value (for Amazon, he calculates this to be an extraordinary 12%!). The point is that as a company becomes large relative to the economy as a whole, it gains an incentive to grow GDP generally.
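To make the definition concrete, here is a minimal sketch of the ratio as Hobart defines it - the share of a GDP increase that shows up as growth in a company’s market value. The figures below are invented for illustration, not Hobart’s actual inputs:

```python
def altruism_quotient(delta_market_cap: float, delta_gdp: float) -> float:
    """Fraction of a GDP increase captured as market-value growth."""
    return delta_market_cap / delta_gdp

# Hypothetical numbers: GDP grows by $500bn over some period, and the
# company's market cap rises by $60bn over the same period.
aq = altruism_quotient(60e9, 500e9)
print(f"{aq:.0%}")  # prints "12%"
```

On this reading, a 12% quotient means the company captures twelve cents of market value for every extra dollar of GDP - which is why, at sufficient scale, growing the whole economy starts to look like a direct corporate strategy.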
As Hobart points out, this sounds great (hence “altruism quotient”), but it has a dark side. It means Amazon also has an incentive to make as much of the economy measurable (or legible) as possible. Hobart doesn’t expand much on why this might be bad - except for referencing James Scott’s Seeing Like a State - but it reminds me of this superb Paul Christiano piece (my favourite essay on AI safety), which argues that a fully measurable world is a realistic dystopia. Something to ponder as you wait for your next Prime delivery.