Jack Clark (whose ImportAI newsletter is a must-read) and Gillian Hadfield of OpenAI have an interesting new paper
on creating new regulatory markets for AI safety. Overall, I don't buy the idea, but it's a stimulating read on an important topic. The premise is that we live in a world increasingly dominated by machine learning, but existing government regulators lack the resources and technical knowledge to police AI companies effectively.
Clark and Hadfield propose creating a market for private regulators: governments would determine the desired outcomes; the private regulators would be free to develop inspection and enforcement mechanisms; and AI companies could shop around to choose their regulator (but could not opt to be unregulated!). The argument is above all about talent: they believe their proposal could draw great technical talent into the regulatory sphere, which they see as necessary for effective regulation in such a complex space.
That's persuasive, but, as the paper acknowledges, plenty of other problems remain. It presents several good case studies of private regulation, from the UK Legal Services Board (an interesting, but as yet unfulfilled, attempt to create regulatory competition) to the credit rating agencies (whose role in the 2008 financial crisis was broadly disastrous). But if private regulators have to compete for business, isn't the incentive to make life easy for the "customer" while obscuring systemic risk? And if governments lack the knowledge to regulate AI, they may also struggle to spot AI regulatory capture. "Who regulates the regulators?", as Juvenal didn't say.