OpenAI, the AI research company, released a new unsupervised machine learning model that can generate impressively human-sounding paragraphs of text (good semi-technical discussion here). It’s an interesting contribution if you’re an AI researcher, but what made it mainstream news was OpenAI’s claim that the full model was too dangerous to release publicly.
OpenAI’s CTO shared one example which, perhaps, illustrates the potential for “malicious applications” (see also). Nevertheless, there’s been a good deal of criticism and debate from within and without the AI community, and some prominent researchers have accused OpenAI of whipping up unhelpful hysteria. Zachary Lipton wrote a fairly searing critique here, as did Stephen Merity. Here, though, is a useful defence from Joshua Achiam of OpenAI.
I remember that when OpenAI launched in December 2015, Scott Alexander published an excellent and pessimistic essay on its risks. The part that worried him most was the “open” bit:
Elon Musk famously said that AIs are “potentially more dangerous than nukes”. He’s right – so AI probably shouldn’t be open source any more than nukes should.
It’s interesting to see OpenAI change tack on this, as Alexander predicted and hoped. Even if you see this as a publicity stunt, it’s a new milestone in public AI discourse and one worth pondering.