You will have read the Future of Life Institute's warning against an autonomous weapons arms race. Predictably, this was picked up as a warning against Terminator-like killer robots. But as we've read in several issues of AED, and as is argued in the singularity link above, we are likely still decades away from the technology maturing sufficiently to pose an existential threat.
However, the argument made by the FLI is more nuanced: the rise of autonomous systems reduces the cost of going to war (in terms of the human costs borne by the aggressors). War or violence will thus become increasingly attractive as a policy option.
From the beginning, the primary interest in nuclear technology was the “inexhaustible supply of energy” … I think there is a reasonable analogy between unlimited amounts of energy and unlimited amounts of intelligence.
I find this analogy useful. It is true that the AI cat is, at some level, out of the bag. Drones will gain more autonomous capabilities, and rules of engagement will change to encompass a perceived increase in the sophistication of the autonomous control logic. And it's also the case that reducing the human and psychological costs of inflicting violence makes it a more appealing tool to use against others.
And we know that supervision, broad transnational agreement, and mindfulness can successfully reduce the proliferation of new weapons technologies. The Nuclear Non-Proliferation Treaty is a good example: it has been broadly successful for 50 years. So keeping the discussion alive and front of mind, as the FLI has done, seems like a reasonable step.
The terrifying scenario of an existential threat is very alluring. But it can also mask other interesting ethical questions raised by the growing processing complexity of artificial intelligence systems.
The boundary is already being tested by our ideas of non-human personhood. If we apply personhood to non-humans, wouldn't AIs be included? And if we don't apply personhood to non-humans, then why would we expect AIs to treat us well, particularly as their capabilities approach or exceed ours?
🍈 This week, intellectual rabble-rouser Alex Proud declared himself vegetarian and argued that we need to stop eating meat immediately. Good read. One part of the argument is that our better understanding of consciousness blurs the special treatment we confer on humans over, say, cows. It is buttressed by advances in AI and the rise of a new class of non-human persons whose computational complexity may exceed ours within our lifetimes. Would we want those AIs to eat us or not? (Or just hope for the best?)