“How will machines know what we value if we don’t know ourselves?” asks John Havens. (Recommended.)
It is a difficult and important point. Ultimately, the AI systems we are designing are optimised towards some goal. Today’s basic machine learning implementations might optimise recommendations on an e-commerce site, but only to maximise the retailer’s profit, not the temperance of the browsing human. Or our emissions control systems might be optimised to maximise the automaker’s profit rather than social welfare.
More complex systems will be making more open-ended decisions. Let’s assume these systems are not maliciously or idiotically designed. Idiotic design might be, for example, the autonomous bus that is optimised for punctuality rather than for actually delivering passengers, and so determines it is quicker not to stop to pick them up or drop them off. (Weirder computer systems have been designed.)
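As a toy sketch of that bus example (purely illustrative; the policies, numbers and weights below are made up, not drawn from any real system), the behaviour an optimiser picks depends entirely on what the objective function bothers to count:

```python
# Toy illustration of a misspecified objective. Every policy, number and
# weight here is hypothetical, chosen only to make the point visible.

# Each candidate policy: how late the bus arrives, and how many passengers it serves.
policies = {
    "skip all stops":  {"minutes_late": 0, "passengers_served": 0},
    "skip some stops": {"minutes_late": 2, "passengers_served": 40},
    "serve all stops": {"minutes_late": 6, "passengers_served": 90},
}

def punctuality_only(p):
    # The 'idiotic' objective: punish lateness, ignore passengers entirely.
    return -p["minutes_late"]

def punctuality_and_service(p):
    # A less broken objective: passengers served counts for something too.
    return 1.0 * p["passengers_served"] - 3.0 * p["minutes_late"]

for name, objective in [("punctuality only", punctuality_only),
                        ("punctuality + service", punctuality_and_service)]:
    best = max(policies, key=lambda k: objective(policies[k]))
    print(f"Optimising for {name!r} -> choose: {best}")

# Prints (roughly):
#   punctuality only      -> skip all stops
#   punctuality + service -> serve all stops
```

The optimiser isn’t malfunctioning in either case; it faithfully maximises whatever we told it to value, which is precisely the problem.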
So even if
- we have systems that are well thought through, informed by exceptional observation of human behaviour, and
- we manage to encode the nuance of human and societal judgement into some objective function (ha, ha, not easy),
we’ll run into the problem that we (as humans) don’t agree on what the ‘correct outcome’ is in a large number of situations. Those situations are the domain of ethics and ethical thinking.
In some respects the ethical issues surrounding AI are less about killer robots (although we are deploying thousands of killer-capable drones 24x7 to fight wars remotely) than about how we enumerate our ethics in a way that can be translated to the AI systems that may form the future interface to our ordinary interactions with the services we need to use daily.
Exploring Polanyi’s Paradox, the idea that ‘we know more than we can tell’, and how that may or may not be a blocker in designing AI systems. (I don’t really agree with this author.)