A Microsoft chatbot went rogue on Twitter and started spewing Nazi epithets. It is an extremely helpful case study of some of the key issues around applying machine learning and AI to everyday tasks. What is interesting is not that Tay, the bot, was taught to say rude things; it is what this episode tells us about designing AI systems that operate in the real world, and how their learning can be hacked.
AIs are going to need to learn and interact in something akin to the real world. Equally, if we allow AI systems unexpurgated access to the 'real world' while they are learning, there could be ramifications.
The first: we don’t have common ethical standards within societies or cultures, nor across them. Philippa Foot’s trolley problem never produces unanimity, even within a single culture. (Look up the trolley problem if it’s unfamiliar.)
The second: we already have similarly poorly designed optimisation systems in the world, and they affect hundreds of millions of people. They are just less transparent, and the people operating them less responsive, than Microsoft has been with Tay.
Examples include credit-scoring algorithms, or indeed complaints-handling protocols at large firms. Those algorithms are often simple regressions built on small datasets. Complaints protocols, in turn, are often very simple, inflexible decision trees applied by a human. Sure, Tay is more advanced, more nuanced and more embarrassing, but it does little real harm compared to a poorly designed decision tree that might determine your access to a financial product or insurance.
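The kind of rigid, rules-based protocol described above can be sketched in a few lines. Everything here is invented for illustration (the thresholds, field names and outcomes are hypothetical, not any real lender's rules), but it shows how a crude decision tree can hard-code unfair cut-offs:

```python
# A hypothetical, deliberately crude loan-screening decision tree.
# All thresholds and field names are invented for illustration only.

def screen_application(income, credit_score, years_at_address):
    """Return 'approve', 'refer' or 'decline' using rigid, ordered rules."""
    if credit_score < 550:
        return "decline"   # hard cut-off: no human judgement applied
    if income < 20_000:
        return "decline"   # ignores context (assets, joint income, etc.)
    if years_at_address < 1:
        return "refer"     # recent movers penalised regardless of reason
    return "approve"

# An otherwise strong applicant just under one threshold is refused outright.
print(screen_application(income=19_500, credit_score=700, years_at_address=5))
```

No learning, no nuance: whoever set those thresholds effectively decided the outcome for every future applicant.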
How many algorithmic approaches make it out into the real world without adequate testing or understanding of their ramifications, or, worse, their unintended consequences?
The third: Microsoft should have known better. In the experimental crucible that is the Internet, users will push and pull an experiment to its limits. Experience dictates that we, the collective, should have known this. ☠️ Vice asks some bot experts how to design a bot that doesn’t go 'haywire’.
I’m extremely surprised Microsoft didn’t do what DeepMind did with AlphaGo. DeepMind seemed to keep AlphaGo under wraps until they were reasonably confident it would perform superbly in the field. Microsoft appears to have taken the opposite tack, which strikes me as weird. You can read their statement here.
The fourth: it’s not really clear how much harm was done by Tay. It’s embarrassing for Microsoft at a corporate level, but it’s really not clear that Tay influenced anyone to become more xenophobic or racist. Still, we have all had a great lesson in some of the issues around training algorithmic systems. (Remember Google and the gorilla?)
The fifth: Tay appears to support the notion of the 'unreasonable effectiveness of data’: that large amounts of training data determine the effectiveness of an AI-based system more than the smarts of its algorithms do.
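That idea can be seen even in a toy sketch, which assumes nothing about Tay itself: a deliberately trivial learning rule (estimating a hidden decision threshold as the midpoint between the two classes it has seen) improves purely because it sees more data.

```python
import random

# Toy illustration, not a benchmark: the same trivial learning rule
# gets better only because it sees more labelled samples.
# Task: recover a hidden threshold t = 0.5 on [0, 1] from labelled points.

def fit_threshold(n, rng):
    """Estimate the decision boundary from n random labelled points."""
    xs = sorted(rng.random() for _ in range(n))
    below = [x for x in xs if x < 0.5]   # points labelled 0
    above = [x for x in xs if x >= 0.5]  # points labelled 1
    if not below or not above:
        return 0.5                       # degenerate sample; fall back to prior
    return (below[-1] + above[0]) / 2    # midpoint between the two classes

def avg_error(n, trials=200, seed=0):
    """Average absolute error of the estimate over repeated draws."""
    rng = random.Random(seed)
    return sum(abs(fit_threshold(n, rng) - 0.5) for _ in range(trials)) / trials

print(avg_error(10), avg_error(1000))
```

With the algorithm held fixed, the estimate with 1,000 samples is far closer to the true threshold than the one with 10, which is the 'more data beats cleverer algorithms' point in miniature.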