A Microsoft chatbot went rogue on Twitter and started spewing Nazi epithets. It is an extremely helpful case study in outlining some of the key issues around the application of machine learning and AI to everyday tasks. What isn't interesting is that Tay, the bot, was taught to say rude things. What is interesting is what this tells us about designing AI systems that operate in the real world and how their learning can be hacked.
AIs are going to need to learn and interact somewhere akin to the real world. Equally, if we allow AI systems unexpurgated access to the 'real world' while they are learning, there could be ramifications.
The first: we don't have common ethical standards within societies or cultures, nor across them. Philippa Foot's trolley experiment never results in unanimity within cultures. (Click here if you don't know the Trolley problem.)
The second: we already have similarly poorly designed optimisation systems in the world. And they affect hundreds of millions of people, only they are less transparent and the people operating them are less responsive than Microsoft has been with Tay.
Examples include credit-scoring algorithms or indeed complaints-handling protocols at large firms. Those algorithms are often simple regressions built on small datasets; the complaints protocols are often very simple, inflexible decision trees applied by a human. Sure, Tay is more advanced, nuanced and embarrassing, but no real harm is done compared to a poorly designed decision tree which might determine your access to a financial product or insurance.
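To make the point concrete, here is a minimal, entirely hypothetical sketch of the kind of model in question: a plain logistic regression fitted on a handful of made-up records, whose single probability threshold could nonetheless decide someone's access to credit. The feature names and data below are invented for illustration, not drawn from any real lender.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Tiny, made-up training set: [income_in_thousands, years_at_address, prior_defaults]
X_train = np.array([
    [25, 1, 2],
    [40, 3, 1],
    [55, 5, 0],
    [80, 10, 0],
    [30, 2, 3],
    [65, 7, 0],
])
y_train = np.array([0, 0, 1, 1, 0, 1])  # 1 = repaid in the past, 0 = defaulted

# A plain logistic regression: no fairness checks, no validation, no monitoring.
model = LogisticRegression()
model.fit(X_train, y_train)

# A single threshold on the predicted probability decides the applicant's fate.
applicant = np.array([[45, 2, 1]])
approve = model.predict_proba(applicant)[0, 1] > 0.5
print("approve loan:", approve)
```

Six rows of data and one cut-off: that is the level of sophistication that, in practice, often stands between a person and a loan.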
How many algorithmic approaches make it out into the real world without adequate testing or understanding of their ramifications or, worse, their unintended consequences?
The third: Microsoft should have known better. In the experimental crucible that is the Internet, users will push and pull an experiment to its limits. Experience dictated that we, the collective, could have known this. ⚠️ Vice asks some bot experts how to design a bot that doesn't go 'haywire'.
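The Vice piece doesn't prescribe a single fix, but one common-sense mitigation the Tay episode suggests is not letting a bot learn from raw user input unfiltered. Here is a crude, hypothetical sketch of such a gate; the blocklist and function names are mine, not Microsoft's or Vice's.

```python
# Crude, hypothetical gate deciding what a learning bot may train on.
BLOCKLIST = {"slur_1", "slur_2", "conspiracy_keyword"}  # placeholder terms

def safe_to_learn_from(message: str) -> bool:
    """Reject messages containing blocklisted terms before they reach the training set."""
    tokens = set(message.lower().split())
    return tokens.isdisjoint(BLOCKLIST)

training_buffer = []

def ingest(message: str) -> None:
    """Only messages that pass the gate are added to the bot's training data."""
    if safe_to_learn_from(message):
        training_buffer.append(message)
    # Otherwise the message is dropped, or routed to human review.
```

A keyword blocklist is obviously brittle (users will route around it), which is rather the point: designing the gate is the hard part, not an afterthought.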
I'm extremely surprised Microsoft didn't do what DeepMind did with AlphaGo. In that case, DeepMind seemed to keep AlphaGo under wraps until they were reasonably clear that AlphaGo was going to do an amazing job in the field. Microsoft appears to have taken the opposite tack. Strikes me as weird.
You can read their statement here.
The fourth: It's not really clear how much harm was done by Tay. It's embarrassing for Microsoft on a corporate level, but it's not clear that Tay influenced anyone to become more xenophobic or racist. But we have all had a great lesson in some of the issues around training algorithmic systems. (Remember Google and the gorillas?)
The fifth: Tay appears to support the notion of the 'unreasonable effectiveness of data': that large amounts of training data determine the effectiveness of an AI-based system more than the smarts of the algorithms.
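A toy illustration of why the data dominates: a first-order Markov chain 'chatbot' has almost no smarts at all, yet whatever corpus you feed it is exactly what comes back out. Swap the benign corpus for hostile tweets and you have, in miniature, the Tay failure mode. This is my own illustrative sketch, not how Tay actually worked.

```python
import random
from collections import defaultdict

# Toy first-order Markov "chatbot": the algorithm is trivially simple, so its
# behaviour is almost entirely a function of the text it is trained on.
def train(corpus_lines):
    transitions = defaultdict(list)
    for line in corpus_lines:
        words = line.split()
        for current_word, next_word in zip(words, words[1:]):
            transitions[current_word].append(next_word)
    return transitions

def generate(transitions, seed_word, length=10):
    word, output = seed_word, [seed_word]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# Feed it a benign corpus and it parrots benign text; feed the very same code
# hostile tweets and it parrots those instead.
benign = ["the weather is lovely today", "the weather is mild and lovely"]
bot = train(benign)
print(generate(bot, "the"))
```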