Ever since I began writing about technology for a living (and even before that), I’ve had non-tech people ask me if I think the Internet, and by extension almost any piece of new technology, is a good or bad thing.
My reply over the years has remained consistent, albeit slightly flippant: ‘I’m agnostic,’ I’d say. ‘The truth is, this thing is only just getting started and no one knows how it’s gonna play out.’
The same can be said of Artificial Intelligence (AI). Not in the ‘any startup with an algorithm’ sense of the word, but proper deep learning-based AI where, to oversimplify, computers use neural networks to teach themselves.
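To make that concrete, here’s a minimal sketch (plain NumPy, my own illustrative toy, not anyone’s production system) of a tiny network ‘teaching itself’ the XOR function. Nobody writes the rule; we just nudge random weights to reduce error, and what the machine ends up ‘knowing’ is a pile of numbers rather than anything a human can read as reasoning:

```python
import numpy as np

# XOR: output 1 when exactly one input is 1. No single straight-line
# rule separates these points, which is why a hidden layer is needed.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # gradient of the error
    d_h = (d_out @ W2.T) * h * (1 - h)       # backpropagated to hidden layer
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # close to [0, 1, 1, 0] -- but the 'why' lives in W1 and W2
```

Even in this toy, the trained weights don’t read as anything a person would recognise as an explanation. Now scale that up to millions of weights.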
In previous ITKs, I’ve written about what I’ve dubbed an AI honeymoon, where AI is currently augmenting existing jobs before replacing them, and how I’m particularly bullish on the application of AI to healthcare.
However, a more pressing issue, and one that has left me scratching my head, concerns AI and accountability. Or, more accurately, the lack thereof.
That’s because when a computer that has taught itself makes a decision (however valid or useful that decision may be), there is no easy way to know how it came about. Unlike an algorithm designed by a human, certain forms of deep learning cannot be easily reverse-engineered or interrogated. This is already giving rise to what has been called AI’s black box problem. From MIT Technology Review:
There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.
This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable?
The article goes on to explain why deep learning powered by neural networks, a specific branch of AI, is winning the day despite the lack of explainability, and how this is giving rise to research into building some kind of feedback loop into AI so that, at least at a rudimentary level, you can ask a machine to explain why it came to a particular decision. See, for example, the DARPA program for XAI (Explainable AI).
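For a flavour of what even a rudimentary ‘why’ probe looks like, here’s a sketch of permutation importance, one simple, model-agnostic technique (my illustration, assuming scikit-learn and NumPy are to hand; this is not DARPA’s XAI approach, just a taste of the genre). You scramble one input at a time and watch how much the black box’s accuracy drops, which hints at what it is leaning on, without ever opening the box:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Toy data: 1,000 samples, 5 features -- 3 carry signal, 2 are pure noise.
X, y = make_classification(n_samples=1000, n_features=5,
                           n_informative=3, n_redundant=0, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                      random_state=0).fit(X, y)
baseline = model.score(X, y)

rng = np.random.default_rng(0)
for i in range(X.shape[1]):
    X_perm = X.copy()
    rng.shuffle(X_perm[:, i])  # destroy feature i's signal, keep the rest
    drop = baseline - model.score(X_perm, y)
    print(f"feature {i}: accuracy drop {drop:.3f}")
```

The noise features should show a drop near zero and the informative ones a large one. Note what this does and doesn’t give you: a ranking of inputs, not a chain of reasoning, which is roughly the rudimentary level the research above is trying to move beyond.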
However, another article in MIT Technology Review this week provides a fun counterpoint. Titled ‘Deep Learning Is a Black Box, but Health Care Won’t Mind’, it argues that the healthcare industry, including regulators, won’t care if AI is a black box, as long as the results speak for themselves.
“In the case of black-box medicine, doctors can’t know what is going on because nobody does; it’s inherently opaque,” says Nicholson Price, a legal scholar from the University of Michigan who focuses on health law.
Yet Price says that may not pose a serious obstacle in health care. He likens deep learning to drugs whose benefits come about by unknown means. Lithium is one example. Its exact biochemical mechanism in affecting mood has yet to be elucidated, but the drug is still approved for treatment of bipolar disorder. The mechanism behind aspirin, the most widely used medicine of all time, wasn’t understood for 70 years.
Would you trust an AI more or less than an approved drug? Just imagine reading the leaflet with all those disclaimers: ‘This AI is statistically proven to work, but we have no idea how.’ 😕
With that said, in an email exchange today, one founder of a UK AI startup summed up the current state of play: ‘Frankly, it’s just that we are making so many incredible breakthroughs with machine learning that slowing down to work on the XAI problem simply doesn’t pay off.’
Or, to give you the TL;DR version, he says the problem with black boxes is that you can make money without solving them.
For now, at least.