
Steve's ITK: Just getting started


Steve's ITK

April 30 · Issue #23
Steve's In The Know: Everything I published recently, commentary you won't find elsewhere, write-ups of events I attended or spoke at, and industry rumours.

DARPA's explanation of explainable AI
Opening thought: AI's blackbox problem
Ever since I began writing about technology for a living (and even before that), I’ve had non-tech people ask me if I think the Internet, and by extension almost any piece of new technology, is a good or bad thing.
My reply over the years has remained consistent, albeit slightly flippant: 'I'm agnostic,' I'd say. 'The truth is, this thing is only just getting started and no one knows how it's gonna play out.'
The same can be said of Artificial Intelligence (AI). Not in the 'any startup with an algorithm' sense of the word, but proper deep learning-based AI where, to oversimplify, computers use neural networks to teach themselves.
In previous ITKs, I've written about what I've dubbed the 'AI honeymoon', in which AI augments existing jobs before replacing them, and about why I'm particularly bullish on the application of AI to healthcare.
And just a few weeks ago I wrote up Y Combinator President Sam Altman's visit to London, where he talked about the need for mass retraining and a basic wage to help offset job displacement by AI before humans and machines eventually become one.
However, a more pressing issue, and one that has left me scratching my head, is AI and accountability. Or, more accurately, the lack thereof.
That's because when a computer that has taught itself makes a decision (as valid or useful as that decision may be), there is no easy way to know how it came about. Unlike an algorithm designed by a human, certain forms of deep learning cannot be easily reverse engineered or interrogated. This is already giving rise to what has been called AI's black box problem. From MIT Technology Review:
There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.
This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable?
The article goes on to explain why deep learning powered by neural networks, a specific branch of AI, is winning the day despite its lack of explainability, and how this is giving rise to research into building some kind of feedback loop into AI so that, at least on a rudimentary level, you can ask a machine to explain why it came to a particular decision. See, for example, DARPA's XAI (explainable AI) program.
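To make the idea a little more concrete, here's a minimal sketch of the crudest kind of post-hoc explanation: treat the model purely as a black box you can query, perturb one input at a time, and see which features actually sway its decisions. This is my own toy illustration, not DARPA's method or any production XAI technique — the `black_box` model, the feature names, and the perturbation scale are all invented for the example.

```python
import random

# A stand-in "black box": we can query it, but pretend we can't inspect
# its internals (as with a trained neural network).
def black_box(age, income, debt):
    score = 0.8 * income - 1.5 * debt + 0.01 * age
    return 1 if score > 20 else 0

# Probe the model: nudge one feature and count how often the decision flips.
def sensitivity(model, samples, feature_index, scale=0.5):
    flips = 0
    for x in samples:
        perturbed = list(x)
        perturbed[feature_index] *= (1 + scale)
        if model(*x) != model(*perturbed):
            flips += 1
    return flips / len(samples)

random.seed(0)
samples = [(random.uniform(20, 70),   # age
            random.uniform(10, 60),   # income
            random.uniform(0, 40))    # debt
           for _ in range(1000)]

for i, name in enumerate(["age", "income", "debt"]):
    print(f"{name}: decision flips in {sensitivity(black_box, samples, i):.1%} of probes")
```

Run it and the probe correctly reports that income and debt drive the decisions while age barely matters — an 'explanation' recovered without ever looking inside the model. Real XAI research aims far beyond this, but the feedback-loop principle is the same.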
However, another article in MIT Technology Review this week provides a fun counterpoint. Titled 'Deep Learning Is a Black Box, but Health Care Won't Mind', it argues that the healthcare industry, including regulators, won't care whether AI is a black box, as long as the results speak for themselves.
“In the case of black-box medicine, doctors can’t know what is going on because nobody does; it’s inherently opaque,” says Nicholson Price, a legal scholar from the University of Michigan who focuses on health law.
Yet Price says that may not pose a serious obstacle in health care. He likens deep learning to drugs whose benefits come about by unknown means. Lithium is one example. Its exact biochemical mechanism in affecting mood has yet to be elucidated, but the drug is still approved for treatment of bipolar disorder. The mechanism behind aspirin, the most widely used medicine of all time, wasn’t understood for 70 years.
Would you trust an AI more or less than an approved drug? Just imagine reading the leaflet with all those disclaimers: 'This AI is statistically proven to work, but we have no idea how.' 😕
With that said, in an email exchange today, one founder of a UK AI startup summed up the current state of play: 'Frankly, it's just that we are making so many incredible breakthroughs with machine learning that slowing down to work on the XAI problem simply doesn't pay off.'
Or, to give you the tl;dr version: the problem with black boxes, he says, is that you can make money without solving them.
For now, at least.
Things I wrote
Huddly raises $10M to “reinvent the camera” with a computer-vision platform for video meetings
Telegraph Media Group acquires UK exam preparation app Gojimo
AIDoc Medical raises $7M to bring AI to medical imaging analysis
Banking app Pockit picks up further £2.9M as it readies new remittance service
HR and employee benefits platform Hibob raises $17.5M led by U.S.-based Battery Ventures
Babylon Health raises further $60M to continue building out AI doctor app
Flux, a fintech startup founded by ex-Revolut employees, wants to make paper receipts obsolete
Online grocery platform Farmdrop raises £7M Series A led by Atomico
Closing thought: It was the O'Hear wot won it
I’m joking, of course, and definitely not taking credit for this one. However, following my TechCrunch article calling out the lack of disability diversity reporting by the major tech companies, Slack has made good on its promise to include persons with disabilities in its most recent diversity report. My colleague Megan Rose Dickey has the scoop:
Tech companies rarely, if ever, include information about how many people with disabilities they employ. Today, Slack is changing things up. According to the company’s latest diversity report, 1.7 percent of its employees identify themselves as having a disability. 
As TechCrunch’s Steve O’Hear noted, tech companies are generally hesitant to discuss disabilities. Slack, however, was rather open in its dialogue with O’Hear at the time about including that information in future diversity reports, as long as the company followed legal processes and employees were willing to share it. Good on Slack for following through.
I couldn’t have said it better myself, although, let’s be clear: 1.7 per cent is shockingly low. No wonder Silicon Valley doesn’t want to talk about it! 😎
Get in touch
Want to continue the conversation? Just hit reply to this email – I answer every single ITK email I receive.
Please forward this newsletter to friends and colleagues who might also enjoy it. More subscribers and better open rates make me happy.
Till next time,