This week I want to share with you two interesting articles from MIT Technology Review. It is a great source to stay up to date with science news from all over the world.
The first article is about less-than-one-shot learning. Yes, that is a thing. The basic problem is this: in a machine-learning classification system, as the number of classes grows, you often need an exponential increase in the number of training examples to learn to differentiate them effectively. Humans are different: we can learn from very few examples, even a single one. But we can go even further: if I tell you a unicorn is something like a horse but with a rhino’s horn, I don’t even need to show you a unicorn for you to be able to classify one if you were ever to find it in the wild (please, do send me pictures). This article is about precisely that: how to train a machine-learning model with fewer examples than the total number of classes (in the extreme, just two of them).
It’s still very much a theoretical framework, but the potential applications are mind-blowing.
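To get an intuition for how learning more classes than you have examples can even be possible, here is a toy sketch of one enabling idea: soft labels, where each training point carries a probability distribution over classes instead of a single hard label. To be clear, this is my own minimal illustration of that general idea, not the exact method from the article.

```python
# Toy illustration: three classes learned from only two examples,
# assuming a soft-label approach (my own sketch, not the article's method).
import numpy as np

# Two training points on a 1-D feature axis, but THREE classes.
X_train = np.array([[0.0], [1.0]])
soft_labels = np.array([
    [0.6, 0.4, 0.0],  # mostly class 0, with some weight on class 1
    [0.0, 0.4, 0.6],  # mostly class 2, with some weight on class 1
])

def predict(x, X_train, soft_labels):
    """Blend the label distributions, weighting each training
    point by how close it is to x, then pick the top class."""
    dists = np.linalg.norm(X_train - x, axis=1)
    weights = 1.0 / (dists + 1e-9)       # closer points count more
    class_probs = weights @ soft_labels  # weighted mix of distributions
    return int(np.argmax(class_probs))

for x in [0.1, 0.5, 0.9]:
    print(x, "->", predict(np.array([x]), X_train, soft_labels))
# 0.1 -> 0, 0.5 -> 1, 0.9 -> 2: class 1 claims the middle region
# even though it never got a training example of its own.
```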
The second article is about model explainability and how, when done without enough care, it can actually make things worse. The crux of the problem is that users can develop overconfidence in an AI system when we make it explainable, even when the explanations are not fully understood, and even when the prediction itself is wrong.
The article suggests some ideas to tackle this issue, that is, to make models explainable in ways that actually help non-expert users quickly detect when the model is making wrong predictions, such as providing explanations in natural language.
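To make that last suggestion concrete, here is a tiny, hypothetical sketch of what wrapping a model’s feature attributions in a plain-English sentence could look like; the feature names and attribution values are entirely made up for illustration.

```python
# Hypothetical sketch: turn raw feature attributions (which non-experts
# rarely parse correctly) into a short natural-language explanation.
# Feature names and values below are made up for illustration.
attributions = {
    "income": 0.55,
    "credit_history_length": 0.20,
    "recent_missed_payments": -0.45,
}

def explain(prediction: str, attributions: dict, top_k: int = 2) -> str:
    """Build one sentence from the strongest attributions."""
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    parts = [
        f"{name.replace('_', ' ')} {'supported' if value > 0 else 'counted against'} it"
        for name, value in ranked[:top_k]
    ]
    return f"The model predicted '{prediction}' mainly because " + " and ".join(parts) + "."

print(explain("loan approved", attributions))
# -> The model predicted 'loan approved' mainly because income supported
#    it and recent missed payments counted against it.
```

An explanation like this won’t fix overconfidence by itself, but it gives a non-expert something they can sanity-check against their own knowledge.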