Model Explainability with David Guedalia (CTO @ Blue dot)
By MirData.Report • Issue #3
It’s hard to overstate the importance of understanding how your ML model thinks and acts, and of being able to explain its results.
These topics may sound a little too philosophical, but when it comes to customer support and company revenue, you’ll definitely want to have those answers.
In this issue, we focus on explainability as a tool for identifying semantic confusion between data samples, and thus for making educated decisions about pushing model accuracy further.
Our expert today is David Guedalia, currently Blue dot’s CTO, with more than 20 years of experience in ML, NLP, and Big Data, focused on unsupervised learning. So when someone like David says “…today our models are so complex that starting at the output and looking at the big picture helps us understand those models and prevent overfitting…”, you want to dig deeper and understand how those tools can impact your product and business results.
Without further ado, we hope you get as many practical insights from this interview as we did.

Two reasons why Data Scientists should care about ML model explainability
First, people in customer support need to know why our model fails. Their relationships with customers depend on it. Just quoting probability percentages won’t help support professionals.
Second, explainability can help us improve model performance. To push accuracy up, we need to understand how the machine thinks. To do that, we look at a semantic map, which gives us a better idea of why the model did what it did.
Video: Two reasons why Data Scientists should care about ML model explainability | David Guedalia (Blue dot)
Understanding ML model inaccuracies with the help of a semantic map
One way to identify the inaccuracies an ML model produces is to capture all the data in a semantic tree and look at the hierarchy of information it gives us.
The goal of mapping is to highlight semantic confusion between data samples. During the talk, David provided an example of such confusion and showed how data visualization helps data scientists increase model accuracy. A rough sketch of the idea appears below the video.
Video: Semantic mapping as a way to understand inaccuracies of the ML model | David Guedalia (Blue dot)
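To make the idea concrete, here is a minimal, hypothetical sketch (not Blue dot’s actual pipeline): it stands in TF-IDF vectors for whatever semantic embedding the model really uses, builds a hierarchical “semantic tree” with agglomerative clustering, and flags clusters whose members carry different labels as candidates for semantic confusion. All names, labels, and the choice of three clusters are illustrative assumptions.

```python
# Hypothetical sketch of finding semantic confusion between data samples.
from collections import Counter

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

# Toy labeled samples; in practice these would be your training/eval data.
samples = [
    ("refund my last invoice", "billing"),
    ("charge appeared twice on my card", "billing"),
    ("the app crashes when I open settings", "bug"),
    ("settings screen freezes on startup", "bug"),
    ("how do I change my billing address", "account"),
    ("update the address on my account", "account"),
]
texts, labels = zip(*samples)

# TF-IDF stands in for whatever semantic embedding the model actually uses.
vectors = TfidfVectorizer().fit_transform(texts).toarray()

# Cut the hierarchy into a handful of clusters (one level of the semantic tree).
clustering = AgglomerativeClustering(n_clusters=3).fit(vectors)

# Any cluster containing more than one label is a candidate for semantic
# confusion: samples the model is likely to mix up.
for cluster_id in set(clustering.labels_):
    cluster_labels = [lab for lab, c in zip(labels, clustering.labels_) if c == cluster_id]
    counts = Counter(cluster_labels)
    if len(counts) > 1:
        print(f"cluster {cluster_id}: possible confusion between {dict(counts)}")
```

Inspecting the mixed-label clusters (for example, “billing” questions about addresses landing next to “account” questions) is the kind of educated signal that tells you where to focus labeling or model work before simply pushing for higher accuracy.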
Need more insights from people like David?
Subscribe to our LinkedIn page. We publish new Data Science Talks every week!