Mostly Harmless AI
By Alejandro Piad Morffis • Issue #4
🖖 Welcome to the fourth issue of Mostly Harmless AI! Every Sunday, I'll send you a curated list of interesting bits about AI all around the spectrum, from cool new research and tools to news and discussions, and everything in between.

🗞 What's new
This week I want to share with you two interesting articles from MIT Technology Review, a great source for staying up to date with science news from around the world.
The first article is about less-than-one-shot learning. Yes, that is a thing. The basic problem is this: in a machine-learning classification system, as the number of classes grows, you often need exponentially more training examples to learn to differentiate them effectively. Humans are different: we can learn from very few examples, even just one. But we can go even further: if I tell you a unicorn is something like a horse but with a rhino's horn, I don't even need to show you a unicorn for you to be able to classify one if you were ever to find it in the wild (please, do send me pictures). This article is about precisely that: how to train a machine-learning model with fewer examples than the total number of classes (even with just two of them). It's still very much a theoretical framework, but the potential applications are mind-blowing.
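To give a flavor of how that could even work, here is my own toy illustration (not the paper's actual method, just the general idea of soft labels): give each training point a probability distribution over classes instead of a single hard label, and let a distance-weighted nearest-neighbor rule blend those distributions. With the made-up numbers below, two points manage to carve out three class regions.

```python
import numpy as np

# Two hypothetical 1-D prototypes that jointly encode THREE classes
# via soft labels (each row is a distribution over classes 0, 1, 2).
# Positions and label values are invented purely for illustration.
prototypes = np.array([[0.0], [1.0]])
soft_labels = np.array([
    [0.6, 0.4, 0.0],   # the prototype at x=0 leans toward class 0
    [0.0, 0.4, 0.6],   # the prototype at x=1 leans toward class 2
])

def predict(x, eps=1e-9):
    """Distance-weighted soft-label nearest-neighbor prediction."""
    d = np.linalg.norm(prototypes - x, axis=1)  # distance to each prototype
    w = 1.0 / (d + eps)                         # closer prototypes weigh more
    scores = w @ soft_labels                    # blend the soft labels
    return int(np.argmax(scores))

for x in ([-0.2], [0.5], [1.2]):
    print(x[0], "->", predict(np.array(x)))
# Prints classes 0, 1 and 2: the "middle" class is never the top label
# of any single point, yet it wins the whole region between them.
```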
The second article is about model explainability and how, when done without enough care, it can actually make things worse. The crux of the problem is that users can develop overconfidence in an AI system once we make it explainable, even when they don't fully understand the explanations, and even when the prediction itself is wrong. The article suggests some ways to tackle this issue, such as providing explanations in natural language, so that explainability actually helps non-expert users quickly detect when the model is making wrong predictions.
📚 For learners
The suggestion for this week is quite comprehensive. Here's a complete computer science curriculum fully based on (mostly) free online content that you can start today. It's a GitHub repository that links to all the resources needed for two years' worth of computer science education, mostly built on freely available MOOCs (although some of them charge for access to assignments and/or official certifications). I've checked the curriculum myself, and I do agree it covers most of the content you could expect to get in a college-level degree in the same amount of time. And it's free!
🔨 Tools of the trade
If you've ever done some machine learning and you come from a development background, you've probably run into the huge problem of lacking tools to debug and understand what you're doing. Weights & Biases is an online platform built just for this. You add a one-liner to your model training code, and you start logging a bunch of information that the platform then organizes and presents in a beautiful dashboard with a ton of visualizations, helping you understand everything from your training performance to your features to your hyperparameters.
Weights & Biases – Developer tools for ML
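In case you want to see what that setup looks like in practice, here is a minimal sketch (assuming you already have a free wandb account and are logged in; the project name and the metrics are placeholders I made up):

```python
import math
import wandb

# Hypothetical project name and hyperparameters, for illustration only.
wandb.init(project="mostly-harmless-demo", config={"lr": 1e-3, "epochs": 10})

for epoch in range(wandb.config.epochs):
    # Stand-in numbers; in a real script these come from your training loop.
    train_loss = math.exp(-0.3 * epoch)
    val_acc = 1.0 - train_loss / 2
    wandb.log({"epoch": epoch, "train_loss": train_loss, "val_acc": val_acc})

wandb.finish()  # close the run so it shows up as finished in the dashboard
```

Everything you pass to wandb.log ends up as a chart in the web dashboard, alongside the config values, so comparing runs with different hyperparameters comes for free.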
🍿 Recommendations
This week I've been reading Joscha Bach's (@plinz) book on artificial consciousness, Principles of Synthetic Intelligence. It's a heavy book. Joscha dives right into the problem of how to define consciousness and how to emulate it in a computational architecture. It gives you a quite deep review of the most relevant ideas so far and then moves on to his own proposal for a cognitive architecture. I'm halfway through the book and loving it.
Principles of Synthetic Intelligence: Psi: An Architecture of Motivated Cognition by Joscha Bach
🎤 Word of mouth
Yesterday I hosted yet another AMA and you all shot some amazing questions at me! We discussed everything from practical advice for replicating experiments, to how neural translation works, to possible solutions to the Fermi paradox, and beyond!
Alejandro Piad Morffis
Hey folks 🖖!

🎙️Today is Saturday again, and that means... we're doing yet another AMA!

Ask me anything about computer science, artificial intelligence, or, whatever... In the highly likely case I don't know the answer, I'll try at least to point you to someone who does.

Go 👇
Another very interesting discussion I was involved in was about Twitter's new ideas for premium content. Here is one tweet that led to some interesting thoughts:
Alejandro Piad Morffis
I think it's great that Twitter is trying new formulas to help creators grow. Personally, I don't see myself paying for a "premium Twitter experience" in the near future, and I don't see myself charging for anything mine I consider worthy of sharing. But that's a personal choice.
And here is a follow-up discussion started by Santiago (@svpino) that led to even more ideas and some suggestions on how to capitalize on that model without selling our souls, so to speak.
Santiago
Imagine your favorite creator in Twitter starts offering the following:

1. A weekly newsletter
2. Deep dives into your favorite topics
3. A look behind the scenes
4. Live discussion invitations
5. Unfiltered exclusive content

$4.99/mo

Would you subscribe?
👥 Community
This week I want you to follow three awesome folks whose thoughts I've been enjoying for some time.
My first suggestion is Elizabeth (@ElizabethDGroot), a Pythonista and data scientist, who often tweets about resources (from her own blog and other places around the Internet) for avid learners.
My second suggestion is Jordan (@DivineOmega), a full-stack developer mostly focused on PHP, who's also interested in machine learning and in cool ways of mixing these two apparently separate worlds.
And my third suggestion is Karl (@karlhigley), an expert in recommendation systems (he has worked at some of the big names out there), who’s always recommending (see what I did there?) cool things to read and learn about RecSys and beyond.
☕ Homebrew
As I told you last week, I'm working nonstop on ideas for my podcast. I found out the hard way that it's going to take longer than I expected to get it up and running. I should have seen that coming, but I'm not stopping anytime soon. In the meantime, I've made my first draft available, and I'm eager to get your comments and reviews on it!
On another topic, I started working a couple of weeks ago on a very weird idea: an introductory course on Python programming for data scientists that is 100% interactive. It's based on Streamlit, and you can already see the finished part of the first lesson online.
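If you're wondering what "100% interactive" means in Streamlit terms, here is a toy sketch of a lesson page (my own invention, not the actual course code):

```python
import streamlit as st

# Toy example of an interactive lesson; run it with: streamlit run lesson.py
st.title("Lesson 1: Variables")
st.markdown("A variable is just a name bound to a value. Try it yourself:")

name = st.text_input("Your name", value="world")
st.code(f'greeting = "Hello, {name}!"', language="python")
st.write("Running that line would give you:", f"Hello, {name}!")
```

The nice part is that every widget interaction re-runs the script, so the lesson text, the code, and the output all update live as the reader plays with the inputs.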
👋 That's it for now. Please let me know what you think of this issue, what you'd like to see more or less of, and any other feedback you want to share. If you liked this newsletter, consider subscribing (in case you're not already) and forwarding it to those you love. It's 💯 free!
Alejandro Piad Morffis

A weekly newsletter on all things AI, including recent news, hot resources, and interesting conversations happening all around the Internet.

If you don't want these updates anymore, please unsubscribe here.
If you were forwarded this newsletter and you like it, you can subscribe here.