
Mostly Harmless AI
By Alejandro Piad Morffis • Issue #2 • View online
🖖 Welcome to the second issue of Mostly Harmless AI! Every Sunday, I’ll send you a curated list of interesting bits about AI all around the spectrum, from cool new research and tools to news and discussions, and everything in between.

🗞 What's new
Recent news about or related to the technological, scientific, and social aspects of Artificial Intelligence.
AI is slowly becoming an infrastructure of sorts, getting into the substrate of every field of science and technology, much like math has been for the last few centuries. This brings about a paradigm shift in how we think about science as a whole. This article discusses how a machine learning model used in several physics experiments raises questions about the process of scientific discovery itself. Instead of a well-understood underlying mathematical model, we can use black-box models trained on large amounts of data to make very accurate predictions without completely understanding how those phenomena work. But can we still call that science?
Stepping out of physics and into the realm of social sciences, we must ask the same kind of questions. This BBC article discusses new algorithms that automatically evaluate job applicants based on their CVs. This is a minefield of ethical problems, as several systems of this type have shown statistical biases against minority groups, but the fact remains that the industry is moving towards more and more automation. Next time you get rejected for a job, how would you feel knowing that no human ever read your application?
📚 For learners
Online resources for learners at all levels of expertise: online courses, YouTube videos, blog posts, free eBooks, novel research, and more.
If there’s one resource out there you’ve got to check out, it’s Papers with Code: a giant collection of state-of-the-art research papers, with linked GitHub repositories and datasets! You can easily stay up to date on any subfield of machine learning just by checking their daily leaderboards. You can also download more than 3000 datasets used across thousands of SOTA papers, and either re-implement those models or try your own.
The latest in Machine Learning | Papers With Code
🔨 Tools of the trade
Apps, libraries, online services, and tools, in general, that you can use to solve AI problems.
This week’s highlighted tool is Streamlit, a Python framework for quickly creating data apps. Streamlit is the answer to the question “what if data scientists needed no frontend skills to develop MVPs?”. It’s a pure-Python framework that turns plain old scripts into fully interactive web applications, with integrated data visualization libraries, and a powerful caching mechanism that makes it a breeze to deal with large datasets. All of this without requiring you to step out of Python even for a second.
Streamlit: The fastest way to build and share data apps
🍿 Recommendations
Podcasts, books, TV, and cinema; recommendations that will spark your interest in AI or make you think deeply.
If you don’t know Lex Fridman’s podcast, you’re missing some of the coolest conversations about AI, science, technology, and lots of other interesting topics.
One of the latest episodes that I’ve especially enjoyed was the one with Zev Weinstein. They touched on basically every topic, from the importance of science and philosophy to the meaning of life. If you’re interested in a more AI-focused conversation, check out the one with Charles Isbell and Michael Littman, or the one with François Chollet, creator of Keras. And if you want to have your mind absolutely blown, just listen to Joscha Bach on artificial consciousness. But really, all of them are amazing.
🎤 Word of mouth
Interesting conversations about AI happening all around social media, where you can go listen to others and share your thoughts.
One conversation that caught my eye this week on r/MachineLearning was about how useful it is to implement algorithms and models from research papers. When deciding how to balance your time to learn as effectively as possible, this question matters. On one hand, implementing existing ideas lets you play with state-of-the-art stuff in a way that’s safe, but still challenging. On the other hand, you could be spending that time working on your own ideas. In the end, I personally think a healthy mix of both is the best strategy.
Meanwhile, on Twitter, you might find this thread about handling imbalanced datasets by Vladimir (@haltakov) interesting. He lays out most of the standard methods for dealing with imbalanced problems. Don’t forget to check other people’s comments too, since they add a lot of insight.
Vladimir Haltakov
Machine Learning Interview Question #9 🤖🧠🧐

Machine learning interview questions are back!

❓ What is the problem with unbalanced datasets? Can you give an example? How do you deal with it? ❓

Answer in the replies. Read the rules 👇
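As a tiny illustration of one of the standard methods that come up in these discussions, here’s what random oversampling of the minority class looks like in plain Python (my own sketch, not code from the thread):

```python
import random
from collections import Counter


def oversample_minority(samples, labels, seed=0):
    """Randomly duplicate minority-class samples until all classes
    have as many samples as the largest class."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())

    # Group samples by their class label.
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)

    # Pad each class with random duplicates up to the target size.
    out_x, out_y = [], []
    for y, xs in by_class.items():
        extra = [rng.choice(xs) for _ in range(target - len(xs))]
        for x in xs + extra:
            out_x.append(x)
            out_y.append(y)
    return out_x, out_y


# A 90/10 imbalance: 9 negative samples, 1 positive.
X = list(range(10))
y = [0] * 9 + [1]
Xb, yb = oversample_minority(X, y)
print(Counter(yb))  # → Counter({0: 9, 1: 9})
```

Oversampling is the bluntest tool in the box (the thread also covers others, like class weighting and generating synthetic samples), but it’s a good way to get an intuition for why imbalance hurts naive training in the first place.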
Back on Reddit, and stepping away from technical issues for a minute, I found this thread about dealing with impostor syndrome, especially in a field like Machine Learning where there always seems to be more to learn. The most interesting comments agree on one thing: we are all impostors of sorts. Check the thread; there is a lot of motivation to be found there when your self-esteem is running low.
👥 Community
Interesting people from the AI community that you can follow, from big influencers to tiny accounts, but always people that are worth listening to.
If you’re interested in machine learning in general, and computer vision in particular, you should definitely follow Vladimir (@haltakov). You can expect all sorts of amazing content from him, from cool projects he’s working on, to intuitive explanations of basic AI concepts, to difficult practice questions for your next machine learning job interview.
A newer (in my timeline) but also extremely prolific machine learning advocate on Twitter is Jean (@Jeande_d). He’s always sharing hints and tips to improve your machine learning workflow, whether by suggesting cool tools to use or by diving deeper into some standard techniques you should try out.
☕ Homebrew
The latest bits of my own harvest: Twitter threads, blog posts, projects, videos, and any other piece of content I’m producing.
This week has been kind of slow on my side, at least with respect to social media, but I’ve been working non-stop on a new idea: starting a podcast! I haven’t fleshed out all the details yet, but I’m sure it’s gonna be about AI, and it’s gonna have the kind of deep insights that are too difficult to put on a Twitter thread.
If you’re interested in checking it out, you can subscribe anytime.
The Mostly Harmless AI podcast
👋 That’s it for now. Please let me know what you think of this issue, what you’d like to see more or less of, and any other feedback you want to share. If you liked this newsletter, consider subscribing (in case you haven’t already) and forwarding it to those you love. It’s 💯 free!
Wishing you all a happy Valentine’s Day 💘!
Alejandro Piad Morffis

A weekly newsletter on all things AI, including recent news, hot resources, and interesting conversations happening all around the Internet.

If you don't want these updates anymore, please unsubscribe here.
If you were forwarded this newsletter and you like it, you can subscribe here.
Powered by Revue