
Mostly Harmless AI
By Alejandro Piad Morffis • Issue #8
🖖 Welcome to another issue of the Mostly Harmless AI newsletter.
It’s been a while since the last issue, and the reason is that I’ve been struggling to find the motivation to write something worth sharing. Don’t get me wrong, I have a ton of ideas (even, I dare say, one or two good ones), but most of them are better shaped for Twitter threads. Plus, there are already LOTS of great newsletters out there, both for long-form and short-form topics.
So I decided that if I wanted to give this newsletter a shot, it had to be something slightly more personal, something that could only come from me. Thus, I will be focusing more on sharing my journey: the things I’m working on, the problems I’m concerned about. Of course, all of this is tightly related to AI, which remains my primary (well, secondary) love.
I hope these topics are something you find useful or at least mildly interesting. Most of it is mostly harmless, anyway.

🤬 Language models are full of biases
A couple of threads ago I talked about computational language models. These are, in a nutshell, compressed representations of human language that assign a likelihood to every possible sentence. As a black box, you can imagine a language model as a kind of Python function that receives a sentence and outputs a number from 0 to 1: the higher the number, the more likely that sentence actually “exists”. They are used anywhere we need a computer to deal with natural language: automatic translation, speech-to-text, search engines, and so on. There are many ways to implement something like this, and you can take a look at this thread for some ideas:
Alejandro Piad Morffis
Hey, today is #MindblowingMonday 🤯!

I want to tell you about Language Models, a type of machine learning techniques that are behind most of the recent hype in natural language processing.

❓ Want to know more about them? 🧵👇
Anyway, what I want to talk about now is not so much the technical details, but rather some problems that arise from the deployment of huge language models by big tech companies.
You see, language models are often trained in an unsupervised (or self-supervised) fashion, fed with massive chunks of text mined from the Internet. This is a very cool idea in principle, because it gives us access to a vast collection of human language where basically everything we know about can be found. GPT-3 and BERT are just two examples of very different language models trained on huge amounts of text (they are in completely different leagues, though, in terms of training data).
So, if you ask one of these language models the probability of a sentence like “Leonardo da Vinci painted the Mona Lisa”, it should give it a very high score. However, if you ask it about “Alejandro Piad painted the Mona Lisa”, the score should be close to zero. The reason is very simple: there are far more examples of the first sentence than the second on the Internet. The model doesn’t really know who painted the Mona Lisa; it just knows that many more people think it was Leonardo (keep in mind, though, that both you and I also reason like this a lot of the time…)
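Just to make that “Python function” analogy concrete, here is a minimal sketch (an illustration, not a production recipe) of how you could score those two sentences with an off-the-shelf model, using GPT-2 through the transformers library I mention below. I’m using the average per-token log-likelihood as the score, which is one reasonable choice among several.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_score(sentence):
    # Average log-likelihood per token: higher means "more plausible"
    # according to the model, which is not the same as "more true".
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    return -outputs.loss.item()

print(sentence_score("Leonardo da Vinci painted the Mona Lisa."))
print(sentence_score("Alejandro Piad painted the Mona Lisa."))

The first sentence should come out with a noticeably higher score, not because the model knows any art history, but because it has seen that pattern far more often.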
Now, if only the Internet were a place where everything true was massively more common than everything false. But it isn’t. It is full of conspiracy theories and fake news. So we must be careful about using the frequency with which something appears on the Internet as a proxy for truthfulness.
The big problem, though, comes not from purposefully misleading stuff, but from the subtle biases that creep into all of our conversations. For example, what happens when you ask a language model about “He is a programmer” vs “She is a programmer”? Naturally, both sentences should be exactly equal in terms of likelihood. But a carelessly trained LM will very likely give a higher score to the first one. Why? Because the Internet has many more examples of programmer boys than programmer girls!
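You can probe this yourself in a few lines. Here is a small sketch, again with transformers and this time with BERT purely as the example model, that asks a masked language model to fill in a blank and prints its top guesses for the same sentence with “he” versus “she”; the two lists of occupations tend to look quite different.

from transformers import pipeline

# A fill-mask probe: let the model complete the same sentence for both
# pronouns and compare which occupations it considers most likely.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

for pronoun in ("He", "She"):
    print(f"{pronoun} works as a [MASK].")
    for prediction in unmasker(f"{pronoun} works as a [MASK]."):
        print(f"  {prediction['token_str']}: {prediction['score']:.3f}")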
Why does this matter? In some applications, this kind of bias pops up immediately. For example, if you Google-translate a long paragraph including “she is a programmer” back and forth between English and a language without grammatical gender, you can get “he is a programmer” back. But these are not the worst cases. You can use a language model with these biases as an internal component of another system, say, to evaluate candidates for job applications, or to assess the reliability of a legal claim, or to estimate whether a person will default on a mortgage, or to pre-screen papers submitted to a research journal. In those cases, you may have no idea how these biases are messing with the final prediction. As a very simplistic example, you could be rejecting women applying for programming jobs more often than men because their profiles have less “fit” with the job description.
So here comes the mandatory discussion about “but that’s the real data!”. Yes, it is. And that doesn’t make it right. Reality is full of biases, full of wrong decisions, full of things we want to change. Letting those things creep into our models of reality unnoticed is a recipe for keeping ourselves in the place we are today, not in the place we want to be.
Now, there’s light at the end of the tunnel. Ethics and fairness are a big issue in the AI research community today. Some of the most brilliant minds in our field are working on the detection and mitigation of these problems. The solution is not to vilify these technologies and stop using them altogether. Language models are a very powerful tech that can boost some of the most interesting and useful applications of the next decade. The solution is to understand their limitations and deploy them with the necessary care in those scenarios where they’re most likely to cause harm.
📚 For learners
If you want to learn more about some of the biggest issues in AI ethics today, there is a wonderful book, The Alignment Problem, by Brian Christian. It goes way beyond language biases, into the realm of reinforcement learning and the value alignment problem.
The Alignment Problem: Machine Learning and Human Values by Brian Christian
🔨 Tools of the trade
If what you’re looking for is to play with pre-trained language models, use them in your own app, or fine-tune them on your own dataset, then what you want is the transformers library by huggingface.co.
Hugging Face – The AI community building the future.
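To give you a taste, here is a minimal sketch of the usual starting point: load a pretrained checkpoint with its matching tokenizer, and from there you can either use the model as-is or fine-tune it on your own labeled data (the example sentence and the two-label setup below are just placeholders).

from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Pretrained checkpoint plus matching tokenizer; the classification head
# on top is freshly initialized, so it still needs fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

inputs = tokenizer("This newsletter is mostly harmless.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits)  # meaningless until the model is fine-tuned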
👥 Cool people to meet
This week I want to recommend that you follow Talia. She’s working on some of the coolest research at the intersection of programming languages and software engineering. Plus, she is incredibly energetic and overall a very nice person to talk with about some of the most difficult topics we are facing today.
Another cool friend I made recently is Prashant. We’ve been talking a lot about some of the most intriguing philosophical questions around AI: consciousness, free will, you name it. He’s very active on Twitter and shares a lot of interesting bits and resources about machine learning. Plus, he loves books!
☕ Homebrew
Finally, this week I want to share with you a small project I did a few months ago: auditorium. It’s a slideshow generator based on the awesome reveal.js that adds a Pythonic layer with which you can craft cool interactive slideshows in pure Python code. It’s a bit rough around the edges, though, so it would be very cool if you could take it for a spin and let me know what you think!
👋 That’s it for now. Please let me know what you think of this issue, what you would like to see more or less of, and any feedback you want to share. If you liked this newsletter, consider subscribing (in case you’re not already) and forwarding it to those you love. It’s 💯 free!