Mangwanani! That’s “good morning” in Shona.
Twitter was originally a text-based social media network.
In its early days, users had to rely on third-party apps like Tweetphoto to upload and share their images. Tweetphoto later became Plixi and was eventually acquired by Lockerz.
Over time, Twitter has been re-engineered to handle images natively. But every once in a while, its users experience the kind of glitch that reminds everyone of Twitter’s origins as a text-based social media network.
Doesn’t see colour
Twitter timelines are constrained spaces, hence the 280-character limit. To fit images of different dimensions into those timelines, Twitter uses previews.
And to determine what part of an image should be contained in the preview, Twitter uses, or should I say used, a cropping tool that works with an algorithm called a saliency model.
According to Twitter:
“Saliency models are trained on how the human eye looks at a picture as a method of prioritizing what’s likely to be most important to the most people.”
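To make that concrete, here’s a toy sketch of how saliency-based cropping can work. This is not Twitter’s published pipeline; the function name and the numbers are mine, and it assumes some upstream model has already produced a per-pixel saliency map. The crop then simply becomes a search for the preview-sized window that captures the most saliency.

```python
# A minimal sketch of saliency-based cropping, NOT Twitter's actual pipeline.
# It assumes a model has already produced a per-pixel saliency map for the
# image (higher value = more likely to draw the eye).
import numpy as np

def crop_by_saliency(saliency: np.ndarray, crop_h: int, crop_w: int):
    """Slide a crop_h x crop_w window over the saliency map and return
    the (top, left) offset whose window captures the most total saliency."""
    h, w = saliency.shape
    best_score, best_pos = -1.0, (0, 0)
    # An integral image makes each window sum O(1) instead of O(crop_h*crop_w).
    integral = saliency.cumsum(axis=0).cumsum(axis=1)
    padded = np.pad(integral, ((1, 0), (1, 0)))
    for top in range(h - crop_h + 1):
        for left in range(w - crop_w + 1):
            bottom, right = top + crop_h, left + crop_w
            score = (padded[bottom, right] - padded[top, right]
                     - padded[bottom, left] + padded[top, left])
            if score > best_score:
                best_score, best_pos = score, (top, left)
    return best_pos

# Toy example: a 100x200 "image" whose saliency peaks near the right edge,
# so a 100x100 preview crop should slide right to keep the salient region.
rng = np.random.default_rng(0)
saliency = rng.random((100, 200)) * 0.1
saliency[30:70, 160:200] += 1.0  # the "face" the model finds salient
print(crop_by_saliency(saliency, 100, 100))  # -> (0, 100)
```

Whatever the model decides is “salient” wins the preview, which is exactly why biases in the model become biases in the crop.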
But Twitter users started noticing that the model, while highly effective as a cropping mechanism, had some significant biases.
In one widely shared experiment, a Twitter user posted two photos of then-Senate Majority Leader Mitch McConnell and Barack Obama. In one image, Obama was at the top; in the other, he was at the bottom. However, Twitter’s algorithm cropped both images to show only McConnell in the previews, indicating that it found him more “salient” than Obama.
When experiments like this came to Twitter’s attention, the company ran experiments of its own and came up with two key findings:
1. Demographic differences: In a lineup of both black and white people, the crop often favoured white people over black people. And in a lineup of both men and women, it often picked men ahead of women.
2. Male gaze: The cropping tool would pick a woman’s chest or legs as a salient feature.
#2 is exactly why we need more women in tech. But I digress.
Following its findings, Twitter has now suspended the use of its cropping tool.
It’s a reminder of why, because of the bias of today’s most widely used facial recognition tools, it’s too early to deploy the technology in sensitive use cases like law enforcement.
In searching for a solution, I came across this article on making machine learning (ML) algorithms “fairer”.
“Researchers have developed technical ways of defining fairness, such as requiring that models have equal predictive value across groups or requiring that models have equal false positive and false negative rates across groups.”
In the example of the Twitter saliency algorithm, there was a 4% difference in favour of white individuals; that difference should be reduced until it tends to 0%.
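For the curious, here’s what measuring those gaps can look like in practice. The data below is synthetic and the numbers are made up for illustration; the point is just that both the selection-rate (demographic parity) gap and the false positive/negative rate gaps the article describes are simple quantities you can compute, and then try to push toward zero.

```python
# Illustrative only: synthetic predictions, not Twitter's data. Shows the
# two fairness definitions the quoted article describes, on a binary task.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
group = rng.choice(["a", "b"], size=n)   # protected attribute
y_true = rng.integers(0, 2, size=n)      # ground-truth label
# A deliberately skewed model: slightly more positive for group "a".
p_pos = np.where(group == "a", 0.54, 0.50)
y_pred = (rng.random(n) < p_pos).astype(int)

def rates(g):
    mask = group == g
    yt, yp = y_true[mask], y_pred[mask]
    selection = yp.mean()                # demographic parity term
    fpr = yp[yt == 0].mean()             # false positive rate
    fnr = (1 - yp[yt == 1]).mean()       # false negative rate
    return selection, fpr, fnr

sel_a, fpr_a, fnr_a = rates("a")
sel_b, fpr_b, fnr_b = rates("b")
# The ~4-point selection-rate gap is the kind of number you'd drive to 0.
print(f"parity gap: {sel_a - sel_b:+.3f}")
print(f"FPR gap:    {fpr_a - fpr_b:+.3f}, FNR gap: {fnr_a - fnr_b:+.3f}")
```

Measuring the gap is the easy part; closing it without degrading the model is where the real work lies.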
The challenge for the entire ML community is, therefore, to find ways to do this without compromising accuracy.
But the challenge for the African ML community, in particular, is to lead this development or at least play an active role.
Why can’t the ML equivalent of Lockerz be developed here?
After all, Africa is the continent with the largest population of black people, so this algorithmic unfairness affects us more than anybody else.