
futuribile / curating futures - Issue #20 / The devil wears optimisation

May 30 · Issue #20
Aloha, I have to admit that amid the increasing appropriation and marketing of the “tech for good” debate by big tech, “tech for evil” approaches become very appealing. For instance, we can explore a desirable evolution of technology’s presence in our lives by speculating on the negative effects of human-computer interaction, as the CHI4Evil research workshop did recently. Sometimes the answer to such reverse-engineering efforts is straightforward: this paper concludes that the only defense against killer AI is not developing it. “If AI systems are effective, pressure to increase the level of assistance to the warfighter would be inevitable. Continued success would mean gradually pushing the human out of the loop, first to a supervisory role and then finally to the role of a “killswitch operator” monitoring an always-on LAWS.” Military-Armageddon flavour aside, it is quite evident that automating our decision-making processes will make them less aligned with the human notion of collective interest. We wrote about it here.
Meanwhile, last week 42 countries signed the OECD Principles on Artificial Intelligence, which have the merit of being the first set of intergovernmental, international policy guidelines on the hot topic of “good/trustworthy/fair/insert-keyword-here” AI. As with many documents on the topic, action points are missing. To end on a positive note, here you can find a clever and well-documented reflection on what “AI for social good” could be, and how it could be linked to the sustainable development goals (so that we optimise some international effort).
Marta Arniani

Samsung researchers have released a model that can generate faces in new poses from just a single image/frame. I guess we'll see a new season of historic deep fakes.
The banality of FemTech
FemTech attracted $1bn in funding between 2015 and 2018, and projections see a rise to $50bn by 2025. FemTech targets women, but instead of pink-coloured products it provides solutions for female health (with a strong focus on fertility). We have quickly ended up in a situation where “male investors talking about periods” is flagged as a success. It is a fact that women are invisible in many treatments, and that certain tools haven’t evolved in ages because they are solely for the use of women and uninteresting to male investors and innovators. Bridging the gender gap in health is a priority. Nonetheless, my take is that seeing this as a battle of the sexes narrows down a huge set of opportunities. I am much in favour of FemTech if it means improving - by diversifying - health tools and data sets, thus improving the quality of care. Or if it means generating a debate about how much health in the tech sphere has been narrowed down to self-tracking. FemTech has a big opportunity to handle sovereignty over our bodies differently from what has been done so far. To fight the overarching culture of quantification. We have become used to thinking that quantifying = solving, and it is undeniable that this is very important on the research side for prevention. But on the citizen side, quantifying says nothing about the quality of what we quantify, or whether we are left alone dealing with it. If FemTech becomes another means to extract some more data and frame femininity within the ultimate goal of pregnancy, investors can save their money.
Drawing the line
To any operation based on data, who we are is static and predictable. In the eyes of the organisations interacting with us through data – from public powers to private service providers – our contradictions and evolutions are just background noise. Ghanaian-American philosopher Kwame Anthony Appiah speaks of a ‘Medusa Syndrome’, writing that ‘what the state gazes upon, it tends to turn to stone.’ This is the only way a state has of making its people legible or, in other words, of ‘watching’ its population. Check out this great read about the implications of top-down limits on being our true and fluid selves.
Turning away, building a sort of user-driven splinternet, is a growing trend. Kickstarter co-founder Yancey Strickler suggests the internet is becoming a space of dark forests. Real conversations are retreating into more private channels:
“These are all spaces where depressurized conversation is possible because of their non-indexed, non-optimized, and non-gamified environments. The cultures of those spaces have more in common with the physical world than the internet. (…) The dark forests grow because they provide psychological and reputational cover. They allow us to be ourselves because we know who else is there.”
In the quest for a sweet spot between operating in public and preserving personal space, personal data protection has come to symbolise privacy. But is it enough? This article suggests that Facebook is trying to make the word “private” meaningless:
“Facebook does not need to see the content of what people are saying in order to advertise to them. The metadata — who, or what (as in a business), you’re talking to, and even where you are or what time the conversation is taking place as it comes together with other pieces of information — provides more than enough information to make a very educated guess about what you’re interested in, to the point that knowing specifically what you are saying adds almost nothing.”
Here you have a great account of what an identity deduced from your online presence can look like. And here, of all the busy conversations your iPhone has with third parties while you sleep.
Beware of who owns the room you are retiring to.
The locomotion
Matthew Brennan
Chinese phone cradle for boosting your phone's daily step count. Some insurance companies in China allow people who consistently reach a certain daily step count to get discounted health insurance premiums.
Learn how the BBC successfully drove equal gender representation in newsrooms with the 50:50 project. #ItSeemsImpossibleUntilItsDone 🤙🤙
Ever thought about the environmental footprint of playing Despacito on YouTube or watching Game of Thrones?
How bold educational programmes are making Finland stand in the misinformation battle.
DJ Steve Aoki’s new singularity-fan comic book. For more education, check out We need to talk, AI, a comic essay on artificial intelligence.
Meet Eva, the Canadian cooperative alternative to Uber.
“Lisa comes in for an interview. All the interviewers judge her objectively, based on her qualifications and the candor of her responses. This leaves her so confused that, on the way out of the office, she accidentally walks into traffic and dies.” (From Examples of Toxic Femininity in the Workplace)
Oh, dear!
"Will you be my girlfriend/boyfriend? Yes / No / I am not a robot"
That’s all, thanks for reading this monthly excursion into tech culture and social impact. Hit reply for feedback, forward to share.
If you don't want these updates anymore, please unsubscribe here.
If you were forwarded this newsletter and you like it, you can subscribe here.
Powered by Revue