
futuribile / curating futures - Issue #25: Who If Not We Should at Least Try to Imagine the Future of All This?

November 7 · Issue #25
Aloha,
Through a series of chance circumstances, in 2012 I came into possession of a poster bearing the sentence in this issue's headline. It has been displayed in every place I have lived ever since. It comes from a book published for the 2004 European visual art programme "Think forward". In the words of the main curator: "If not we, then others will, and we run the risk that such a future would not necessarily be shaped around our own hopes and dreams."
In days of good news from Toronto* and bad news from Barcelona**, who if not we can join forces in orchestrating alternatives? The need to organise collective imagination is my favourite takeaway from the Decode conference and, well before that, the fundamental motivation of this newsletter. Without a larger vision, tools will be tools and big tech the default choice.
Marta Arniani
*Sidewalk Labs had to considerably scale down its surveillance-lab ambitions. An open letter from the Waterfront Toronto Board Chair outlines key amendments to the plan. They include considerably reducing the area of the "smart" experiment (from 190 to 12 acres) and dropping the Urban Data Trust proposal (which was quite à la Google) in favour of compliance with existing and future regulatory frameworks.
**Spain's central government made a big push against the technologies used by Catalan independence supporters. It made Microsoft-owned GitHub remove the APK of Tsunami, an app for organising political protests, and banned the use of decentralised identity-management systems. Cherry on top, it ruled that it can shut down digital services without a court warrant by invoking a "threat to public order".

"Learning to see" is an ongoing series of works that use state-of-the-art machine learning algorithms to reflect on ourselves and how we make sense of the world.
"Learning to see" is an ongoing series of works that use state-of-the-art machine learning algorithms to reflect on ourselves and how we make sense of the world.
Environmental intelligence
What if an AI acting as "environmental manager" were to transform the planet regardless of human interests?
“A central argument made by those encouraging the uptake of AI, is that data driven systems can depoliticize or neutralize decision making. Extended to the context of ecosystems, this could imply that ecological agendas are prioritized over human goals (and over the status quo where human systems of production are preserved at the expense of everything else).”
Asunder proposes and simulates future landscape design strategies, like cities being relocated or simply removed, forests planted, or lithium mines transferred to technological production sites. It would be somehow poetic to see artificial and natural intelligence joining forces to get rid of us. And, I suspect, a bit trash. Like a sharknado utilising surveillance cameras to detect humans. 🌪️🦈🤖
This is where you may want to be introduced to the concept of “Deep adaptation”:
The purpose of this conceptual paper is to provide readers with an opportunity to reassess their work and life in the face of an inevitable near-term social collapse due to climate change. The approach of the paper is to analyse recent studies on climate change and its implications for our ecosystems, economies and societies, as provided by academic journals and publications direct from research institutes. That synthesis leads to a conclusion there will be a near-term collapse in society with serious ramifications for the lives of readers. The paper reviews some of the reasons why collapse-denial may exist, in particular, in the professions of sustainability research and practice, therefore leading to these arguments having been absent from these fields until now.
(thanks Zoe for the suggestion!).
For the restless pragmatists, here is a list combining a variety of useful datasets (including Earth system, socioeconomic, and citizen science data) to create analytical insights and knowledge for a sustainable future.
AI in 2019: A Year in Review
Shots
#1 A critique of the untouchable Mazzucato.
#2 Evidence that the pedestrian in the self-driving Uber crash would probably have lived if the braking feature hadn't been shut off (or: why we shouldn't completely abdicate to machines).
#3 Recommendation systems are slow to adapt to preference changes, creating a ‘barrier to exit’ in the process of changing your mind: a paper (and here a recap) on human self-determination and the power dynamics between humans and algorithms.
#4 Try out the MIT courtroom algorithm game to experience how hard it is to compute fairness (so maybe it shouldn't be done?).
#5 Saved by fashion: the Adversarial T-shirt is the first wearable adversarial example to successfully evade detection of moving persons. 😎
#6 How artists and fans stopped facial recognition from invading music festivals - piece by Tom Morello and Evan Greer.
#7 A good reason to rebel against slackification!
#8 Black Software: The Internet & Racial Justice, from the AfroNet to Black Lives Matter (a book, in case you want to give me a Christmas present)
#9 A 148-year-old news outlet just gained nonprofit status: maybe firmly positioning themselves as a public service is the future of newsrooms? 🙏🏽
#10 Data Voids: where missing data can easily be exploited by manipulators eager to expose people to problematic content, including falsehoods, misinformation, and disinformation.
Oh, dear!
———————————————————————————-
That's all for this time! Bring new subscribers on board to support my work, and hit reply to send feedback.
Aloha,
Marta