
futuribile / curating futures - Issue #12 - #KillYourFuturologist

By Marta Arniani • Issue #12
Welcome back - or welcome tout court - to your favourite newsletter exploring the blind spots of tech. “Futuribile” stands for something that may happen in the future if certain conditions are met. For me, it is a call to work today on the conditions enabling a more desirable future (implying: a fairer collaborative society + a decent planet to live in). I don’t believe in futurology and predictions. They don’t really change anything unless they help us understand where the sweet spots for intervention lie, and formulate the right working questions.
I hope this newsletter helps you develop your own hypotheses and critical thinking, and find your actionable points in the big picture.
Marta Arniani

History of the future
The magazine WIRED turns 25 years old. The intention of founding editor Louis Rossetto was to make a magazine “that feels as if it has been mailed back from the future”, and for most of its history WIRED kept up with expectations. Sometimes it has been prophetic (2003: predicting a “phonecam revolution”), and sometimes the future it envisioned never arrived (1999, next web revolution: sending smells through the internet 💩). In retrospect, what does the evolution of its predictions tell us about our perception of the future?
Looking back at WIRED’s early visions of the digital future, the mistake that seems most glaring is the magazine’s confidence that technology and the economics of abundance would erase social and economic inequality. (…) The first issue began by describing a typhoon no one else could see. Today, everyone sees it, and the magazine reports on the effects and movements of the storm. It still voices plenty of enthusiasm around the edges. But WIRED is no longer simply cheering the imminent arrival of the future. It seems to recognize that behind this patch of turbulence is probably another one. Enjoy the ride.
The future predicted by WIRED is like the neighborhood where we first meet Deckard in Blade Runner: a Western one, with Asian influences. Now that we have seen the limits of machines and speed idolatry (which BTW was formalised in the fascist primordial soup), shall we look at non-Western narratives of the future to regenerate ours? Quartz has produced a couple of very good pieces on Africa: the first presents 30 African innovators who cleverly remix heritage, technology and social innovation; the second discusses efforts to value African philosophy. In a moment when the limits of individualism have shown all the collective harm possible, the African thinking tradition, Afrofuturism and events like Afrotech Fest can be a resource for repurposing the Western approach to tech. After all, Ubuntu, the open source operating system, was named after the African concept of self-realization as a communal process. We can see technology as intersectional, constantly pushed to rethink its meaning and what it can achieve, and thus reinvigorated by ever new and exciting insights:
It is from an Afrofuturist vantage point, in particular, that technology can become something more than just material. Or at least one of the ways this sort of thing can happen—after all, abstract things like legal codes, political institutions, philosophical movements, organized religions, and so on are also technologies, created to extend our reach through time as a means of keeping memories and knowledge alive. Afrofuturism sees these non-material technologies as key, but also includes race and gender as integral. To quote Ytasha Womack, “Afrofuturism views race as a technology, a man-made creation with power imbalances and seeks to heal this idea of separation in humanity.”That means Afrofuturism is all about viewing technology as revelation. At its heart, it’s about addressing imbalances — between groups of people, but also between humans and their environment.
"Purple Noise is a new global feminist movement whose goal is to noisify social media channels. Of course, Purple Noise is not a real feminist movement, it is fake news – and will hopefully soon be as real as other fake news." (from the Purple Noise Manifesto)
Blockchain ta mère
Can blockchain save journalism (read: bring back trust and revenues)? The startup Civil is betting on it by building a news media ecosystem backed by blockchain. So far engaged with small-sized newsrooms, it recently landed a partnership with the Associated Press and launched an ICO, gaining big attention. Why is the story worth following? A couple of hints. It is a very special governance case study (due to the nature of the news sector: plurality, transparency and accountability are even more important here); it is a rare experiment in selling tokens to unaccredited investors; and it presents a collaborative, non-regulatory approach to the classic media crisis, which at the moment is quite focused on dissecting the EU Copyright Law. For the record, the AP already tried - with scarce results - to track its content in 2009. Before the blockchain boom, the idea was to embed a tracking beacon into digital content.
In the history of technology, it is easier to find applications generating the infrastructure than the opposite (planes before airports, light bulbs before electric grids, for instance). So why is the blockchain community focusing so much on platforms? It is a counterproductive myth, warns Union Square Ventures, one of the most influential venture funds investing in the blockchain space:
A common narrative in the Web 3.0 community is that we are in an infrastructure phase and the right thing to be working on right now is building out that infrastructure: better base chains, better interchain interoperability, better clients, wallets and browsers. The rationale is: first we need tools that make it easy to build and use apps that run on blockchains, and once we have those tools, then we can get started building those apps. But the history of new technologies shows that apps beget infrastructure, not the other way around.
The ID and data business
As of 29 September, the European Union officially supports cross-border digital identification. Since Saturday, every EU Member State is obliged to comply with the Electronic Identification, Authentication and trust Services (eIDAS) regulation. As the name suggests, the new rule is composed of two parts:
  1. Electronic Identification (eID), which allows businesses and citizens to prove electronically that they are who they say they are, in order to access services or carry out business transactions online
  2. Trust services (electronic signature, seal, timestamps, but also website authentication certificates and electronic registered delivery service), which make electronic business transactions more secure.
In practice, this means businesses will save a lot of money on identification compliance: the EC estimates savings of €11 billion per year. For citizens - especially expats - it will cut a lot of paperwork, allowing operations like opening a bank account anywhere in the EU without being physically present, and cross-border electronic transactions.
At a higher level, this is a way to reduce the fragmentation of the EU data market, and a public answer to the proliferation of blockchain-based private-led ID schemes. To echo the previous section, the challenge will be finding the right balance between the legal public infrastructure and private-led applications.
Fun fact: for once, Italy (ok ok, together with Germany) is among the only Member States to have completed the notification procedure necessary to qualify a national electronic identification means for mutual recognition. But! The Italian SPID is the first public digital identity system led by the private sector. One could say, why not? After all, in 2016 we sold the health data of 61M Italians to IBM for $150M (value: ~$610.7B, + 60M of government funds) without asking for their consent. #MaiUnaGioia 🤦🏽‍♀️
Somebody said "consent"?
By now you have probably heard of the Facebook data breach affecting up to 90 million people worldwide. “This is the first big case for GDPR,” said Věra Jourová, the European justice commissioner. In August, a research paper from the University of Illinois at Chicago analyzed the many ways that hackers could abuse Facebook’s single sign-on tool:
But perhaps the most staggering finding in the paper is that people don’t necessarily need to have logged into third-party sites with Facebook to be exposed. Say, for example, you logged onto a website with the same email address that’s associated with your Facebook account. If an attacker tries to log onto that same website using Facebook’s Single Sign-On, the researchers found that some sites—including fitness app Strava—will associate the two accounts.
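The linking behaviour described above can be sketched in a few lines of Python. This is a hypothetical site's login logic (not Strava's actual code), assuming the vulnerable pattern the researchers describe: the site matches a Facebook Single Sign-On login to an existing account purely by email address, so anyone who controls a Facebook profile with the victim's email gets into the victim's account.

```python
# Minimal sketch of email-based account linking in a Single Sign-On flow.
# The account store and function names are illustrative, not a real API.

accounts = {}  # email -> account record


def register_with_password(email, password):
    """A user creates a normal password-based account."""
    accounts[email] = {"email": email, "password": password, "fb_linked": False}


def login_with_facebook(fb_profile):
    """Vulnerable SSO handler: links by email alone, without verifying
    that the Facebook identity actually owns the existing account."""
    account = accounts.get(fb_profile["email"])
    if account is not None:
        account["fb_linked"] = True  # silently merges the two identities
        return account
    return None


# The victim signs up the old-fashioned way...
register_with_password("victim@example.com", "s3cret")

# ...and an attacker logs in via Facebook SSO with the same email address,
# landing inside the victim's account without knowing the password.
hijacked = login_with_facebook({"email": "victim@example.com"})
print(hijacked is accounts["victim@example.com"])  # → True
```

The fix, of course, is to require an extra verification step (a password prompt or an email confirmation) before merging a social login with an existing account.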
A British study on dark web marketplaces has estimated that the value of an individual’s Facebook credentials is as little as £3, and that the majority of someone’s online life can be purchased for a grand total of £744.30. 🤑
Commonly, consent takes the form of ticking boxes to access online services. That’s bad design. Actually, consent itself is a distraction on the way to fairer data policies and privacy. In her latest interview with the Harvard Business Review, the philosopher Helen Nissenbaum describes the impossibility of totally transparent consent and calls for a stronger understanding of privacy’s societal value:
Take the Cambridge Analytica case. Very enlightened people complained, “Facebook shared the information without consent.” But was it really about consent? Based on all our behaviors, all the time, I promise you, if they had sought consent, they’d have gotten it. That’s not what outraged us. What outraged us was what Cambridge Analytica was doing, and has done, to democratic institutions and the fact that Facebook was so craven they didn’t care. Consent wouldn’t have mattered; it would have easily been attained.
In the ‘90s, the circulation of pictures of children manufacturing Nike soccer balls raised public awareness, leading the company to review the fairness of its supply chain. Assuming that the power of digital platforms comes from controlling demand, not supply, Ben Thompson suggests that a similar transparency tactic should be applied to the value tech giants extract from data:
The most important thing that regulators could do is force Facebook and Google — and all data collectors — to disclose their factory output. Give users the ability to see not simply what they put in — which again, Google and Facebook do (and which GDPR requires), but also what comes out after all of the inputs are mixed and matched.
AI dizziness
Imagine you want to optimise local public transport while making students sleep more (and thus get better results). Sounds like a good problem to be crunched by an algorithm, right? Yes and no:
The MIT algorithm had done all the city could reasonably ask. It had sorted through more possibilities than any human being could possibly contemplate. And it had come up with a solution no bureaucrat had ever mustered. But it was people who made the final call. People with competing interests and a mish-mash of motivations. This was a fundamentally human conflict, and all the computing power in the world couldn’t solve it.
The tragicomic account and lessons of an MIT algorithm flop that made everybody unhappy. Humanity was the elephant in the room of this experiment (and actually, if you put a real elephant in the room, AI gets confused). 🐘
The point is: public power can’t think of software as a simple means. Software is policy, as the founder of Code for America Jennifer Pahlka explains. She looks at the US administration’s failure to reunite families separated at the border: it was impossible to execute the court order because of… the software used by border agents to register people. She highlights two lessons drawn from her experience:
The first is that implementation is policy. Whatever gets decided at various times by leadership (in this case, first to separate families, then to reunite them), what happens in real life is often determined less by policy than by software. And until the government starts to think of technology as a dynamic service, imperfect but ever-evolving, not just a static tool you buy from a vendor, that won’t change.
The second lesson has to do with the tech community’s mentality towards the Trump administration. A growing number of employee petitions is pushing major companies’ executives to disengage from government agencies involved in the enforcement of certain immigration policies. But stepping back isn’t enough:
Silicon Valley can’t limit its leverage over government to software. Software doesn’t have values. People do. If the people who build and finance software (in Silicon Valley and elsewhere) really want government that aligns with their values, they must offer government not just their software, but their time, their skills, and part of their careers.
Human labour is a key component of this project, “Anatomy of an AI system”, portraying Amazon Echo as an anatomical map of human labor, data and planetary resources. Nerd readers, rejoice. 🤓
AI toolkit
The Machine Intelligence Garage Ethics Committee, chaired by Luciano Floridi, released an AI Ethics framework for companies. It consists of 7 principles, meant to be pragmatic. Most of them are common sense, like communicating clearly, or promoting diversity, equality and inclusion. Nonetheless, common sense is a rare virtue.
The behaviour of machine learning systems can be moody. DeepMind, the AI think tank of Google, articulated in a post a framework for AI safety research. It outlines three avenues: specification (ensuring that an AI system is incentivised to act in accordance with the designer’s true wishes); robustness (ensuring that agents stay within safe limits, regardless of the conditions encountered); and assurance (continuous monitoring and adjustment after deployment).
Does this sound overcomplicated, and are you only now getting interested in AI? Try out A People’s Guide to AI.
That’s all for this round! Share and recommend this newsletter to your entourage if you enjoyed the ride.
I am in Milan the next couple of weeks, get me involved in cool stuff!
Marta Arniani

A monthly newsletter at the intersection of technology innovation and social justice. Insights and news about technology impacts on society and how society can strike back.

If you don't want these updates anymore, please unsubscribe here.
If you were forwarded this newsletter and you like it, you can subscribe here.
Powered by Revue