🔥 The risk from AI; future of marriage; Uber; Apple & BMW hooking up++ Issue 20

August 2 - Issue #20
The Exponential View
We consider: what really is the present AI threat? What will AI ask of us? How does it make us consider other ethical choices? Then on to sexuality and the future of marriage. Apple & BMW delayed hook-up. Uber’s phantom cabs.

It is summer, so I am free-styling a bit…

Dept of the near future
🍎 Steve Jobs on how to build exponential products: ‘start with the customer experience and work back’. Nice video.
😬 Is the world really getting better? Perhaps in a narrow, linear sense but in a deeper meaningful sense it may not be, argues Umair Haque. Mega polemic
The singularity is nowhere near. We need to revise Kurzweil’s estimates of the singularity in light of a more recent estimate of the brain’s complexity; Tim Dettmers argues 2080, not 2035, is a better date. A hard (and, caveat, long) read: a deep blog post covering computational neuroscience, neurogeography & deep learning, written by a (smart) undergraduate student. Not peer-reviewed.
😥 Uber has created a ‘mirage of a market’ in order to simulate a compelling user experience. Vice on this subject is also worth reading. Is this cheating the market or a necessary way to make the service work? Something is working: Uber closes a $1bn round at a $51bn valuation, hitting that valuation milestone faster than Facebook did.
😱 The ad-supported internet faces an existential moment when iOS 9 arrives bringing new ad-blocking capabilities. What trouble will that spell?
🚘 A more detailed look into Apple and BMW’s potential partnership for a car platform. Excellent Reuters reporting.
👪 How to raise kids in exponential times. Peter Diamandis’ view
Dept of AI threats & vegetarianism
You will have read the Future of Life Institute’s warning against an autonomous weapons arms race. Predictably, this was picked up as a warning against Terminator-like killer robots. But as we’ve read in several issues of AED, and as the singularity link above argues, we are likely still decades away from the technology maturing sufficiently to pose a present existential threat.
However, the argument made by the FLI is more nuanced: the rise of autonomous systems reduces the cost of going to war (in terms of the human costs borne by the aggressors). War or violence will thus become increasingly attractive as a policy option.
AI Professor Stuart Russell likens AI to nuclear technology:
From the beginning, the primary interest in nuclear technology was the “inexhaustible supply of energy” … I think there is a reasonable analogy between unlimited amounts of energy and unlimited amounts of intelligence.
I find this analogy useful. It is true the AI cat is, at some level, out of the bag. Drones will gain more autonomous capabilities, and rules of engagement will change to accommodate the perceived increase in the sophistication of the autonomous control logic. And it is also the case that reducing the human and psychological costs of inflicting violence makes it a more appealing tool to use against others.
And we know that supervision, broad transnational agreement and mindfulness can successfully reduce the proliferation of new weapons technologies. The Nuclear Non-Proliferation Treaty is a good example: it has been broadly successful for nearly 50 years. So keeping the discussion alive and front of mind, as the FLI has done, seems like a reasonable step.
The terrifying scenario of an existential threat is very alluring. But it can also mask other interesting ethical questions that arise from the growing processing complexity of artificial intelligence systems.
What happens as AIs get moderately smarter and more human-like? For example, might they achieve some notion of personhood? When might we consider the rights AIs have? At what point might they have the right to vote?
The boundary is already being tested with our ideas of non-human personhood. If we apply personhood to non-humans, wouldn’t AIs be included? And if we don’t apply personhood to non-humans, then why would we expect AIs to treat us well, particularly as their capabilities approach or exceed ours?
🍈 This week, intellectual rabble-rouser Alex Proud declared himself vegetarian and argued that we need to stop eating meat immediately. Good read. One part of the argument is that our better understanding of consciousness blurs the special treatment we confer on humans over, say, cows. It is buttressed by advances in AI and the rise of a new class of non-human persons whose computational complexity may exceed ours in our lifetimes. Would we want those AIs to eat us or not? (Or just hope for the best?)
Dept of sexuality and gender
🎩 As gay marriage becomes legalised, do the arguments in its favour not also point to the legalisation and acceptance of polygamy? Indeed, if we enter a transhumanist age, can traditional notions of marriage make sense if we live 1,000 years or more?
👭 Sexuality is mutable, not fixed, and it is time to abandon firm notions of it, argues Anita Diamond.
The first trans-only modeling agency has launched in Los Angeles.
👏 Old but instructive: this interactive app lets you explore gendered language used in 14m reviews of University academics on RateMyProfessor. (Prepare to have your assumptions about students reinforced.)
Long reads for the beach
I hope you are all getting some time off relaxing. Here is a smorgasbord to enjoy with the long leisurely days of summer.
👑 The sad tale behind the .io domain name. A well-written exploration of this cool TLD and its tumultuous, racist colonial heritage.
💵 Goldman Sachs has become a venture powerhouse. It isn’t just chasing returns, but learning how to innovate.
🌳 Want to get smarter? Climb a tree. (Perfect holiday activity.)
👽 There may be more than 1,000,000,000 Earths in our galaxy alone.
Mother robots build kiddy robots in robotic evolution experiment.
Also watch dumb little robots self-organise. (Video)
🌈 Utterly stunning visual depiction of machine learning. Takes 5-10 minutes to appreciate.
💻 Fun, simple introduction to Moore’s Law. (It is a cartoon.)
Why vertical farming is taking off. (Nice simple intro.)
Google is doing great work getting deep learning to run on-device on mobile phones. This post covers machine translation, but there is excellent work being done with on-device deep-learning-based machine vision too.
China’s largest coal supplier saw a 25% annual decline in sales.
Impossible EM drive propulsion confirmed by German scientists. Pluto here we come. Again. But faster.
What you built and wrote
Collaborative open computer science is here. Long-time reader Samim Winiger has put together GitXiv, a mash-up between computer science papers released on arXiv and the open-source projects supporting them on GitHub.
Reader Tim Bradshaw writes in the Financial Times that wifi-based smart home devices may be the gateway to more robot-like domestic devices.
Dept of thanks
Thanks to Conor Ogle, John Hendersen, Tina Asgari, Samim Winiger, Tim Bradshaw & Nick Perrett for recommending stories.
Did you enjoy this issue?
Carefully curated by The Exponential View with Revue.
If you were forwarded this newsletter and you like it, you can subscribe here.
If you don't want these updates anymore, please unsubscribe here.