Leading investors A16Z argue we aren’t in a bubble. We investigate how we should treat our robots and AI helpers, and get a glimpse of how these systems dream and understand the world. A bit AI-heavy this week - hope you enjoy.
How should you treat your artificial intelligence programs & assistants? Is it ok to be mean to them? I’ve been playing around with Amy, an AI that helps you manage your calendar. I found that when dealing with her I was more abrupt than I might be with a human. Why was this? Was I testing the limits of the programming? Was I abrupt because I knew she/it was code?
This week, X.AI extended access to Amy for me, and I have been using her repeatedly to schedule calendar appointments. Amy has been designed to be very polite and helpful. In return, I am respectful, polite and business-like back to her. The affordances of the design (smart, helpful conversation) have anthropomorphized her. I am nudged (by the product design) and encouraged (by the quality of interactions) to treat her like a person.
But why am I? I’m not polite or deferential to microwave ovens or toasters, or even mobile phones. Call this the emerging area of anthroporoboethics: what is the ethical framework for humans dealing with AIs?
In a previous week, we touched on how deep learning models were helping us understand how we perceive reality. (See algorithms of the mind.)
This week, Google’s machine vision group blew us away with the stunning images pulled from inside a deep learning network, which summarised, in some sense, the deeper awareness of these primitive neural networks.
🐹 The word abstractions generated by word2vec (a shallow-learning algorithm) are amazing. They draw out, via simple vector algebra, the underlying semantic relationships between similar concepts fed into the algorithm (e.g. “politics - lies = Germans”). First half recommended; the second half is about implementation.
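The “simple algebra” really is just adding and subtracting vectors and looking for the nearest neighbour. Here’s a toy sketch of that analogy trick - the hand-made 3-dimensional vectors below are illustrative assumptions, not real word2vec embeddings (real ones are learned from huge text corpora and have hundreds of dimensions):

```python
# Toy illustration of word2vec-style analogy arithmetic.
# NOTE: these vectors are invented for demonstration; real embeddings
# are learned, high-dimensional, and come from a trained model.
import numpy as np

embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.5, 0.9, 0.0]),
    "woman": np.array([0.5, 0.1, 0.0]),
    "queen": np.array([0.9, 0.0, 0.1]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' via nearest neighbour of b - a + c."""
    target = embeddings[b] - embeddings[a] + embeddings[c]
    # Exclude the query words themselves, as word2vec tooling typically does.
    candidates = {w: v for w, v in embeddings.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("man", "king", "woman"))  # -> "queen" with these toy vectors
```

With real embeddings the same arithmetic surfaces relationships the model was never explicitly taught - capitals to countries, verb tenses, and the odd joke like the one above.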