💪 From the big boys
Backchannel ran a rare piece on how Apple uses machine learning. It states that a 200MB software package runs on the iPhone encompassing “app usage data, interactions with contacts, neural net processing, a speech modeler and a natural language event modeling system”. I’ve held the view for a while now that today’s AI techniques and infrastructure will re-open a class of historically intractable problems while also enabling us to rethink how products and features should be designed. Apple seem to think the same: “Machine learning is enabling us to say yes to some things that in past years we would have said no to. It’s becoming embedded in the process of deciding the products we’re going to do next.”
Salesforce announced their internal umbrella AI initiative, modestly called Einstein, which will go on to power many of the company’s cloud services, as well as expose AI tools to end users. The team of 175 data scientists includes talent from the acquired startups MetaMind, PredictionIO and RelateIQ. The company’s flagship event, Dreamforce, will attract 170k people to SF next week.
Six of the most powerful technology companies have set up the Partnership on AI, a non-profit aimed at advancing public understanding of AI and formulating best practices on the challenges and opportunities within the field. An important catalyst to this end will undoubtedly be the continued development of open source technology, which Seldon’s founder articulates in this piece.
🌎 On the importance and impact of AI on the World
Stanford’s 100 Year Study on AI published its first report. It finds “no cause for concern that AI is an imminent threat to humankind. No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future”. From a public policy perspective, it recommends that policymakers:
- Define a path toward accruing technical expertise in AI at all levels of government.
- Remove the perceived and actual impediments to research on the fairness, security, privacy, and social impacts of AI systems.
- Increase public and private funding for interdisciplinary studies of the societal impacts of AI.
👍 User-friendly AI
UC Berkeley announced a new Center for Human-Compatible AI to study how AI systems used for mission-critical tasks can act in ways that are aligned with human values. One enabling technique is inverse reinforcement learning, where an agent (e.g. a robot) learns a task by observing human demonstrations and inferring the reward they imply, rather than optimising a hand-coded objective on its own.
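To make the idea concrete, here is a minimal, hypothetical sketch of inverse reinforcement learning via feature matching on a toy 5-state chain MDP. Everything below (the states, one-hot features, expert policy and learning rate) is invented for illustration and is not the Center’s method: the agent alternates between solving for a policy under its current reward estimate and nudging the reward weights toward the expert’s observed behaviour.

```python
# A minimal, hypothetical sketch of inverse reinforcement learning (IRL)
# via feature matching on a toy 5-state chain MDP. Illustrative only.
import numpy as np

N_STATES, GAMMA, HORIZON = 5, 0.9, 12
ACTIONS = (-1, +1)  # step left / step right along the chain

def step(s, a):
    return min(max(s + a, 0), N_STATES - 1)

def phi(s):
    f = np.zeros(N_STATES)  # one-hot state-indicator features
    f[s] = 1.0
    return f

def feature_expectations(policy, start=0):
    """Discounted feature counts accumulated by rolling out a policy."""
    mu, s = np.zeros(N_STATES), start
    for t in range(HORIZON):
        mu += GAMMA ** t * phi(s)
        s = step(s, policy[s])
    return mu

def solve_policy(w, iters=100):
    """Value iteration under reward R(s) = w . phi(s), then act greedily."""
    V = np.zeros(N_STATES)
    for _ in range(iters):
        V = np.array([max(w[step(s, a)] + GAMMA * V[step(s, a)]
                          for a in ACTIONS) for s in range(N_STATES)])
    return [max(ACTIONS, key=lambda a: w[step(s, a)] + GAMMA * V[step(s, a)])
            for s in range(N_STATES)]

# The "human" expert always walks right, towards an implicit goal at state 4.
expert = [+1] * N_STATES
mu_expert = feature_expectations(expert)

w = np.zeros(N_STATES)  # unknown reward weights, to be inferred
for _ in range(50):
    mu_agent = feature_expectations(solve_policy(w))
    w += 0.1 * (mu_expert - mu_agent)  # nudge reward toward expert behaviour

print(np.round(w, 2))  # largest positive weight lands on the expert's goal
```

The key point the sketch illustrates: the reward function is never specified by hand; it emerges from matching the agent’s behaviour to the human’s.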
Designer Ines Montani makes the case for how front-end development can improve AI. Music to my ears! I take the view that although AI can be used to solve fascinatingly complex problems, wrapping a service with an API and leaving others to dream up the most powerful use cases isn’t the path to building a valuable company. Instead, one should productise technology with user-centered design as a top priority. Ines walks through how design can “improve the collection of annotated data, communicate the capabilities of the technology to key stakeholders and explore the system’s behaviours and errors.”
💻 AI running at scale
Ten years after the original release of Google Translate, the Google Brain team announced a new state-of-the-art Neural Machine Translation system (paper here). The system feeds the entire sentence to be translated into a recurrent neural network, instead of breaking the input sentence into separately translated words and phrases. At each step, the network attends to a weighted distribution over the encoded input words (e.g. Chinese words), focusing on those most relevant for generating the next output word (e.g. an English word). Of note, the Chinese-to-English Google Translate service is 100% machine translation based, producing 18 million translations per day!
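The attention step is easy to see in miniature. Here is a toy numpy sketch of dot-product attention over encoder states; the dimensions and scoring function are assumptions for illustration, not Google’s actual GNMT architecture:

```python
# A toy sketch of attention: the decoder scores every encoded source word,
# softmaxes the scores into a distribution, and takes a weighted sum as
# context. Tensor shapes and the dot-product scorer are illustrative only.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
encoder_states = rng.normal(size=(6, 8))  # 6 source tokens, hidden size 8
decoder_state = rng.normal(size=(8,))     # decoder state before next word

scores = encoder_states @ decoder_state   # relevance of each source word
weights = softmax(scores)                 # attention distribution (sums to 1)
context = weights @ encoder_states        # weighted sum of encoder states

# 'context' is combined with the decoder state to predict the next target
# word; 'weights' shows which source words drove that prediction.
print(np.round(weights, 3), context.shape)
```

The distribution in `weights` is exactly the “weighted distribution over the encoded input” described above: it lets the decoder look back at the most relevant source words for every word it emits.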
🔬 AI in healthcare and life sciences
Google DeepMind announced a research partnership with the Radiotherapy Department at University College London Hospitals NHS Foundation Trust. The project focuses on improving the process of segmenting healthy tissue from cancerous tissue in the head and neck region, so that radiotherapy causes less collateral damage to non-cancerous regions.
Slightly more left field, Elon Musk announced that he’s made progress on a design for a neural lace. This would effectively serve as an interface between our brains and machines, to avoid even the benign scenario in which humans become “house cats” in the age of superintelligent AI.
🚗 Department of Driverless Cars
I attended NVIDIA’s GPU Technology Conference (GTC) in Amsterdam last week and was positively taken aback by the extent of the company’s investment in driving autonomy. Jen-Hsun Huang, who founded the company in 1993 and still leads as CEO, spent the better part of his 1.5-hour opening keynote talking through the integrated hardware and software platform NVIDIA is launching to power autonomy. These products and services are pluggable such that the company’s 80+ partners can choose what they want to buy vs. build. NVIDIA is clearly positioned to provide the shovels for the self-driving gold rush, much like Google’s TensorFlow enables Google to sell more compute infrastructure time. Announcements included:
- DRIVE PX 2, an in-car GPU computing platform available in three configurations to enable automated highway driving (1 GPU @ 10 watts), point-to-point travel (2 mobile processors + 2 GPUs) or full autonomy (multiple PX 2 systems).
- DRIVEWORKS, a software development kit that provides a runtime pipeline framework for environment detection, localisation, planning and a visualisation dashboard for the passenger.
- DGX-1, a deep learning “supercomputer” used to train the multiple networks running on the DRIVE PX 2.
- The BB8 self-driving car (watch this video), which learned to drive in both rainy and dark conditions, take hard corners, navigate around cones and construction sites, and drive without needing any lane markings.
- An HD mapping partnership with TomTom built on the DRIVE PX 2 platform.
Mapillary, the Swedish company operating a crowdsourced street-level imagery service, joined UC Berkeley’s DeepDrive, where it will focus on semantic segmentation of real-world imagery and structure from motion to help drive research in deep learning and computer vision for autonomy.
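For the unfamiliar, semantic segmentation assigns a class label to every pixel of an image. Here is a toy numpy sketch of just the output stage; the class list and per-pixel scores are invented for illustration and have nothing to do with Mapillary’s actual models:

```python
# A toy sketch of semantic segmentation's output stage: per-pixel class
# scores are reduced to a label map via argmax. Classes/scores are made up.
import numpy as np

CLASSES = ["road", "sidewalk", "car", "sign"]    # hypothetical classes
rng = np.random.default_rng(0)
scores = rng.normal(size=(len(CLASSES), 4, 4))   # class scores per pixel

label_map = scores.argmax(axis=0)                # (4, 4) grid of class ids
for row in label_map:
    print([CLASSES[i] for i in row])             # e.g. 'road' vs 'car'
# A self-driving stack consumes label maps like this to estimate which
# pixels are drivable space and which are obstacles.
```

A real system replaces the random scores with the output of a convolutional network trained on annotated street scenes; the argmax-per-pixel readout is the part that stays the same.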