News in artificial intelligence and machine learning

November 22 - Issue #16

Nathan Benaich

An analytical digest of artificial intelligence and machine learning news from the technology industry, research labs and the venture capital market.

From 4th October through 22nd November 2016
Welcome to issue #16 of my newsletter covering news in AI. If anything piques your interest or you’d like to share a piece that’s caught your eye that I missed, just hit reply. Two quick points:
1. If you’re in London this Thursday 24th Nov, do come and say hi at our 6th London.AI meetup featuring Michele Sama of Gluru, Aneesh Varma of Aire and Marion Gasperment of Cardiologs.
2. I’ve been thinking about the right medium through which to catalyse the open sharing of problems faced by enterprises and of the solutions that startups and researchers can provide. Too often I hear of talented AI teams struggling to find use cases, and of enterprises not knowing how or where to find (appropriate) AI-driven solutions. Thoughts, anyone?
I’m actively looking for entrepreneurs building companies that leverage AI to solve interesting, high-value problems in any industry. Do get in touch 👍
Referred by a friend? Sign up here. Help share by giving it a tweet :)

Technology news, trends and opinions
🚗 Department of Driverless Cars
Intel CEO Brian Krzanich announced that Intel Capital will invest $250m over the next two years in the autonomous vehicle (AV) ecosystem, focused on problems in connectivity, communication, context awareness, deep learning, security and safety. Viewed against the fund’s short-lived intention to sell $1bn worth of portfolio holdings in March this year (cancelled in May), I think this shows Intel is serious about going long on AI. Indeed, the company recently purchased Nervana Systems and Movidius, which could help its larger AV program and the race against NVIDIA.
Nauto, the startup offering a direct-to-consumer network of cloud-connected dashboard cameras applied to car insurance, inked a data sharing agreement and investment from Toyota Research Institute, BMW iVentures and Allianz Ventures (thanks Moritz for sharing!). One of the reasons for the immense progress in AI is data crowdsourcing. Collaborations between startups and incumbents with mature products and distribution scale make a lot of sense. Others to watch: Nexar (dashboard camera for road safety) and Mapillary (crowdsourced street maps).
It’s been a phenomenal Q3 for NVIDIA, which blew public market analysts away by recognising $2bn in revenue, up 54% from last year. The share price popped 30% in a single day, which tells me that public markets have yet to truly appreciate the impact of AI. While the lion’s share of the company’s revenue came from gaming, it launched the Drive PX 2 platform, a partnership with Baidu for their AVs and a data collection partnership with TomTom. So much more to come, I’m sure!
University of Michigan, which has a simulated urban and suburban environment for testing automated and connected vehicles, launched open-access automated cars for academics and industry partners to advance driverless research. The cars are powered by PolySync middleware (check out the open source project here).
After an audacious talk at TC Disrupt announcing a driverless car kit, founder George Hotz received a special order from the National Highway Traffic Safety Administration to answer detailed questions on safety, testing and performance. Instead of complying, George stated that “dealing with regulators and lawyers… isn’t worth it” and pulled the product entirely. Unsurprisingly cowboy-like!
Auto OEMs are working with the Automotive Grade Linux operating system for their connected car projects, without the involvement of Apple or Google.
🏥 Healthcare and life sciences
New York’s Mount Sinai Hospital will be using an electronic patient record data processing platform built by CloudMedx to identify patients at risk of developing congestive heart failure, a condition affecting an estimated 5 million Americans.
HBR walks through what a visit to an AI-infused hospital might be like. It helps frame how AI introduces a new user experience paradigm centred around contextual awareness, personalisation and seamless interaction with the physical and digital world.
A Harvard research group has developed a system that uses generative deep learning models trained on chemical structures to output novel structures using representations of learned chemical knowledge. This is an exciting means to supercharge exploration of a complex search space.
📱 Digital assistants and context awareness
Slack, the business messaging platform used by 4 million people every day, will be stepping up its game with AI-driven productivity tools. Stewart Butterfield made the case for building an application that draws on enterprise resource planning, marketing, sales, business intelligence and other enterprise systems to answer complex queries that would otherwise require painfully inefficient searching through troves of data. This will probably be the fruit of Noah Weiss’s “Search, Learning and Intelligence” team, set up in January, which now counts two dozen machine learning engineers.
🎓 Academic poaching tracker
Russ Salakhutdinov, Associate Professor of Machine Learning at CMU, whose group publishes widely in deep learning, was hired by Apple as its inaugural Director of AI Research. His work has explored transfer learning (the ability for models trained on one task or data type to be used on a different one), reinforcement learning and unsupervised learning. The team is growing!
Fei-Fei Li, Director of the Stanford Artificial Intelligence Lab and the Stanford Vision Lab, was hired by Google to lead its Google Cloud Machine Learning group, along with Jia Li, Head of Research at Snap Inc., who did her PhD work with Fei-Fei Li. Both were involved in building ImageNet, the large-scale image database that helped catalyse breakthroughs in computer vision.
🌐 AI is everywhere, for everyone
Google announced many updates to its Cloud Platform offering. First, it launched a Cloud Jobs API, which uses machine learning to understand how job titles and skills relate to one another and which job content, location and seniority are the closest match to a jobseeker’s preferences. Second, developers will be able to access NVIDIA and AMD GPUs in the cloud starting in 2017! Third, the company dropped pricing for the Cloud Vision API as a result of running the models on its proprietary TPU hardware. Fourth, the Cloud Natural Language API is now publicly available and the Translation API is live with the company’s state-of-the-art neural machine translation system. Google also announced a $4.5m grant to the Montreal Institute for Learning Algorithms, whose members notably include Yoshua Bengio, and will open a deep learning and AI research group in its Montreal office.
The Backchannel features a piece on Google’s Assistant product, which runs across several of the company’s products, including Home and the Pixel phone. Namely, it talks about The Transition - a two year period of AI training that will help Google “move from systems that are explicitly taught to ones that implicitly learn.” Fernando Pereira, who leads Google’s projects in natural language understanding, likens the launch of the Assistant to that of Search: “It’s going to be way more fluent, more able to help you do what you want to, understand more of the context of the conversation, be more able to bring information from different sources together.”
Bryan Johnson, founder of Braintree and OSFund, penned a piece on his newest venture, Kernel, which seeks to (wait for it…) build the world’s first implantable neural prosthetic for human intelligence enhancement. This will be a long but fascinating journey: Bryan suggests that “each market approved product we create will require approximately $200M and 7–10 years”. Without any details on the roadmap and how it works, it’s hard to say much! Watch this space.
Microsoft signed a cloud partnership with OpenAI, which will see the research institute run its experiments on NVIDIA Tesla and Pascal GPUs in the Azure cloud. This follows Microsoft’s mission to democratise access to AI - the company is picking up some serious steam.
Google DeepMind and Blizzard announced a collaboration to open up StarCraft II as a complex testing environment for AI research. The game requires exploration of partially observable environments, long-term planning, memory and multi-agent collaboration, making it rather fascinating. More resources on StarCraft in AI are available on GitHub here.
🔮 Preparing for the future
The US National Science and Technology Council’s Subcommittee on Machine Learning and Artificial Intelligence published a whitepaper entitled “Preparing for the future of artificial intelligence”. It explores the current state of AI, its existing and potential applications, and the questions that progress in AI raises for society and public policy. Here are some important recommendations for US Federal agencies (page 40):
  • Prioritize open training data and open data standards in AI.
  • Explore the potential to create DARPA-like organizations to support high-risk, high-reward AI research and its application.
  • Draw on appropriate technical expertise at the senior level when setting regulatory policy for AI-enabled products.
  • Prioritize basic and long-term AI research.
  • Ensure the efficacy, fairness and evidence-based explainability of consequential decisions made by AI-based systems about individuals.
On the topic of accountability, this piece sets out five key principles for technologists to adopt: responsibility, explainability, accuracy, auditability and fairness. Nature magazine ran two pieces (here and here) exploring this black box problem.
In the EU, the General Data Protection Regulation that will come into effect in 2018 prohibits any automated decision that “significantly affects” EU citizens. This includes techniques that evaluate a person’s “performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.” What’s more, the rule gives EU citizens the right to review how a particular service made a particular algorithmic decision.
Outgoing President Obama sat down with MIT Media Lab Director, Joi Ito, to discuss AI. The technology, in his words, “promises to create a vastly more productive and efficient economy. If properly harnessed, it can generate enormous prosperity and opportunity.”
British Prime Minister Theresa May announced an ambitious Industrial Strategy Challenge Fund to help Britain “capitalise on its strengths in cutting-edge research like AI and biotech”, as well as further tax credits and government investment worth £2 billion per year by 2020 for R&D.
WIRED’s Rowland Manthorpe runs a profile piece on the Leverhulme Centre for the Future of Intelligence, a think tank that seeks to explore the nature and impact of AI. Projects include trust and transparency (i.e. interpretability) of AI models, policy and responsible innovation, and kinds of (general) intelligence.
Here’s a chart from the World Economic Forum on the change in share of jobs from 1980 to 2012, which shows that many of the jobs that AI is suggested to automate away have indeed already fallen.
Facebook’s News Feed came under significant scrutiny over the proliferation of fake content and the echo chambers that it can create, namely in the context of the Trump/Clinton election campaign (see Blue Feed, Red Feed). Tim O’Reilly explores the problem of editorial curation in a world of infinite information and limited attention. In the piece, Tim and Matt Cutts, former head of the web spam team at Google, rightly state that while Facebook’s pursuit of engagement on its content (vs. link quality for Google search) might optimise for revenue, it ends up producing “shady stories, hoaxes, incorrect information, or polarizing memes as an unintended consequence.”
Research, development and resources
Hybrid computing using a neural network with dynamic external memory, Google DeepMind. In order to solve complex tasks that involve relationships that develop with experience, neural networks require memory. In this work, the authors propose a differentiable neural computer (DNC), which separates a neural network from an external memory matrix that it can read from and write to (vs. having the memory fundamentally integrated into the network’s processing). They show that the DNC can be trained on complex graphs such as the London Underground to learn how to answer synthetic questions that require reasoning and inference. It can also be trained with reinforcement learning to solve a blocks puzzle problem that involves changing goals. What’s more, it is becoming possible to interrogate how the network produces its outputs by observing which parts of its memory it accesses to do so.
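To give a flavour of the memory interface described above, here is a minimal sketch of content-based addressing - the mechanism by which such an architecture reads from its external memory matrix by similarity to a key vector. The toy memory, key and sharpness parameter `beta` are illustrative assumptions, not the paper's actual implementation:

```python
import math

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def content_read(memory, key, beta=5.0):
    """Read from memory by content: a softmax over cosine similarity to
    the key yields read weights, and the result is the weighted sum of
    memory rows. Higher beta sharpens the focus on the best match."""
    scores = [beta * cosine(row, key) for row in memory]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    width = len(memory[0])
    return [sum(w * row[i] for w, row in zip(weights, memory)) for i in range(width)]

# A toy 3-slot memory; the key most resembles the first row.
memory = [[1.0, 0.0, 0.0],
          [0.0, 1.0, 0.0],
          [0.0, 0.0, 1.0]]
vec = content_read(memory, [0.9, 0.1, 0.0])
```

Because the read is a differentiable weighted sum rather than a hard lookup, gradients flow through the memory access, which is what lets the whole system be trained end-to-end.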
Semi-supervised knowledge transfer for deep learning from private training data, Penn State, Google, Google Brain and OpenAI. As AI models are applied to more sensitive applications, e.g. consumer finance and medicine, we need methods to ensure models cannot be exploited to reveal the sensitive data on which they were trained. Here, the authors use a “teacher” and “student” model paradigm to afford differential privacy to neural networks. First, multiple teacher networks are trained on disjoint subsets of the sensitive dataset. Next, a student network is tasked with predicting an aggregate output of the teacher networks using auxiliary, unlabeled non-sensitive data and the generative adversarial network training framework (generator/discriminator). This paradigm can be understood in terms of differential privacy, whereby no single teacher, and thus no single partition of the sensitive data, dictates the student’s training.
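The privacy-preserving step of the teacher/student paradigm can be sketched as a noisy vote: teachers predict a label, Laplace noise is added to the vote counts, and the student only ever sees the noisy winner. This is a simplified illustration, assuming toy vote counts and an arbitrary epsilon, not the authors' code:

```python
import math
import random
from collections import Counter

def laplace(scale, rng):
    # Inverse-CDF sampling of a Laplace(0, scale) random variable.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_max(teacher_votes, epsilon=2.0, rng=random):
    """Return the label with the highest noisy vote count. Laplace noise
    of scale 1/epsilon masks the contribution of any single teacher,
    and hence of any single sensitive data partition."""
    counts = Counter(teacher_votes)
    return max(counts, key=lambda lbl: counts[lbl] + laplace(1.0 / epsilon, rng))

# 50 hypothetical teachers, most predicting class 1 for some query.
votes = [1] * 40 + [0] * 10
label = noisy_max(votes, epsilon=2.0, rng=random.Random(0))
```

When the teachers agree by a wide margin, as here, the noise almost never flips the outcome, so the student gets a reliable label while the privacy guarantee still holds for close votes.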
Learning to protect communications with adversarial neural cryptography, Google Brain. In a similar spirit to the above paper, the authors of this paper demonstrate how two neural networks can learn to protect their communications in order to satisfy a policy specified in terms of a third adversary network. Specifically, they use the generative adversarial network framework to train two neural networks (Alice and Bob) to hide their communications from Eve’s eavesdropping. Importantly, Alice and Bob discover a cryptosystem for this purpose without a pre-specified notion of what known cryptosystem they should implement.
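The shape of the adversarial objective above can be illustrated with plain loss functions on bit vectors. This is a hedged simplification of the paper's setup (the paper operates on real-valued outputs and scales the Eve term differently); the key idea it preserves is that Alice and Bob push Eve toward chance-level guessing, not toward being maximally wrong:

```python
def bit_error(pred, truth):
    """Mean absolute error between predicted and true plaintext bits."""
    return sum(abs(p - t) for p, t in zip(pred, truth)) / len(truth)

def eve_loss(eve_pred, plaintext):
    # Eve's objective: reconstruct the plaintext from the ciphertext alone.
    return bit_error(eve_pred, plaintext)

def alice_bob_loss(bob_pred, eve_pred, plaintext):
    # Alice and Bob want Bob to recover the plaintext while driving Eve's
    # error toward 0.5 (random guessing). An Eve that is exactly wrong
    # would leak just as much information as one that is exactly right.
    return bit_error(bob_pred, plaintext) + (0.5 - bit_error(eve_pred, plaintext)) ** 2

plaintext = [1, 0, 1, 1]
# Bob decrypts perfectly; Eve guesses at chance (2 of 4 bits wrong).
loss = alice_bob_loss([1, 0, 1, 1], [1, 0, 0, 0], plaintext)
```

In training, gradient updates to Alice/Bob and to Eve alternate against these objectives, which is what lets a cryptosystem emerge without being specified in advance.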
1 year, 1000km: The Oxford RobotCar Dataset, University of Oxford. An outstanding challenge for autonomous vehicles is their ability to adapt and navigate environments that they might not have encountered previously. To this end, the Mobile Robotics Group in Oxford drove 100 repetitions of a consistent route through Oxford, UK, captured over a period of over a year with an autonomous Nissan LEAF. The dataset includes 20 million images collected from 6 cameras mounted to the vehicle, along with LIDAR, GPS and INS ground truth. This will help research investigating long-term localisation and mapping for autonomous vehicles in real-world, dynamic urban environments. (See website with documentation, datasets and examples)
Video Pixel Networks, Google DeepMind. The task of observing frames of a video and predicting future frames without introducing systematic artifacts (e.g. blurring) remains a challenge even on simple benchmarks. This is because video is inherently complex and ambiguous. Here, the authors build a two-part generative video model that encodes the 4-dimensional structure of video tensors and captures dependencies in the time dimension of the data, in the two space dimensions of each frame and in the color channels of a pixel. This makes it possible to model the stochastic transitions locally from one pixel to the next and more globally from one frame to the next without introducing independence assumptions in the conditional factors. They test on the moving MNIST dataset and a Robotic Pushing dataset.
Joint multimodal learning with deep generative models, University of Tokyo. Machine learning models in production today are typically trained and operate on data of a single modality, i.e. only text or only images. However, information in the real world is represented through various modalities. Here, the authors present a joint multimodal variational autoencoder - a generative model that can extract a joint representation that captures high-level concepts among all modalities it is trained on (e.g. text and images). With this model, the authors show that we can exchange this representation bi-directionally, that is to say the model can generate and reconstruct images from corresponding text and vice versa.
Sam DeBrule compiles a terrific set of resources, people, companies and events to help you follow the magical world of AI.
Nervana Systems (acq. Intel) release Nervana Graph, a Python library that converts descriptions of neural networks into programs that run efficiently on a variety of platforms. It offers a unifying computational graph that gives users composable deep learning abstractions and allows them to execute models with maximum computational efficiency on any hardware configuration.
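The "unifying computational graph" idea is worth unpacking: computation is described once as a graph of operations, and an evaluator (per hardware target) walks the graph to execute it. Here is a toy sketch of that pattern in plain Python - this is emphatically not the Nervana Graph API, just an illustration of the abstraction it is built on:

```python
class Node:
    """A toy computational-graph node: an operation plus its input nodes."""
    def __init__(self, op, *inputs):
        self.op = op
        self.inputs = inputs

    def evaluate(self):
        # Recursively evaluate the inputs, then apply this node's op.
        # A real framework would instead compile the graph for a backend.
        return self.op(*(n.evaluate() for n in self.inputs))

def constant(value):
    # A leaf node that produces a fixed value.
    return Node(lambda: value)

# Build y = (a + b) * c once; any evaluator can then execute the graph.
a, b, c = constant(2.0), constant(3.0), constant(4.0)
add = Node(lambda x, y: x + y, a, b)
mul = Node(lambda x, y: x * y, add, c)
result = mul.evaluate()
```

Separating the graph description from its execution is what lets one model definition target different backends (CPU, GPU, custom silicon) without being rewritten.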
Version 3 of the “current state of machine intelligence” landscape is fresh off the press, bigger and more complex.
A high-level peek at’s data science architecture, which uses natural language processing and generation to build an (increasingly) automated meeting scheduler.
Stephen Merity of Salesforce Research (ex-MetaMind) combs through ICLR 2017 submissions and selects his favorites.
Martin Goodson draws on his experience as a technical data scientist at the interface between R&D and commercial operations to analyse the 10 most common failure modes of data science.
Venture capital financings and exits
Two big deals for AI-driven life sciences companies: Recursion Pharmaceuticals, which develops new therapeutic interventions for rare diseases, and Zymergen, which brings predictability and reliability to the engineering of microbes that produce useful molecules, raised a $49m Series A and $130m Series B, respectively. Read more about why these companies are special: Zavain Dar of Lux Capital on Recursion and the Data Collective team on Zymergen. Congrats!
56 companies raised $654m over 57 financing rounds, 40 of which were in US companies, from 144 investors. Median deal size was $6.4m (up from $2m in last issue) at a pre-money valuation of $33.5m (up from $7.7m in last issue) due to later stage rounds. Deals include:
  • Graphcore, a Bristol-based hardware startup spun out of XMOS, raised a $30m Series A led by Robert Bosch VC to bring its intelligence processing unit (IPU) chip to production. The IPU is purpose-built silicon for running machine learning algorithms on graphs. Read more about the company here. Very exciting to see serial UK entrepreneurs work on technology that could bring a step change to the industry.
  • Voyager Analytics, a previously stealth startup based in Israel developing deep learning and expert systems for risk assessment, crisis management, intelligence, and fraud protection in the public, retail, consulting and financial sectors, raised $100m from Horizons Ventures and OCAPAC Holding Company.
  • Clarifai, the company commercialising deep learning-based image analysis tools for developers, raised a $30m Series B led by Menlo Ventures.
  • Ravelin, a London-based company offering a real-time fraud detection and prevention platform for online businesses, raised a £3m Series A led by us at Playfair Capital. I’m very excited about this one!
6 companies were acquired for undisclosed sums, including:
  • was acquired by GE Digital (NYSE: GE) to strengthen the machine learning and data science capabilities behind the expansion of GE’s Predix platform and to enable enhanced Digital Twin development. The business, led by UC Berkeley Professor of Astronomy Joshua Bloom, raised $6.55m and employed 14 full-time staff.
  • Sensai, an online content analysis company, was acquired by Sovereign Intelligence, which uses AI to enable commercial and government entities to understand threats from external and internal sources.
  • Datacratic, a real-time predictive analytics company applied to adtech, was acquired by iPerceptions to provide first and third party audience intent segmentation capabilities.
Anything else catch your eye? Do you have feedback on the content/structure of this newsletter? Just hit reply!
I’m actively looking for entrepreneurs building companies that build/use AI to rethink the way we live and work.
Did you enjoy this issue?
Carefully curated by Nathan Benaich with Revue.
If you were forwarded this newsletter and you like it, you can subscribe here.
If you don't want these updates anymore, please unsubscribe here.