November 6 · Issue #67
The Wild Week in AI is a weekly AI & Deep Learning newsletter curated by @dennybritz.
If you enjoy the newsletter, please consider sharing it on Twitter, Facebook, etc! Really appreciate the support :)
We can't compete - Universities are losing their best AI scientists
A handful of companies are luring away top researchers, but academics say they are killing the geese that lay the golden eggs. “It’s five times the salary I can offer. It’s unbelievable. We cannot compete,” said Maja Pantic, professor of affective and behavioral computing at Imperial College London.
Sony reboots Aibo with AI and extra kawaii
Sony claims Aibo’s new adaptive behavior includes actively seeking out its owners and detecting words of praise, smiles, head and back scratches, petting, and more. The Aibo robot costs 198,000 JPY (~$1,735), and you also need a subscription plan to connect to the cloud service that powers Aibo’s AI.
Geoff Hinton unveils a new twist on Neural Networks
Google’s Geoff Hinton says that “the way we’re doing computer vision is just wrong. It works better than anything else at present but that doesn’t mean it’s right.” Hinton released two research papers about capsules that he says prove out an idea he’s been mulling over for almost 40 years.
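One of the capsule papers, “Dynamic Routing Between Capsules,” represents each capsule as an activity vector whose length encodes the probability that an entity is present. A minimal, framework-free sketch of the paper’s “squash” nonlinearity, which shrinks a vector’s length into [0, 1) while preserving its direction (illustrative code, not the authors’ implementation):

```python
import math

def squash(v):
    """Capsule 'squash' nonlinearity: scales vector v by
    |v|^2 / (1 + |v|^2) / |v|, so short vectors shrink toward 0
    and long vectors approach unit length, direction unchanged."""
    norm_sq = sum(x * x for x in v)
    norm = math.sqrt(norm_sq)
    if norm == 0.0:
        return [0.0 for _ in v]
    scale = norm_sq / (1.0 + norm_sq) / norm
    return [scale * x for x in v]

print(squash([3.0, 4.0]))  # length 5 squashes to 25/26 ≈ 0.9615
```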
AI and Machine Learning to Revolutionize U.S. Intelligence
That was the message from Lt. General John “Jack” Shanahan, who leads Project Maven, an effort launched in April to put machine learning and AI to work, starting with efforts to turn hours of aerial surveillance video collected by the U.S. military into actionable intelligence.
TensorFlow Eager Execution: A new imperative interface
Eager execution is an imperative, define-by-run interface where operations are executed immediately as they are called from Python. This is how frameworks such as PyTorch and Chainer work. Now you can use the same style in TensorFlow.
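The define-by-run idea can be illustrated without any framework: operations execute immediately and record themselves as they run, so the "graph" is just a trace of what Python actually did, walked backwards for gradients. A toy scalar autodiff sketch (the `Var` class is hypothetical, purely for illustration, and is not TensorFlow or PyTorch API):

```python
class Var:
    """A scalar that records its computation as it happens
    (define-by-run), so gradients can be traced back afterwards."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # (parent_var, local_gradient) pairs
        self.grad = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        """Accumulate d(output)/d(self) into each node via the chain rule."""
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x = Var(3.0)
y = x * x + x          # runs immediately; ordinary Python control flow works
y.backward()
print(y.value, x.grad)  # 12.0 and dy/dx = 2x + 1 = 7.0
```

Because nothing is compiled ahead of time, you can branch, loop, and debug with plain Python, which is the main appeal of the eager style.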
Backing off towards simplicity - why baselines need more love
When we lose accurate baselines, we lose our ability to accurately measure our progress over time. TL;DR: 1. Adopt a well-tuned baseline and give it the care it deserves. 2. Complex models can poison forward progress in AI; be careful about simply throwing more compute at a problem.
AlphaGo Zero – How and Why it Works
AlphaGo Zero does not need humans to show it how to play Go. Not only does it outperform all previous Go players, human or machine, it does so after only three days of training time. This article explains how and why it works and goes into detail on AlphaGo Zero’s Monte Carlo Tree Search implementation.
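At the heart of AlphaGo Zero’s tree search is a PUCT-style selection rule: each simulation descends the tree by picking the move that maximizes the estimated value Q plus a prior-weighted exploration bonus U. A minimal sketch of that selection step (the `children` dictionary layout is a hypothetical simplification, not DeepMind’s code):

```python
import math

def puct_select(children, c_puct=1.0):
    """AlphaGo Zero-style move selection: pick the child maximizing
    Q(s, a) + U(s, a), where U = c_puct * P * sqrt(total_N) / (1 + N).
    `children` maps action -> stats: prior P, visit count N, value sum W."""
    total_visits = sum(ch["N"] for ch in children.values())

    def score(ch):
        q = ch["W"] / ch["N"] if ch["N"] > 0 else 0.0
        u = c_puct * ch["P"] * math.sqrt(total_visits) / (1 + ch["N"])
        return q + u

    return max(children, key=lambda a: score(children[a]))

children = {
    "a": {"P": 0.6, "N": 10, "W": 5.0},  # well explored, Q = 0.5
    "b": {"P": 0.4, "N": 1,  "W": 0.2},  # barely explored, large U bonus
}
print(puct_select(children))  # -> 'b'
```

The bonus term shrinks as a move accumulates visits, so the search gradually shifts from the network’s priors toward moves that are actually winning in simulation.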
Pyro, a Deep Probabilistic Programming Language (from Uber)
Pyro, built on PyTorch, is an open source probabilistic programming language that unites deep learning with Bayesian modeling. The goal of Pyro is to accelerate research and applications of these techniques, and to make them more accessible to the broader AI community. Check out the examples.
Mask R-CNN on Keras and TensorFlow
An implementation of the popular Mask R-CNN on Python 3, Keras, and TensorFlow. The model generates bounding boxes and segmentation masks for each instance of an object in the image.
Here’s Mario Kart played by a neural network
Earth on AWS Datasets
A collection of geospatial datasets available on AWS. These include satellite images, weather radar data, street map data, points of interest, and much more. Cloud Credits for Earth Observation Research are also available.
[1711.00489] Don't Decay the Learning Rate, Increase the Batch Size
It is common practice to decay the learning rate. The authors show one can usually obtain the same learning curve on both training and test sets by instead increasing the batch size during training. This procedure is successful for stochastic gradient descent, SGD with momentum, Nesterov momentum, and Adam.
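The intuition can be checked on a toy problem: the SGD gradient noise scale grows with the learning rate and shrinks with the batch size, so multiplying the batch size by the same factor you would have divided the learning rate by reduces noise on the same schedule. A hedged sketch on a noisy 1-D quadratic (illustrative only; the function name, milestones, and factor of 5 are choices for this toy, not the paper’s experimental setup):

```python
import random

def sgd_quadratic(steps=300, lr=0.5, batch0=8, grow_batch=True, seed=0):
    """Minimise E[(w - x)^2] / 2 with noisy gradients (x ~ N(0, 1), so the
    optimum is w = 0).  At each milestone, either divide the learning rate
    by 5 (classic decay) or keep lr fixed and multiply the batch size by 5,
    which cuts the gradient noise by the same factor."""
    rng = random.Random(seed)
    w, batch = 5.0, batch0
    for step in range(steps):
        if step in (100, 200):       # schedule milestones
            if grow_batch:
                batch *= 5           # same noise reduction as lr /= 5
            else:
                lr /= 5
        grad = sum(w - rng.gauss(0, 1) for _ in range(batch)) / batch
        w -= lr * grad
    return w

# Both schedules drive w close to the optimum at 0.
print(sgd_quadratic(grow_batch=True), sgd_quadratic(grow_batch=False))
```

On this toy both schedules converge; the paper’s point is that the batch-growing variant also permits more parallelism and fewer parameter updates at large scale.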
[1711.00832] A Unified Game-Theoretic Approach to Multiagent Reinforcement Learning (Google DeepMind)
An algorithm for general multiagent reinforcement learning, based on approximate best responses to mixtures of policies generated using deep reinforcement learning. The algorithm generalizes previous ones such as independent reinforcement learning, iterated best response, double oracle, and fictitious play. The algorithm is tested in two partially observable settings: Gridworld coordination games and Leduc poker.
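Fictitious play, one of the classical algorithms the paper generalizes, has each player repeatedly best-respond to the opponent’s empirical action frequencies. A minimal sketch for a two-player zero-sum matrix game, run here on matching pennies, where the empirical mixtures are known to converge to the 50/50 equilibrium (illustrative code, not the paper’s algorithm):

```python
def fictitious_play(payoff, rounds=2000):
    """Fictitious play in a 2-player zero-sum matrix game.  Each round,
    the row player best-responds to the column player's empirical action
    frequencies and vice versa.  `payoff[i][j]` is the row player's
    payoff; the column player receives its negation."""
    n, m = len(payoff), len(payoff[0])
    row_counts, col_counts = [0] * n, [0] * m
    row_counts[0] += 1
    col_counts[0] += 1   # arbitrary opening actions
    for _ in range(rounds):
        # Row maximises expected payoff against the column mixture.
        row = max(range(n),
                  key=lambda i: sum(payoff[i][j] * col_counts[j]
                                    for j in range(m)))
        # Column minimises the row player's expected payoff.
        col = min(range(m),
                  key=lambda j: sum(payoff[i][j] * row_counts[i]
                                    for i in range(n)))
        row_counts[row] += 1
        col_counts[col] += 1
    total = rounds + 1
    return ([c / total for c in row_counts],
            [c / total for c in col_counts])

pennies = [[1, -1], [-1, 1]]   # matching pennies payoff matrix
rows, cols = fictitious_play(pennies)
print(rows, cols)  # both mixtures end up near [0.5, 0.5]
```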
[1711.00043] Unsupervised Machine Translation Using Monolingual Corpora Only (Facebook)
A model that takes sentences from monolingual corpora in two different languages and maps them into the same latent space. By learning to reconstruct in both languages from this shared feature space, the model effectively learns to translate without using any labeled data. The authors demonstrate the model on two widely used datasets and language pairs, reporting BLEU scores up to 32.8, without using a single parallel sentence at training time.