October 23 · Issue #66
The Wild Week in AI is a weekly AI & Deep Learning newsletter curated by @.
If you enjoy the newsletter, please consider sharing it on Twitter, Facebook, etc! Really appreciate the support :)
AlphaGo Zero: Learning from scratch
AlphaGo Zero is the latest evolution of AlphaGo. Zero is more powerful and is arguably the strongest Go player in history. Previous versions of AlphaGo were initially trained on thousands of human amateur and professional games to learn how to play Go. AlphaGo Zero skips this step and learns to play simply by playing games against itself, starting from completely random play. In doing so, it surpassed human-level play and defeated the previously published, champion-defeating version of AlphaGo by 100 games to 0. Read the paper here.
Intel unveils new family of AI chips to take on Nvidia’s GPUs
The Intel Nervana Neural Network Processor family, or NNP for short, is designed for machine learning workloads and destined for the data center. The NNP chips are a direct result of Intel’s Nervana acquisition. There are no benchmarks yet, and the exact details of the chips are still unknown.
Modern Love: Are We Ready for Intimacy With Robots?
Hiroshi Ishiguro builds androids. Beautiful, realistic, uncannily convincing human replicas. Academically, he is using them to understand the mechanics of person-to-person interaction. But his true quest is to untangle the ineffable nature of connection itself.
AMA: DeepMind’s AlphaGo team
David Silver and Julian Schrittwieser from DeepMind’s AlphaGo team answered Reddit questions on October 19th. Check out their answers for novel insights into AlphaGo Zero and the team’s future goals.
Word embeddings in 2017: Trends and future directions
Word Embeddings (such as word2vec) have had a large impact on the field of NLP. This post addresses some of their deficiencies and discusses how recent approaches have tried to resolve them.
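The core idea behind word embeddings can be shown with a toy example: each word is a dense vector, and semantic similarity becomes cosine similarity between vectors. The vectors below are made up for illustration, not trained word2vec output:

```python
import numpy as np

# Hypothetical 4-dimensional embeddings; real word2vec vectors are
# typically 100-300 dimensions learned from a large corpus.
embeddings = {
    "king":  np.array([0.8, 0.6, 0.1, 0.2]),
    "queen": np.array([0.7, 0.7, 0.1, 0.3]),
    "apple": np.array([0.1, 0.0, 0.9, 0.8]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Related words should score higher than unrelated ones.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))
print(cosine_similarity(embeddings["king"], embeddings["apple"]))
```

Many of the deficiencies the post discusses (polysemy, out-of-vocabulary words, phrase meaning) stem from exactly this setup: one fixed vector per word type.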
Deep Learning Book Club (videos)
A collection of companion videos for chapters of the Deep Learning Book. Sessions were given by a variety of speakers, including one of the book’s authors, Ian Goodfellow. If you are reading the book, this is an excellent companion resource.
Generalizing from Simulation (OpenAI)
New techniques allow robot controllers, trained entirely in simulation and deployed on physical robots, to react to unplanned changes in the environment as they solve simple tasks.
AVA: Dataset for Human Action Understanding
The dataset consists of URLs for publicly available videos from YouTube, annotated with a set of 80 atomic actions (e.g. walk, kick, shake hands) that are spatio-temporally localized, resulting in 57.6k video segments, 96k labeled humans performing actions, and a total of 210k action labels. Browse the dataset here.
Nervana Coach: Reinforcement Learning Framework
Coach is a Python reinforcement learning research framework containing implementations of many state-of-the-art algorithms. The documentation also contains excellent summaries of various algorithms. The code is available on GitHub.
Horovod: Uber's Distributed Deep Learning Framework
Horovod is a distributed training framework for TensorFlow. The goal of Horovod is to make distributed deep learning fast and easy to use. Get the code on GitHub.
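Much of Horovod’s scaling efficiency comes from the ring-allreduce algorithm for averaging gradients across workers. As an illustration of that algorithm only — a serial NumPy simulation, not Horovod’s actual MPI/NCCL implementation — the two phases look like this:

```python
import numpy as np

def ring_allreduce(grads):
    """Average one gradient vector per worker via a simulated ring.

    Each gradient is split into n chunks. Scatter-reduce: for n-1 steps,
    every worker passes one chunk to its right neighbour, which adds it
    to its own copy. Allgather: for n-1 more steps, the fully summed
    chunks circulate so every worker ends up with all of them.
    """
    n = len(grads)
    chunks = [np.array_split(np.asarray(g, dtype=float), n) for g in grads]

    # Scatter-reduce: snapshot outgoing chunks first so this serial loop
    # behaves like the real simultaneous exchange.
    for step in range(n - 1):
        sends = [(i, (i - step) % n, chunks[i][(i - step) % n].copy())
                 for i in range(n)]
        for i, c, data in sends:
            chunks[(i + 1) % n][c] += data

    # Allgather: worker i now holds the full sum of chunk (i + 1) % n;
    # circulate these reduced chunks around the ring.
    for step in range(n - 1):
        sends = [(i, (i + 1 - step) % n, chunks[i][(i + 1 - step) % n].copy())
                 for i in range(n)]
        for i, c, data in sends:
            chunks[(i + 1) % n][c] = data

    # Every worker holds the same summed chunks; average and rejoin.
    return [np.concatenate(ch) / n for ch in chunks]
```

The appeal of the ring topology is that each worker only ever talks to its neighbors, so per-worker bandwidth stays constant as the cluster grows.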
Mastering the game of Go without human knowledge (Nature)
An algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, the new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.
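The training loop described above can be sketched structurally. Everything in the sketch — the toy three-move game, the tabular "network", and the random stand-ins for MCTS and game outcomes — is a made-up placeholder to show the data flow, not DeepMind’s method:

```python
import random

def search(policy, state):
    """Stand-in for MCTS guided by the network; returns move probabilities."""
    return policy.get(state, [0.5, 0.5])

def self_play(policy, n_moves=3):
    """One self-play game; every position becomes a training example."""
    history, state = [], 0
    for _ in range(n_moves):
        probs = search(policy, state)
        move = random.choices([0, 1], weights=probs)[0]
        history.append((state, probs))
        state = 2 * state + move + 1      # toy successor function
    winner = random.choice([1, -1])       # stand-in for the game result
    # Each example pairs a state with the search probabilities and the
    # winner -- the two targets the network is trained to predict.
    return [(s, p, winner) for s, p in history]

def train(policy, examples):
    """Stand-in update: move the tabular 'network' toward the search output."""
    for state, probs, _winner in examples:
        old = policy.get(state, [0.5, 0.5])
        policy[state] = [0.5 * (o + p) for o, p in zip(old, probs)]

policy = {}
for _iteration in range(5):               # each iteration strengthens play
    train(policy, self_play(policy))
```

The loop is the essential point: a stronger network makes the search stronger, and training on the search’s output makes the network stronger in turn.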
[1710.05381] A systematic study of the class imbalance problem in convolutional neural networks
The authors systematically investigate the impact of class imbalance on classification performance of convolutional neural networks. They use three benchmark datasets, MNIST, CIFAR-10 and ImageNet, and compare several methods to address the issue: oversampling, undersampling, two-phase training, and thresholding that compensates for prior class probabilities.
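As a concrete illustration of one remedy the paper compares, here is a minimal random-oversampling sketch on synthetic data (not one of the benchmark datasets): minority-class rows are resampled with replacement until every class matches the majority count.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(110, 4))
y = np.array([0] * 100 + [1] * 10)        # synthetic 10:1 class imbalance

def oversample(X, y):
    """Duplicate minority-class rows (with replacement) to match the majority."""
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = []
    for c, count in zip(classes, counts):
        members = np.flatnonzero(y == c)
        # Sample enough extra rows to bring this class up to the majority size.
        extra = rng.choice(members, size=target - count, replace=True)
        idx.extend(members)
        idx.extend(extra)
    idx = np.array(idx)
    return X[idx], y[idx]

X_bal, y_bal = oversample(X, y)
```

Undersampling is the mirror image (discard majority rows), while thresholding leaves the training data alone and instead divides the classifier’s output scores by the class priors at test time.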
[1710.05468] Generalization in Deep Learning
This paper explains why deep learning can generalize well despite large capacity and possible algorithmic instability, nonrobustness, and sharp minima, effectively addressing an open problem in the literature. Based on this theoretical insight, the paper also proposes a family of new regularization methods.
[1710.06922] Emergent Translation in Multi-Agent Communication
A communication game where two agents, native speakers of their own respective languages, jointly learn to solve a visual referential task. The ability to understand and translate a foreign language emerges as a means to achieve shared goals.