October 9 · Issue #64
The Wild Week in AI is a weekly AI & Deep Learning newsletter curated by @dennybritz.
If you enjoy the newsletter, please consider sharing it on Twitter, Facebook, etc! Really appreciate the support :)
News
Google’s "Pixel Buds" do real-time language translation
An onstage conversation at Google’s Pixel hardware event, translated live between English and Swedish, went off without a hitch. The translations followed about 1-2 seconds after each speaker finished a sentence. Unfortunately, you’ll need a Pixel 2 to take advantage of this feature.
WaveNet launches in the Google Assistant
Twelve months after the publication of the WaveNet paper, which introduced a deep neural network for generating raw audio waveforms, the model is now used to generate the Google Assistant voices for US English and Japanese across all platforms. This required improvements that made generation roughly 1,000x faster.
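For readers new to the architecture, here is a minimal sketch of its core idea (PyTorch assumed; not DeepMind’s implementation): stacking causal convolutions with exponentially growing dilation gives the model a very large receptive field over raw audio samples.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    """1-D convolution that only looks at past samples."""
    def __init__(self, channels, dilation):
        super().__init__()
        self.pad = dilation  # left-pad so the output at t sees inputs <= t
        self.conv = nn.Conv1d(channels, channels, kernel_size=2,
                              dilation=dilation)

    def forward(self, x):  # x: [batch, channels, time]
        return self.conv(F.pad(x, (self.pad, 0)))

# Dilations 1, 2, 4, ... double the receptive field at every layer.
stack = nn.Sequential(*[CausalConv1d(32, 2 ** i) for i in range(8)])
audio = torch.randn(1, 32, 16000)  # toy features for 1 s at 16 kHz
out = stack(audio)                 # same length, receptive field of 256
```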
DeepMind launches Ethics & Society initiative
The new team has a dual aim: to help technologists put ethics into practice and to help society anticipate and direct the impact of AI so that it works for the benefit of all.
How AI Could Change Amazon: A Thought Experiment
As advances in AI make prediction cheaper, economic theory dictates that we’ll use prediction more frequently and widely, and the value of complements to prediction – like human judgment – will rise. But what does all this mean for strategy?
Posts, Articles, Tutorials
Evaluation procedures for the Atari 2600 domain
Learning Diverse Skills via Maximum Entropy Deep RL
Standard deep RL algorithms aim to master a single way of solving a given task, typically the first one that seems to work well. However, such a single solution may be undesirable: knowing only one way to act makes agents vulnerable to the environmental changes that are common in the real world.
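For intuition, here is a minimal sketch of how an entropy bonus changes a standard policy-gradient loss (PyTorch assumed; names are illustrative, and the post itself builds on soft Q-learning rather than this simplest form): the agent is rewarded both for return and for keeping its action distribution broad.

```python
import torch

def max_ent_policy_loss(log_probs, returns, entropies, alpha=0.1):
    """Entropy-regularized policy-gradient loss (sketch).

    log_probs: log pi(a_t | s_t) for the sampled actions, shape [T]
    returns:   discounted returns G_t, shape [T]
    entropies: policy entropy H(pi(.|s_t)) at each step, shape [T]
    alpha:     temperature weighting the entropy bonus
    """
    # The standard objective maximizes E[G_t]; the maximum-entropy
    # objective maximizes E[G_t + alpha * H(pi)], so policies that
    # preserve multiple ways of acting score higher.
    pg_loss = -(log_probs * returns).mean()
    entropy_bonus = -alpha * entropies.mean()
    return pg_loss + entropy_bonus
```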
GANs are Broken in More than One Way: The Numerics of GANs
A nice review of the paper “The Numerics of GANs.” An interesting insight: it is natural to think of GAN training as a special case of neural network training, but it is actually the other way around – simultaneous gradient descent is a generalization, rather than a special case, of gradient descent.
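To see why the distinction matters, here is a toy NumPy sketch (not from the post) of simultaneous gradient descent on the bilinear game f(x, y) = x·y: the joint update is a rotation field that no single loss function generates, so the iterates circle the equilibrium instead of converging.

```python
import numpy as np

# Zero-sum toy game: player x minimizes f(x, y) = x * y,
# player y maximizes it. The equilibrium is (0, 0).
x, y, lr = 1.0, 1.0, 0.1
for step in range(100):
    grad_x, grad_y = y, x                    # each player's own gradient
    x, y = x - lr * grad_x, y + lr * grad_y  # simultaneous update
print(x, y, np.hypot(x, y))
# The norm grows by sqrt(1 + lr**2) every step: the iterates spiral
# away from (0, 0). The joint field (-y, x) is a rotation, which is
# not the gradient of any single scalar function.
```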
Code, Projects & Data
Teachable Machine (Google)
This experiment lets anyone explore how machine learning works in a hands-on way. You can teach a machine using your camera, live in the browser – no coding required.
Apple: Converter tools for Core ML
The Core ML community tools contain supporting tools for Core ML model conversion and validation, including converters for scikit-learn, LIBSVM, Caffe, Keras, and XGBoost.
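As a quick illustration, converting a fitted scikit-learn model looks roughly like this (a minimal sketch; the toy data and feature names are made up):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
import coremltools

# Toy training data: number of bedrooms -> price (illustrative only).
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([100.0, 150.0, 200.0, 250.0])
model = LinearRegression().fit(X, y)

# Convert the fitted model to the Core ML format and save it.
coreml_model = coremltools.converters.sklearn.convert(
    model, input_features=["bedrooms"], output_feature_names="price")
coreml_model.save("HousePricer.mlmodel")
```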
TorchMoji: A PyTorch implementation of the DeepMoji model
The model was trained on 1.2 billion tweets containing emojis to learn how language is used to express emotions. Through transfer learning, it can achieve state-of-the-art performance on many emotion-related text modeling tasks. Check out the original model here.
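The transfer-learning pattern the project relies on looks roughly like the sketch below (generic PyTorch, not TorchMoji’s actual API; the encoder and dimensions are hypothetical): freeze a pretrained encoder and train only a small task-specific head.

```python
import torch
import torch.nn as nn

# Hypothetical pretrained encoder (in practice, the DeepMoji weights).
encoder = nn.LSTM(input_size=256, hidden_size=512, batch_first=True)
for param in encoder.parameters():
    param.requires_grad = False  # freeze the pretrained features

# Small task-specific head, e.g. for a 3-class emotion dataset.
head = nn.Linear(512, 3)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

def forward(embedded_tokens):
    # embedded_tokens: [batch, seq_len, 256] pre-embedded inputs
    _, (h_n, _) = encoder(embedded_tokens)
    return head(h_n[-1])  # logits for the new task
```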
Highlighted Research Papers
[1710.01813] Neural Task Programming: Learning to Generalize Across Hierarchical Tasks
A novel robot learning framework called Neural Task Programming (NTP), which bridges few-shot learning from demonstration and neural program induction. NTP takes as input a task specification (e.g., a video demonstration of a task) and recursively decomposes it into finer sub-task specifications. These specifications are fed to a hierarchical neural program, where bottom-level programs are callable subroutines that interact with the environment.
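The recursive structure reads roughly like the following toy sketch (Python; all names are hypothetical, and the paper’s programs are learned neural modules rather than hand-written rules):

```python
class Env:
    def observe(self):
        return None
    def execute(self, name, spec):
        print(f"executing {name} on spec of length {len(spec)}")

class Primitive:
    """Bottom-level program: a callable subroutine acting on the env."""
    def __init__(self, name):
        self.name = name
    def run(self, spec, env):
        env.execute(self.name, spec)

class Program:
    """Higher-level program: predicts (here via a fixed rule; in the
    paper via a neural network) which sub-program should handle which
    slice of the task specification."""
    def __init__(self, decompose):
        self.decompose = decompose  # (spec, obs) -> [(sub_prog, sub_spec)]
    def run(self, spec, env):
        for sub_prog, sub_spec in self.decompose(spec, env.observe()):
            sub_prog.run(sub_spec, env)  # recurse until a primitive acts

# Toy task: pick_and_place splits the demonstration in half and hands
# each half to a primitive sub-task.
pick, place = Primitive("pick"), Primitive("place")
pick_and_place = Program(lambda spec, obs: [(pick, spec[:len(spec) // 2]),
                                            (place, spec[len(spec) // 2:])])
pick_and_place.run(list(range(10)), Env())  # the list stands in for a video
```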
[1710.00756] Neural Color Transfer between Images
A new algorithm for color transfer between images that have perceptually similar semantic structures. The algorithm uses neural representations for matching and optimizes a local linear model for color transfer satisfying both local and global constraints.
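The “local linear model” amounts to fitting, for each matched region, a per-channel scale and offset that maps source colors toward the matched reference colors. A minimal NumPy sketch of that single fitting step (not the full algorithm; names are illustrative):

```python
import numpy as np

def fit_local_linear(src_patch, ref_patch, eps=1e-6):
    """Least-squares per-channel map ref ~= a * src + b for one region.

    src_patch, ref_patch: [N, 3] arrays of matched pixel colors.
    Returns per-channel slope a and offset b, each of shape [3].
    """
    mu_s, mu_r = src_patch.mean(axis=0), ref_patch.mean(axis=0)
    cov = ((src_patch - mu_s) * (ref_patch - mu_r)).mean(axis=0)
    a = cov / (src_patch.var(axis=0) + eps)  # slope per channel
    b = mu_r - a * mu_s                      # offset matching the means
    return a, b
```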
[1710.02298] Rainbow: Combining Improvements in Deep Reinforcement Learning
The deep reinforcement learning community has made several independent improvements to the DQN algorithm. However, it is unclear which of these extensions are complementary and can be fruitfully combined. This paper examines six extensions to the DQN algorithm and empirically studies their combination. The experiments show that the combination provides state-of-the-art performance on the Atari 2600 benchmark, both in terms of data efficiency and final performance. The authors also provide results from a detailed ablation study that shows the contribution of each component to overall performance.
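For reference, the six extensions examined are double Q-learning, prioritized replay, dueling networks, multi-step learning, distributional RL, and noisy nets. A sketch of the combination as an ablatable configuration (hypothetical flags, not the authors’ code):

```python
from dataclasses import dataclass

@dataclass
class RainbowConfig:
    # Each field toggles one DQN extension; the paper's ablation study
    # disables them one at a time to measure each contribution.
    double_q: bool = True            # decouple action selection/evaluation
    prioritized_replay: bool = True  # replay high-TD-error transitions more
    dueling: bool = True             # separate value and advantage streams
    n_step: int = 3                  # multi-step returns instead of 1-step
    distributional: bool = True      # learn a return distribution (C51)
    noisy_nets: bool = True          # parametric exploration noise
```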