November 27 · Issue #69
The Wild Week in AI is a weekly AI & Deep Learning newsletter curated by @dennybritz.
If you enjoy the newsletter, please consider sharing it on Twitter, Facebook, etc! Really appreciate the support :)
Cogent Labs is a well-funded artificial intelligence startup located in the heart of Tokyo. Its goal is to bridge the gap between academic research in deep learning and real-world business solutions. The team is currently working on diverse problems including natural language processing, image understanding, and financial time series. Cogent Labs is a diverse company, with members from more than 15 countries, and its internal communication language is English. It is growing its team and looking to hire talented research scientists, research engineers, and software engineers. You can apply through https://www.cogent.co.jp/en/careers/.
Apple discloses self-driving car research
Apple’s latest academic paper, VoxelNet, submitted to arXiv on November 17, shows how to get more information out of sparse LiDAR point-cloud data to better spot smaller objects such as cyclists and pedestrians.
Amazon launches "ML Solutions Lab"
The new program connects machine learning experts from across Amazon with AWS customers to help identify uses of machine learning inside customers’ businesses, and guide them in developing new machine learning-enabled features, products, and processes.
Nature launches new Machine Intelligence journal
Launching in January 2019, Nature Machine Intelligence is an online-only journal for research and perspectives from the fast-moving fields of artificial intelligence, machine learning, and robotics.
Google Cloud: Lower prices for GPUs and local SSDs
Google Cloud is cutting the price of NVIDIA Tesla GPUs attached to on-demand Google Compute Engine virtual machines by up to 36 percent. In US regions, each K80 GPU attached to a VM is now priced at $0.45 per hour while each P100 now costs $1.46 per hour.
Samsung to establish an AI research hub
Samsung Electronics said on Wednesday that it will create an artificial intelligence research center and strengthen an executive role charged with finding new business areas across all three of its major business groups.
Expressivity, Trainability, and Generalization in Machine Learning
Contributions of machine learning papers can roughly be categorized as improvements to (1) expressivity, (2) trainability, and/or (3) generalization. But how well do we currently do on each of these within the paradigms of supervised, unsupervised, and reinforcement learning?
High-fidelity speech synthesis with WaveNet
DeepMind’s latest WaveNet model is used to generate realistic-sounding voices for the Google Assistant in US English and Japanese. This production model, known as parallel WaveNet, is more than 1,000 times faster than the original and also produces higher-quality audio. A new research paper describes the technique.
This AI spots art forgeries by looking at individual brushstrokes
Detecting art forgeries is hard and expensive. A new RNN-based system, described in this paper from a team at Rutgers University, breaks a work down into individual brush or pencil strokes and uses stroke-level features to identify the artist behind it.
Capsule Networks: Tutorial (video)
Wondering what Capsule Networks, Hinton’s latest architecture invention, are about? This video offers a great explanation. Reading the paper beforehand is recommended.
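The key nonlinearity discussed in the paper and video is the capsule "squashing" function, v = (|s|² / (1 + |s|²)) · (s / |s|), which keeps a capsule's output vector's length in [0, 1) so it can act as a probability. A minimal numpy sketch:

```python
import numpy as np

def squash(s, eps=1e-8):
    """Capsule 'squashing' nonlinearity:
    v = (|s|^2 / (1 + |s|^2)) * (s / |s|).
    Short vectors shrink toward zero; long vectors approach unit length."""
    norm_sq = np.sum(s ** 2)
    norm = np.sqrt(norm_sq) + eps  # eps avoids division by zero
    return (norm_sq / (1.0 + norm_sq)) * (s / norm)

v_short = squash(np.array([0.1, 0.0]))    # length ~0.0099
v_long = squash(np.array([100.0, 0.0]))   # length just under 1.0
```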
More than a million Pro-Repeal Net Neutrality Comments were likely faked
Extremely interesting post on using NLP to analyze net neutrality comments submitted to the FCC from April to October 2017. The results were disturbing. One pro-repeal spam campaign used mail-merge to disguise 1.3 million comments as unique grassroots submissions.
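As a toy illustration of the idea (not the post's actual method, which inferred template slots statistically), mail-merged comments can be grouped by reducing each one to a template signature; the synonym groups below are hypothetical:

```python
import re

# Hypothetical synonym groups; a real analysis would infer these
# slots statistically from the comment corpus, not from a fixed list.
SYNONYMS = {
    "citizens": "people", "americans": "people",
    "repeal": "undo", "reverse": "undo",
    "regulations": "rules",
}

def signature(comment):
    """Reduce a comment to a template signature: lowercase, drop
    punctuation, and collapse known synonym slots to one representative.
    Mail-merged variants of the same template then compare equal."""
    tokens = re.findall(r"[a-z]+", comment.lower())
    return tuple(SYNONYMS.get(t, t) for t in tokens)

a = signature("Americans demand you repeal the regulations!")
b = signature("Citizens demand you reverse the rules.")
# a == b: both comments collapse to the same underlying template
```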
Audio Analysis With Wavenet, MFCCs, UMAP, t-SNE and PCA
A project that explores an audio dataset in two dimensions. It covers techniques such as NSynth, UMAP, t-SNE, MFCCs, and PCA, shows how to implement them in Python using Librosa and TensorFlow, and includes interactive visualizations for exploring the audio dataset in two-dimensional plots.
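As a minimal sketch of the dimensionality-reduction step only (assuming per-clip features such as MFCCs have already been extracted, e.g. with Librosa), a plain-numpy PCA projection to 2-D looks like this:

```python
import numpy as np

def pca_2d(features):
    """Project feature vectors (N x D) to 2-D via PCA using plain numpy.
    Rows of vt are principal directions, ordered by explained variance."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 13))   # e.g. 13 MFCC coefficients per clip
coords = pca_2d(feats)               # (100, 2) points ready to scatter-plot
```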
11k Hands Dataset
The 11k Hands dataset is a collection of 11,076 hand images (1600 x 1200 pixels) from 190 subjects aged 18 to 75. Each subject was asked to open and close the fingers of their right and left hands. Each image comes with metadata: (1) subject ID, (2) gender, (3) age, (4) skin color, and (5) information about the captured hand, i.e. right or left hand, hand side (dorsal or palmar), and logical indicators for whether the image contains accessories, nail polish, or irregularities.
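A hypothetical record type for that per-image metadata might look like the following (field names are illustrative, not the dataset's actual column names):

```python
from dataclasses import dataclass

@dataclass
class HandImageRecord:
    """One image's metadata in the 11k Hands dataset.
    Field names are illustrative, not the dataset's real columns."""
    subject_id: int
    gender: str          # e.g. 'male' / 'female'
    age: int
    skin_color: str
    is_right_hand: bool  # False => left hand
    is_dorsal: bool      # False => palmar side
    has_accessories: bool
    has_nail_polish: bool
    has_irregularities: bool

rec = HandImageRecord(7, "female", 27, "fair", True, False, False, True, False)
```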
Open Images Dataset Update: V3 Released
The dataset now includes 3.7M bounding-boxes and 9.7M positive image-level labels on the training set and all images can now be downloaded from the Common Visual Data Foundation.
[1711.06396] VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection (Apple)
VoxelNet is a generic 3D detection network that unifies feature extraction and bounding box prediction into a single stage, end-to-end trainable deep network. Specifically, VoxelNet divides a point cloud into equally spaced 3D voxels and transforms a group of points within each voxel into a unified feature representation through the newly introduced voxel feature encoding (VFE) layer. VoxelNet outperforms the state-of-the-art LiDAR based 3D detection methods by a large margin.
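The voxel-partitioning step described above can be sketched in a few lines of numpy (the learned VFE feature encoding that follows it is not shown):

```python
import numpy as np

def voxelize(points, voxel_size, grid_min):
    """Assign each 3D point to an equally spaced voxel: a sketch of
    VoxelNet's partitioning step. Returns {voxel_coord: [point indices]}."""
    # Integer voxel coordinates for each point.
    idx = np.floor((points - grid_min) / voxel_size).astype(np.int64)
    # Group point indices by their voxel coordinate.
    voxels = {}
    for i, key in enumerate(map(tuple, idx)):
        voxels.setdefault(key, []).append(i)
    return voxels

# Four points; the first two share a voxel at 0.5 m resolution.
pts = np.array([[0.1, 0.1, 0.1],
                [0.2, 0.3, 0.4],
                [1.1, 0.1, 0.1],
                [2.6, 2.6, 0.0]])
groups = voxelize(pts, voxel_size=0.5, grid_min=np.zeros(3))
```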
[1711.07971] Non-local Neural Networks
Both convolutional and recurrent operations are building blocks that process one local neighborhood at a time. In this paper, the authors present non-local operations as a generic family of building blocks for capturing long-range dependencies. This building block can be plugged into many computer vision architectures.
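A minimal sketch of a non-local operation in the paper's embedded-Gaussian form, using identity embeddings for clarity (the paper learns the embeddings theta, phi, and g as 1x1 convolutions):

```python
import numpy as np

def nonlocal_block(x):
    """Non-local operation over N feature vectors (N x C).
    Each output position attends to ALL positions, not a local window."""
    # Pairwise affinities f(x_i, x_j) = exp(x_i . x_j), normalized per row.
    logits = x @ x.T
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=1, keepdims=True)
    # Weighted sum over every position in the input.
    return weights @ x

x = np.random.default_rng(0).normal(size=(5, 4))
y = nonlocal_block(x)   # same shape as x, so it can be a residual branch
```

In the paper the block's output is added back to the input as a residual, which is why it can be dropped into existing architectures without changing their shapes.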
[1704.03732] Deep Q-learning from Demonstrations (Updated)
Deep RL algorithms typically require huge amounts of data before reaching reasonable performance. In this paper, the authors study a setting where the agent has access to data from previous control of the system. Deep Q-learning from Demonstrations (DQfD) leverages even relatively small sets of demonstration data to massively accelerate learning. DQfD learns to outperform the best demonstration in 14 of 42 games, and leverages human demonstrations to achieve state-of-the-art results on 11 games.
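One ingredient of DQfD is a large-margin supervised loss on the demonstration data, J_E(Q) = max_a [Q(s, a) + l(a_E, a)] − Q(s, a_E), which pushes the demonstrator's action a_E above all others by at least a margin. A sketch (the margin value here is illustrative, not the paper's tuned hyperparameter):

```python
import numpy as np

def margin_loss(q_values, expert_action, margin=0.8):
    """DQfD's large-margin classification loss (sketch):
    J_E = max_a [Q(s, a) + l(a_E, a)] - Q(s, a_E),
    where l(a_E, a) = margin for a != a_E and 0 otherwise.
    Zero only when the expert action beats all others by >= margin."""
    l = np.full_like(q_values, margin)
    l[expert_action] = 0.0
    return np.max(q_values + l) - q_values[expert_action]

q = np.array([1.0, 2.0, 0.5])          # Q-values for three actions
loss = margin_loss(q, expert_action=0)  # demonstrator chose action 0
# max([1.0, 2.8, 1.3]) - 1.0 = 1.8
```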