
Facebook's RoBERTa, Bitfusion, Alibaba's Open-Source IP, Free AI Books - Issue #15

August 1 · Issue #12
AI Speech & Language Processing Update
Facebook AI's RoBERTa improves Google's BERT pretraining methods | VentureBeat
VMware acquires AI chip virtualization startup Bitfusion - SiliconANGLE
Alibaba Is Open-Sourcing Its Powerful New RISC-V Processor for 5G and AI | Synced
The Business Angle
It is earnings season, and a ton can be learned by listening to the earnings calls. The following are just a few tidbits relevant to AI:
Intel:
The networking infrastructure business is booming thanks to 5G; Intel's business in this domain is growing at a 40% CAGR. Revenue from Mobileye (AI accelerators for automotive applications) grew 22% last year. They see the AI data-center silicon opportunity reaching $10B by 2023.
Microsoft:
Commercial Cloud revenue was $11 billion, growing nearly 40% over last year. Revenue from the Azure cloud service alone rose 64%. Microsoft is investing $1 billion in OpenAI to support building artificial general intelligence (AGI) capabilities; the ultimate goal is to develop a hardware and software platform in Azure that can scale to AGI.
Amazon (AWS):
AWS revenue reached $8.4B in the quarter; growth slowed to 37% after topping 40% in the previous quarter.
Semiconductor Industry Slowdown:
According to Gartner, semiconductor industry revenue is expected to drop 10% this year, to $429B. A worldwide economic slowdown, memory price erosion, and the China-U.S. trade war are to blame.
As for AI chips…
Syntiant, based in Irvine, California, is building extremely power-efficient chips for audio “keyword spotting” and “wake-word detection.” The power numbers are truly impressive (150 µW in active mode). They are targeting smart speakers, hearing aids, earbuds, and security cameras, among many other devices needing basic voice-activation capabilities. I witnessed their demo and was impressed by how well the chip detects spoken keywords in the presence of significant background noise. The chip incorporates an Arm Cortex-M0 core and supports two PDM (Pulse Density Modulation) microphones as well as a variety of other interfaces. It is tiny (1.4 × 1.8 mm, 12-ball WLBGA), much smaller than the microphone itself.
What impresses me the most is their unwavering focus on a niche application that is growing by leaps and bounds; they have no intention of being everything to everyone. They apply their core low-power technology to build the most power-efficient chips for tiny devices that have to operate in a super-low-power regime. The focus is not TOPS/W; it is design wins and customer traction.
Qeexo is not a semiconductor company, but it offers a truly innovative service that can add tremendous value to various lower-end inference platforms. Its customers are OEMs that contend with large volumes of sensor data and badly need to extract insight from it. Customers provide labeled sensor readings to Qeexo, which uses the data to find and train an optimized, low-footprint machine learning model that runs on the compute platform of the customer's choice to handle prediction and classification tasks. The company is venture-backed and was spun out of Carnegie Mellon University.
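To make that workflow concrete, here is a minimal, hypothetical sketch (not Qeexo's actual technology): simple per-channel statistics are extracted from labeled sensor windows, and a tiny nearest-centroid classifier, small enough to fit on a constrained device, is fit to them.

```python
import numpy as np

def window_features(windows):
    """Summarize raw sensor windows (n, time, channels) with simple
    per-channel statistics a constrained device could compute cheaply."""
    return np.concatenate(
        [windows.mean(axis=1),                      # per-channel mean
         windows.std(axis=1),                       # per-channel spread
         np.abs(np.diff(windows, axis=1)).mean(axis=1)],  # "jitter"
        axis=1)

class NearestCentroid:
    """Low-footprint classifier: stores one feature vector per class,
    so the on-device model is just (n_classes x n_features) floats."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0)
                                    for c in self.classes_])
        return self

    def predict(self, X):
        # Squared distance from every sample to every class centroid.
        d = ((X[:, None, :] - self.centroids_[None]) ** 2).sum(-1)
        return self.classes_[d.argmin(axis=1)]

# Illustrative synthetic data: "still" vs. "shaking" accelerometer windows.
rng = np.random.default_rng(0)
still = rng.normal(0.0, 0.1, size=(50, 32, 3))
shake = rng.normal(0.0, 1.0, size=(50, 32, 3))
X = window_features(np.concatenate([still, shake]))
y = np.array([0] * 50 + [1] * 50)
clf = NearestCentroid().fit(X, y)
```

The real service presumably searches over many model families and feature sets; the point of the sketch is only that labeled windows plus cheap features can yield a classifier with a footprint of a few dozen numbers.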
Research / Papers
Facebook AI Research proposes a new approach that aims to reduce the memory footprint of neural network architectures by quantizing (discretizing) their weights, while keeping inference fast thanks to a byte-aligned scheme. The approach relies on product quantization, adapted to minimize the reconstruction error of the activations rather than of the weights themselves.
They achieve astounding compression ratios on a ResNet-50 trained with semi-supervised learning. The compressed model weighs only 5 MB (a 20× compression factor) and preserves the top-1 accuracy of a vanilla ResNet-50 on ImageNet (76.1 percent).
See the paper here.
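The core idea behind product quantization is easy to sketch in plain NumPy: split each weight row into short sub-vectors, cluster the sub-vectors of each block into a small codebook with k-means, and store only one byte-sized centroid index per sub-vector. Note this sketch is vanilla weight-space PQ; the paper's actual method additionally weights the clustering by input activations to minimize activation reconstruction error, which is not shown here.

```python
import numpy as np

def product_quantize(W, d_sub=4, k=16, iters=10, seed=0):
    """Compress a weight matrix W (rows x cols) with product quantization.

    Each row is split into cols/d_sub sub-vectors of length d_sub.
    For each block of sub-vectors we learn k centroids (a codebook)
    with plain k-means, then store only the uint8 index of the nearest
    centroid -- one byte instead of d_sub floats per sub-vector.
    """
    rows, cols = W.shape
    assert cols % d_sub == 0 and k <= 256
    n_blocks = cols // d_sub
    codebooks = np.empty((n_blocks, k, d_sub))
    codes = np.empty((rows, n_blocks), dtype=np.uint8)
    rng = np.random.default_rng(seed)
    for b in range(n_blocks):
        X = W[:, b * d_sub:(b + 1) * d_sub]             # (rows, d_sub)
        C = X[rng.choice(rows, size=k, replace=False)].copy()  # init
        for _ in range(iters):                          # k-means loop
            d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
            assign = d.argmin(axis=1)
            for j in range(k):
                pts = X[assign == j]
                if len(pts):
                    C[j] = pts.mean(axis=0)             # update centroid
        codebooks[b] = C
        codes[:, b] = assign
    return codebooks, codes

def reconstruct(codebooks, codes):
    """Rebuild the (approximate) weight matrix from codebooks + codes."""
    n_blocks = codebooks.shape[0]
    return np.concatenate(
        [codebooks[b][codes[:, b]] for b in range(n_blocks)], axis=1)
```

With d_sub = 4 and k ≤ 256, each row of 32-bit floats shrinks from 16 bytes per sub-vector to 1 byte, which is where compression factors of this order come from (the codebooks themselves add a small constant overhead).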
Sometimes the best technical books are free. Following is a list:
  1. “Introduction to Deep Learning” by Andrew Ng
  2. “Python Data Science Handbook” by Jake VanderPlas
  3. “The Deep Learning Textbook” by Ian Goodfellow, Yoshua Bengio, and Aaron Courville
Hope you have benefited from this issue. Please forward to others if you find value in this content. I always welcome feedback.
Al Gharakhanian | www | Linkedin | blog | Twitter