
Artificial Intelligence Tech. Update - Issue #13

July 8 · Issue #13
Artificial Intelligence Technology Update
A Few Highlights
The pace of progress in AI both in industry and research communities is absolutely astounding.
20 of the 56 companies selected by the World Economic Forum as Technology Pioneers use AI extensively to further their business objectives (more …).
Facebook just open-sourced DLRM (Deep Learning Recommendation Model), a top-performing recommender engine implemented on Facebook’s PyTorch and Caffe2 platforms. So if you like the recommendations you get on Facebook, you can adapt their model to serve your specific use case.
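For a feel of what is inside, here is a minimal, hypothetical sketch of a DLRM-style model in PyTorch. The TinyDLRM name, layer sizes, and feature counts are illustrative assumptions, not Facebook’s configuration; the signature idea is the pairwise dot-product interaction between embedded sparse features and processed dense features:

```python
import torch
import torch.nn as nn

class TinyDLRM(nn.Module):
    """Toy DLRM-style model: embeddings + bottom/top MLPs + dot interactions."""
    def __init__(self, num_dense=4, cardinalities=(1000, 1000, 1000), dim=16):
        super().__init__()
        # One embedding table per categorical (sparse) feature
        self.embeddings = nn.ModuleList(
            nn.Embedding(c, dim) for c in cardinalities)
        # Bottom MLP projects dense features into the same space
        self.bottom = nn.Sequential(nn.Linear(num_dense, dim), nn.ReLU())
        # Top MLP consumes the dense vector plus all pairwise dot products
        n_vecs = len(cardinalities) + 1
        n_inter = n_vecs * (n_vecs - 1) // 2
        self.top = nn.Sequential(nn.Linear(dim + n_inter, 16), nn.ReLU(),
                                 nn.Linear(16, 1))

    def forward(self, dense, sparse):
        vecs = [self.bottom(dense)]
        vecs += [emb(sparse[:, i]) for i, emb in enumerate(self.embeddings)]
        x = torch.stack(vecs, dim=1)              # (batch, n_vecs, dim)
        dots = torch.bmm(x, x.transpose(1, 2))    # pairwise dot interactions
        i, j = torch.triu_indices(x.size(1), x.size(1), offset=1)
        feats = torch.cat([vecs[0], dots[:, i, j]], dim=1)
        return torch.sigmoid(self.top(feats)).squeeze(1)

model = TinyDLRM()
probs = model(torch.randn(2, 4), torch.randint(0, 1000, (2, 3)))
print(probs)  # predicted click-through probabilities for two examples
```

The open-source DLRM is of course far larger and adds model parallelism for the embedding tables, but the interaction structure above is the essence.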
As for language models, researchers at Google proposed XLNet (Generalized Autoregressive Pretraining for Language Understanding), a new pre-training method that outperforms previous pre-training approaches, including BERT, on a range of benchmarks (paper here).
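If you want to experiment with the pre-trained weights yourself, a minimal sketch follows. It assumes the Hugging Face transformers package and its published “xlnet-base-cased” checkpoint (plus the sentencepiece tokenizer dependency), none of which are part of the paper itself:

```python
# pip install transformers sentencepiece torch  (assumed environment)
import torch
from transformers import XLNetTokenizer, XLNetForSequenceClassification

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForSequenceClassification.from_pretrained(
    "xlnet-base-cased", num_labels=2)  # 2-class head, randomly initialized

# Encode a toy sentence and run one forward pass
inputs = tokenizer("The pace of progress in AI is astounding.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # (1, 2): one score per class
```

The classification head is freshly initialized, so the scores are meaningless until you fine-tune on a labeled dataset.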
As for AI Chips …
Two new companies have joined the community of AI accelerator chip vendors. Israeli startup NeuroBlade emerged from stealth mode and closed a $23M round of funding ($27M total to date). Founded in 2017, the company is developing a smaller, cost-efficient AI chip that uses unique technology to “solve tomorrow’s problems”. Aspinity is a Pittsburgh-based company building accelerator chips using an analog neuromorphic approach. It is promoting the Reconfigurable Analog Modular Processor (RAMP) platform, an ultra-low-power analog processing platform.
Brainchip announced the availability of the Akida Neuromorphic IP. Their approach is based on event-based spiking neural networks (SNNs) and can be implemented in a digital logic process. They also signed a definitive agreement with Socionext to develop a System-on-Chip based on this IP … more
Ambiq Micro, a low-power MCU vendor, announced that its Apollo3 Blue wireless SoC (ARM Cortex-M4F @ 96MHz) has achieved an unprecedented active power consumption of 6 µA/MHz … more
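To put 6 µA/MHz in perspective, here is a back-of-the-envelope battery-life estimate. The 96 MHz clock is from the announcement; the duty cycle, sleep current, and battery capacity are my own illustrative assumptions, not Ambiq’s figures:

```python
# Rough battery-life estimate for a 6 uA/MHz MCU (illustrative assumptions).
clock_mhz = 96                 # Apollo3 Blue's Cortex-M4F clock
active_ua = 6 * clock_mhz      # 6 uA/MHz -> 576 uA while the core is busy
duty_cycle = 0.10              # assume the core is active 10% of the time
sleep_ua = 1.0                 # assume ~1 uA in deep sleep

avg_ua = duty_cycle * active_ua + (1 - duty_cycle) * sleep_ua
battery_uah = 225_000          # a common CR2032 coin cell (~225 mAh)
hours = battery_uah / avg_ua
print(f"average draw {avg_ua:.1f} uA -> about {hours / 24:.0f} days")
# -> average draw 58.5 uA -> about 160 days on a coin cell
```

Numbers in this range are what make always-on, battery-operated inference plausible at all.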
Slowdown in Semiconductor Sales
Clearly the most worrying business issue in the world of semiconductors is the continued decline in global sales, now five straight months long. Total global sales fell to $33.1B in May (down 14% vs. May 2018), and the decline was widespread across all regions. Based on the World Semiconductor Trade Statistics (WSTS) forecast, annual global semiconductor sales will drop 12.1% (to $412B) in 2019. The following are a few factors that have contributed to the decline:
  1. Trade war between US and China
  2. A dramatic drop in ASPs (Average Selling Price) of memories
  3. An overall global economic slowdown
  4. Trade skirmishes with Huawei are probably not a contributing factor yet, but they will certainly cause further harm if they persist
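For reference, the percentages above can be inverted to recover the 2018 baselines. A quick sanity check, using only the numbers quoted in this section:

```python
# Back out the 2018 figures implied by the numbers above.
may_2019 = 33.1           # $B, global sales in May 2019
yoy_drop = 0.14           # -14% vs. May 2018
forecast_2019 = 412.0     # $B, WSTS forecast for full-year 2019
annual_drop = 0.121       # -12.1% forecast decline

may_2018 = may_2019 / (1 - yoy_drop)              # ~ $38.5B
implied_2018 = forecast_2019 / (1 - annual_drop)  # ~ $468.7B
print(f"May 2018 sales: ~${may_2018:.1f}B")
print(f"Implied 2018 annual sales: ~${implied_2018:.1f}B")
```

In other words, the industry is coming off a roughly $469B year, which makes the projected $412B all the more striking.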

Baidu Named Development Partner on Intel Nervana Neural Network Processor for Training
OCP China Day Integrates AI, Edge Computing, and 5G Topics with Open Computing
A Brief Look at tinyML
The official definition of tinyML is the following:
“tinyML is broadly defined as machine learning architectures, techniques, tools and approaches capable of performing on-device analytics for a variety of sensing modalities (vision, audio, motion, environmental, human health monitoring etc.) at “mW” (or below) power range targeting predominately battery operated devices (IoT, bioelectronics, …)”
Terms such as “<1mW”, “AI at the node”, “battery-operated”, and “low footprint” capture the essence of this initiative. tinyML goes beyond AI at the edge or on the device. It tries to address the inference hardware embedded in image sensors, IoT nodes, Inertial Measurement Units (IMUs), ultra-low-power sensors, and wearables (let us call this category the “Extreme Edge”). GPUs are costly and power-hungry; MCUs are all we get. Forget “MB” and “GB”, think “kB”. Forget “Watts”, think “mWatts”.
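To make the “kB, not MB” constraint concrete, here is a rough feasibility check for a hypothetical keyword-spotting CNN; the layer shapes below are made up for illustration and are not from any tinyML specification:

```python
# Does a tiny CNN fit in an MCU's flash? (Illustrative layer shapes.)
layers = [
    # (name, parameter count = weights + biases)
    ("conv1 3x3x1x8",  3 * 3 * 1 * 8 + 8),
    ("conv2 3x3x8x16", 3 * 3 * 8 * 16 + 16),
    ("dense 16x10",    16 * 10 + 10),
]
total_params = sum(p for _, p in layers)
flash_fp32 = total_params * 4  # 4 bytes per weight in float32
flash_int8 = total_params      # 1 byte per weight after int8 quantization
print(f"{total_params} parameters: "
      f"~{flash_fp32 / 1024:.1f} kB as float32, "
      f"~{flash_int8 / 1024:.1f} kB as int8")
# -> 1418 parameters: ~5.5 kB as float32, ~1.4 kB as int8
```

Quantization to int8 (or below) is what lets models like this fit alongside application code in a few hundred kB of flash.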
The concept has clearly struck a chord and the organization has attracted nearly 100 adopters.
My Take:
Although the need, the use cases, and the good intentions are all there, I see the traditional implementations of deep learning hardware becoming more power efficient over time and likely encroaching on tinyML’s territory (the Extreme Edge). Additionally, the computational requirements of devices at the Extreme Edge will undoubtedly rise over time and will most likely demand more horsepower than MCUs can muster. I am hoping that history will prove me wrong.
IoT Nets in Two-Horse LPWAN Race | EE Times
My Take:
The success or failure of the IoT phenomenon depends very much on having a low-cost, low-power wireless connectivity technology (LPWAN). After all, there will eventually be billions of IoT nodes deployed, and they all need to communicate. Addressing such a big market opportunity has been alluring, and nearly a dozen companies have been developing low-power, low-speed wireless technologies to get a piece of the action. It seems that Narrowband IoT (NB-IoT) and LoRaWAN have emerged as the technological leaders and will most likely claim the bulk of the market. Having too many competing technologies causes market fragmentation and makes it much harder to leverage economies of scale.
Odds & Ends
A Remarkable and Impactful Paper
“Neural Discrete Representation Learning” is a much-discussed paper from folks at Google. The authors propose a new generative model based on the Vector Quantised Variational AutoEncoder (VQ-VAE) that can generate remarkable images with stunning clarity and can be applied directly to some current use cases. It is an alternative to Generative Adversarial Networks (GANs), which are hard to train and suffer from mode collapse and bias (trust me, dealing with them is not easy).
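The heart of the method is a single quantization step: each encoder output is snapped to its nearest vector in a learned codebook. Here is a minimal sketch of that step in PyTorch; the sizes (512 codes, 64-dimensional latents) are illustrative, and the encoder and decoder around it are omitted:

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    """Nearest-neighbor lookup into a learned codebook (VQ-VAE core)."""
    def __init__(self, num_codes=512, dim=64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z_e):                                # z_e: (batch, dim)
        # Find the nearest codebook entry for each encoder output
        dists = torch.cdist(z_e, self.codebook.weight)     # (batch, num_codes)
        idx = dists.argmin(dim=1)
        z_q = self.codebook(idx)
        # Straight-through estimator: gradients flow from z_q back to z_e
        z_q_st = z_e + (z_q - z_e).detach()
        # Codebook + commitment losses from the paper (beta = 0.25)
        loss = ((z_q - z_e.detach()) ** 2).mean() \
             + 0.25 * ((z_e - z_q.detach()) ** 2).mean()
        return z_q_st, idx, loss

vq = VectorQuantizer()
z_q, idx, loss = vq(torch.randn(8, 64))
print(z_q.shape, loss.item())  # quantized latents plus the auxiliary loss
```

Because the latents are discrete indices, a powerful prior (the paper uses PixelCNN) can later be trained over them to sample new images.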
Have you ever asked yourself why we need AI models that can generate realistic pictures and sounds, which could just as easily be used to mislead?
There are many ways to answer this question, but my favorite is the following. What sets humans apart from machines is our ability to “imagine” and “create” great things (such as paintings and music) that do not yet exist. To build robots that come close to humans in ability, we must find a way to teach them to go beyond mundane and repetitive tasks and gain the capacity to synthesize unique and meaningful things. Like anything else, technology can be put to good use or bad use. The choice is ours.
A Wonderful Deep Learning Cheat Sheet
I came across this remarkable cheat sheet from Montreal AI.
I hope you have benefited from this issue. Please forward it to others if you find value in this content. I always welcome feedback.
Al Gharakhanian
info@cogneefy.com | www | Linkedin | blog | Twitter