The official profile of tinyML
is the following:
“tinyML is broadly defined as machine learning architectures, techniques, tools and approaches capable of performing on-device analytics for a variety of sensing modalities (vision, audio, motion, environmental, human health monitoring etc.) at “mW” (or below) power range targeting predominately battery operated devices (IoT, bioelectronics, …)”
Terms such as “<1mW”, “AI at the node”, “battery-operated”, and “low footprint” capture the essence of this initiative. tinyML goes beyond AI at the edge or on the device. It aims to address the inference hardware embedded in image sensors, IoT nodes, Inertial Measurement Units (IMUs), ultra-low-power sensors, and wearables (let us call this category the “Extreme Edge”). GPUs are costly and power-hungry. MCUs are all we can get. Forget “MB” and “GB”; think “kB”. Forget “Watts”; think “mWatts”.
The concept has clearly struck a chord and the organization has attracted nearly 100 adopters.
Although the need, the use cases, and the good intentions are all there, I see traditional implementations of Deep Learning hardware becoming more power-efficient over time and likely to encroach on tinyML’s territory (the Extreme Edge). Additionally, the computational requirements of devices at the Extreme Edge will undoubtedly rise over time and will most likely demand more horsepower than MCUs can muster. I am hoping that history will prove me wrong.