Four years ago, most of the world (even in Silicon Valley) hadn’t heard of deep learning and had no idea of the artificial intelligence revolution it would come to drive. Today, it’s powering a not-insignificant percentage of our consumer experiences and might even be a savior for the wearable market.
At least, that’s my takeaway after reading yesterday about the AI (specifically, natural language processing) capabilities Google has built into Android Wear 2.0. It’s not even that I think smartwatches and “smart messaging” are particularly exciting (like a true curmudgeon, I still prefer analog watches and manually typed text messages), but the direction in which we’re heading is. If we’re running NLP models on devices as small as smartwatches today, imagine what our smartphones, home automation hubs and, let’s not forget, civic and scientific sensors will be capable of in the not-too-distant future.
Another story from yesterday that got a lot less press than Android Wear, but highlights what I’m talking about, is the new computer vision division of police-department outfitter Taser. It acquired a startup called Dextro (which I covered back in 2014, if you want some background) and the computer vision team from wearable provider Misfit in order to form a new division called Axon AI. While Dextro focused on making video searchable, and Taser is talking a lot about video analysis, it’s not difficult to imagine a future in which police body cameras and even consumer wearables are able to do advanced computer vision in real time—with or without a cloud connection.
Of course, wearable technologies are really just a subset of the Internet of Things, which is also largely powered by deep learning (at least if devices aim to do anything useful). And IoT is driving investment in edge computing (aka fog computing) to ensure no device is ever without low-latency access to extra processing power or storage capacity.
Finally, on a related note, I recently covered research into a framework that can let groups of consumer devices actually train deep learning models. Other scientists suggest smartphones could benefit from advances in quantum computing. And IBM is apparently showing off promising results from its brain-inspired TrueNorth chip.
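To make the device-training idea concrete: the usual approach is for each device to train on its own local data and share only model updates, which a coordinator averages into a new global model. Here’s a minimal sketch of that averaging scheme, using a toy linear model in place of a real deep network; the function names and dataset are my own illustrative inventions, not from the research I covered.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=5):
    """One device's local training: gradient descent on squared error
    for a simple linear model (a stand-in for a deep network)."""
    w = weights.copy()
    for _ in range(epochs):
        preds = data @ w
        grad = data.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

def federated_average(global_w, device_datasets):
    """Each device trains locally; only the updated weights are shared
    and averaged -- the raw data never leaves the device."""
    local_ws = [local_update(global_w, X, y) for X, y in device_datasets]
    return np.mean(local_ws, axis=0)

# Three hypothetical devices, each holding private samples drawn from
# the same underlying relationship y = 2*x0 - 1*x1.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    devices.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(10):  # ten rounds of local training + averaging
    w = federated_average(w, devices)
```

The point of the design is privacy and bandwidth: each round moves a few model weights over the network instead of raw sensor data, which is what makes training plausible on watches and phones.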
Basically, our stuff is getting really smart, really fast, and in many cases we won’t even need to worry about having an internet connection to use it.