As usual after some time off, I felt obligated to collect a bunch of stuff I missed last week rather than sticking to today’s news. So … enjoy a lot of (I think) really interesting items from the past 6 days with their original headlines and absent my annotations.
Also, before I left I promised you some notes from the O'Reilly AI conference last week. Here they are, loosely organized and based on the sessions I was able to attend (also, you can access at least highlights from the keynotes on the website):
A general observation on enterprise AI companies
The exhibitor list was much shorter than at other O'Reilly events, including the Strata big data conference happening this week in New York. One reason for that might actually be the overlap in themes between the two conferences, with many sponsors opting for the larger, more broadly focused and enterprise-y event.
However, an alternate explanation is that enterprise artificial intelligence is still quite a small market compared with the broader market for data management (which, if we’re being honest, is where a lot of horizontal AI plays might fairly be lumped). If companies are going to sell AI software and services, perhaps they figure they’re more likely to do so targeting existing audiences for big data and data science products than targeting folks concerned with AI research. I also suspect a lot of the really cool companies doing AI are building applications for specific markets, which would make industry-specific trade shows a more compelling place to peddle their wares.
Andrew Ng on making money in AI
Andrew Ng gave a really good keynote targeting people wondering how to actually make money in AI today (at least, that was the message I focused on). Among his insights:
- Today, the money is in supervised learning, structured data, and ads! He’s spoken before about the revenue Baidu was seeing (even a couple years ago) using deep learning for ad targeting, and while it’s not the sexiest use case, it’s certainly a prudent one.
- Deep reinforcement learning has huge potential, but it’s even more data-hungry than traditional supervised deep learning techniques (e.g., convolutional neural networks for image recognition). Ng said he thinks of building a business around AI as a multi-year “chess game,” where the goal is to strategically gather the best data, build models, build the product, and then learn even more from user data—ultimately creating a virtuous cycle.
- There are emerging techniques that don’t require tons of data, but they’re still relatively young and less promising/proven at the moment than deep learning.
- These four things make your company a true AI company rather than just a company that does some AI (he drew an analogy to how Amazon is a web company, whereas a store with a website is just a store with a website):
- Strategic data acquisition
- Centralized data warehouse
- Pervasive automation
- New job descriptions (for example, product managers and engineers working more collaboratively because the interplay between data, algorithms, UI, etc., is so tight in AI that traditional workflows won’t work)
Google’s Jia Li on democratizing AI
Speaking of industry-specific applications of AI, I think Jia Li (head of R&D for the Google Cloud ML team) nailed the rationale for democratizing AI techniques beyond web and mobile platforms. Basically, she acknowledged, there are industries such as health care, agriculture, and education that can benefit from AI, and yet, “Those of us working in AI know very little about them.”
Steve Jurvetson on the Moore’s Law that really matters
Steve Jurvetson of DFJ gave a fairly long talk and then interviewed Intel AI head Naveen Rao. Really, though, you could boil his whole talk down to the chart he presented. The image is inserted below but, if you don’t want to wait, the brief explanation is that rather than computing density, it tracks the cost of computation over time—a metric he concludes is still improving exponentially even as our hardware platforms have changed (i.e., from tabulator machines to GPUs).
Basically, Jurvetson argues (and he concedes he borrowed this idea from Ray Kurzweil) that technologies such as neuromorphic computing, quantum computing and whatever customized ASICs appear won’t technically advance Moore’s Law, but will advance the more important metric of cost/computation. I think that’s a good way to think about all the research and advances we’re seeing in these spaces, and why we’re seeing so much investment in them.
How to sell AI to your company, your partners and your customers
Nara Logics CEO Jana Eggers gave a good, practical talk on some of the cultural and psychological hurdles companies will face in trying to deliver AI products. Here are some of my takeaways, which run the gamut from dealing with your co-workers to communicating with customers:
- Different people and departments own different data, and they are motivated by different things.
- Sometimes people or teams don’t want to share their data because they’re embarrassed by the state it’s in (e.g., it’s not clean, not organized and they don’t even know what’s there).
- Forcing “human-like” AI can backfire, especially when AI that acts like AI works perfectly fine. Eggers gave the example of a sheep dog, which is very good at herding sheep—probably better than a human at that job, and certainly very different—but would look unnatural if we somehow forced one to ride a bike. (I think another good analogy here is the way food companies are determined to make vegetarian “meat”—which sets itself up for unfavorable comparisons—rather than just embracing the fact that vegetarian products are not meat.)
- Companies should prepare for some bad outcomes (e.g., unexpected product recommendations) and some growing pains, but keep focused on the fact that AI done right will deliver better results overall.
- Optimize the benefits of your product strongly enough to overcome the downsides. Her example was how Roombas are known to spread dog poop around a house, but do well enough otherwise and save users enough time that they’re willing to overlook (or live with) the poop situation.
- Raised expectations = chronic underdelivery
Subject-matter experts are your friends in industrial AI
Bonsai CEO Mark Hammond (yes, Bonsai is a sponsor) had some good insights on delivering AI to industrial customers. Some of it was pretty specific to Bonsai’s reinforcement-learning approach, but I would boil the message down to this: You should absolutely take advantage of and integrate with the software products customers already use (in this case, simulation programs) and use customers’ subject-matter (or machine-operating) experts to train systems. Not doing these things involves an awful lot of reinventing the wheel.
However, he noted, the one potential downside to relying too heavily on human knowledge is that the models might not discover novel ways of solving a problem. So customers need to consider how much of the goal is to automate a process, and how much is to remake it.