You can read the article and make your own judgments (and I’d love to hear why I’m wrong), but my basic beef is this: The intelligence part of AI is not just the interface, but also (primarily?) the backend. And most of the backend systems in this particular piece don’t seem very intelligent. In fact, they seem to be doing things we’ve been able to do for years via traditional data analytics and scripts.
That’s all fine and dandy, and companies are free to use such tools to automate certain aspects of their business processes—but presenting relatively dumb software as AI obscures the fact that researchers are making serious progress on systems that could actually be much more intelligent in roles such as customer service and text analysis. (See, for example, this piece on what Maluuba, now part of Microsoft, is working on.) Focusing on technologies with limited capabilities gives workers and policymakers a false sense of what will eventually be possible. This could result in ill-formed opinions about actual risk and short-sighted policy decisions.
On the other hand, focusing on the wrong technologies doesn’t help employers accurately gauge how and when they might optimize their operations with AI. Whether that’s ultimately better or worse for employees remains to be seen—there’s an argument for both—but it would be good for everyone to get a real sense of what’s coming.
A handful of observations from the NYT article:
- Hotel search is pretty much a solved problem, right?
- Identifying correlations (e.g., users of App X spend more, or prefer a certain type of hotel) has been possible at least since the advent of Hadoop. See, for example, this from 2011.
- “Shall” is not a vague term in legal documents. It has a very specific meaning: the party does this, or it has breached the contract or broken the law.
- The real breakthrough in email automation is not suggesting the right reply, but rather answering specific questions that aren’t binary or don’t lend themselves to a website link for more info.
- I’ve been on the receiving end of an x.ai digital assistant for scheduling a meeting. It worked well enough (with maybe one extraneous email), but I actually felt a sense of unease: I suspected it was a bot and didn’t know how personable my responses should be.
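To make the correlation point concrete: spotting a pattern like “users of App X spend more” requires no machine learning at all, just a group-by-and-average over booking records. Here’s a minimal sketch with entirely hypothetical data (the app names and dollar amounts are made up for illustration):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical booking records: (app used, amount spent per night).
bookings = [
    ("AppX", 220.0), ("AppX", 180.0), ("AppX", 240.0),
    ("AppY", 90.0),  ("AppY", 110.0), ("AppY", 95.0),
]

# Plain group-by-and-aggregate -- the kind of job Hadoop clusters were
# running circa 2011, here in a few lines of stdlib Python.
spend_by_app = defaultdict(list)
for app, amount in bookings:
    spend_by_app[app].append(amount)

avg_spend = {app: mean(amounts) for app, amounts in spend_by_app.items()}
# In this toy data, AppX users average more per night than AppY users --
# a correlation surfaced by arithmetic, not "intelligence."
```

That’s the whole trick: traditional analytics, no neural networks required.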