You can’t have escaped the bot-news frenzy of the past few weeks. Bots are hot. The syzygy driving it is as follows:
App store distribution is too expensive; humans are getting used to natural interfaces & chat; our attention is dispersed, and we live in messenger interfaces; AI systems bring bots closer to humanity and further from inanity, especially in narrow domains; micro-service based architectures make integrations easier.
So we can see a value architecture emerging: distribution (owned, or rented from Facebook, Kik, Skype, etc.); a translation layer, where the intelligence lies (owned, or rented from Msg.ai, Wit.ai, etc.); an interface layer to your systems, carrying your business logic; training tools (your own, or rented from CrowdFlower, etc.); and analytics and monitoring.
The big players will attempt to encapsulate all of these areas, and will probably give the tools away for free if they can. By using their NLP layer or image classifier you train them better and better, building them a data/inference-model moat. It will probably prove too tempting (from a time-to-market standpoint) and too easy (from a time-to-quality perspective) to resist letting your distributor, e.g. Microsoft or Facebook, do the whole lot for you.
If you go down this road, then remember the general rule of network platforms, and from recent history the treatment of paid likes by Facebook, or the fate of many an ISV on the DOS or early Windows platforms. There are many reasons to follow the new KISS (Keep It Separate, Stupid).
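One way to keep it separate in practice is to put the rented translation layer behind a thin interface of your own, so your business logic never depends directly on any one provider. A minimal sketch of the idea, with all class and function names hypothetical (real adapters for Wit.ai, Microsoft, etc. would call their APIs behind the same interface):

```python
# "Keep It Separate" sketch: business logic talks only to a thin,
# provider-agnostic intent interface, so the rented NLP layer can be
# swapped without rewriting the bot. All names here are illustrative.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Intent:
    name: str          # e.g. "order_pizza"
    confidence: float  # provider-reported score in [0, 1]


class NLPProvider(Protocol):
    """Anything that can turn a user utterance into an Intent."""
    def parse(self, utterance: str) -> Intent: ...


class KeywordProvider:
    """Stand-in for a rented NLP service; a real adapter would call an API."""
    def parse(self, utterance: str) -> Intent:
        if "pizza" in utterance.lower():
            return Intent("order_pizza", 0.9)
        return Intent("unknown", 0.1)


class BotService:
    """Business logic depends only on the NLPProvider interface."""
    def __init__(self, nlp: NLPProvider):
        self.nlp = nlp

    def handle(self, utterance: str) -> str:
        intent = self.nlp.parse(utterance)
        if intent.name == "order_pizza" and intent.confidence > 0.5:
            return "Great - what toppings would you like?"
        return "Sorry, I didn't catch that."


bot = BotService(KeywordProvider())
print(bot.handle("I'd like a pizza please"))
```

Swapping providers then means writing one new adapter, not touching the bot logic, which is exactly the moat-avoidance the KISS rule argues for.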
If you are building a bot-based service, a bot intelligence layer, a bot development toolkit or have a better-than-human bot working in a domain, I’d love to know. Just hit reply (and change the subject line!)
Smarter thoughts this week on bots, and AI more generally:
“Telling robots how and when – and why – to disobey is far easier said than done.” In order to obey an order, robots need to know how to disobey it. How do you teach them? (See also: how we need to establish codes of behaviour for bots. This touches on one of the issues I realised while using X.AI’s Amy: are anthropomorphic interfaces reasonable when one participant doesn’t realise they are dealing with code, not a human with agency?)