Where to begin … Intel or Docker, Docker or Intel? I’m going to go alphabetical and just choose Docker. I’m also going to jump into it without a real segue.
Docker deciding to include Kubernetes as a container orchestration option
on its platform is both very smart and very necessary. It’s a similar situation, more or less, to when Mesosphere made the same decision last month. Only, Kubernetes has arguably put Docker in an even tougher position, because Docker is balancing so many open source and commercial pieces, including its ubiquitous container format. Whereas Mesosphere could at least offer customers a this-or-that choice between DC/OS and Kubernetes, Docker still wants companies to use Docker containers and has to expend a fair amount of resources keeping that open source community going.
The problem is that it’s difficult to make enterprise sales when users want open source at the lower layers and will only pay (real money, at least) for layers they deem more strategic. If that strategic layer is Kubernetes, then Docker either needs to support it commercially, or let someone else take all the revenue from the orchestration layer up while Docker keeps spending money to keep the free part of the puzzle chugging along. By supporting Kubernetes as part of Docker Enterprise, Docker can now make the argument that nobody understands containers better than it does, and that there’s no real reason not to pay for its enterprise version.
The thing about open source, especially true community projects like Kubernetes, is that pride is often one of the major inhibitors to capitalizing on it. Kudos to Docker for making the smart decision, even if it meant swallowing a little pride and acknowledging that its Swarm orchestration platform simply is not what most users want.
As for Intel’s upcoming Neural Network Processors, which are an attempt to stem the tide of Nvidia GPUs flooding data centers and cloud platforms, I would suggest listening to the recent ARCHITECHT Show podcast interview with Naveen Rao, who leads Intel’s AI products group. Rao was the co-founder and CEO of Nervana Systems, which Intel acquired last year to drive its development of AI-specific hardware. He speaks at length about Intel’s broad range of AI hardware platforms (the NNP joins its Movidius and Altera lineups, as well as the recently announced neuromorphic Loihi chip project) and the strong focus it’s putting on AI software frameworks.
I’ve been saying for a long time that it’s too early to entirely cede the AI space to Nvidia, and today’s news from Intel is more proof of that. The next few years will be really interesting as Intel, Nvidia, Google and a handful of startups fight to own AI from embedded consumer devices all the way up to data centers.
Also, Intel noted that Facebook has helped in developing the NNP architecture, but didn’t go into much detail about what that means. I would bet this has something to do with the fact that Facebook has for years been pushing hardware vendors to design with its needs in mind, and when they haven’t it has responded by designing its own server, storage and networking gear, and then open sourcing it. This might not have been such a big deal 20 years ago, but now that webscale buyers account for such a large percentage of hardware purchases and at least one of them, Google, is building its own AI chips, Intel can’t take shipments for granted.
Facebook knows a lot about AI and what it would like to see in a chip architecture, and will also buy a lot of them, so it’s the kind of company you probably want as a partner in this space.