I’m back from break and ready to rock! Fair warning, though: This issue is long, not entirely chronological, and includes very limited commentary on the items in the body (mostly to clarify vague headlines). Rather than ignore the last two weeks’ worth of news, I opted to read through what I missed and include the most interesting stuff. That did, however, cut into my writing and organizing time ;-)
Several things really stuck out to me, though, which I’ll highlight here:
AWS joins the CNCF, and announces a slew of new products
Cockcroft’s post offers up some context as to why AWS joined, and James Governor at Redmonk has a nice analysis of the whys of this decision, too—including the notion that AWS already runs the lion’s share of Kubernetes deployments, so supporting a technology that drives resource consumption is a smart business decision.
But I think what’s most interesting about the AWS-CNCF announcement is what it doesn’t say. Specifically, neither Cockcroft’s post nor the CNCF press release says anything about AWS building a Kubernetes-based service (which has been rumored recently), and neither actually talks much about Kubernetes at all. And while Cockcroft suggests plans to release open source projects to the CNCF, AWS has not released anything yet.
Considering its dominance in nearly all other facets of cloud computing (including, one could argue, serverless computing), I suspect Microsoft and Google were very happy to see AWS playing catch-up on containers and being branded as the less-open option. So I also suspect they’re waiting with bated breath to see how AWS will execute on its burgeoning open source strategy. The last thing anybody competing with AWS wants is for it to supplement its reputation for efficiency and scale with openness, as well.
AWS also made a bunch of non-container announcements on Monday, further proof that it’s still the cloud provider to beat, but also that even it can’t afford to rest on its laurels:
Speaking of containers and open source …
That being said, Docker is still very involved, very important, and has a huge user base from which it can mine customers or grow deal sizes. So a 30 percent increase from its $1 billion valuation in 2015 seems fair.
People have been wondering since March, when Andrew Ng left his chief scientist role at Baidu, what he would do next. He announced the first of his three new ventures last Tuesday: “deeplearning.ai, a project dedicated to disseminating AI knowledge.” A major part of this is a new series of deep learning courses on Coursera (which Ng also co-founded), which received good reviews from at least one student who successfully completed all of them. (There’s also a good explanation in the review about how the Coursera program differs from the fast.ai program for learning deep learning.)
Ng also released a series of video interviews on YouTube, under the heading Heroes of Deep Learning. His interviewees are Geoff Hinton (Google / University of Toronto), Yoshua Bengio (Element AI / University of Montreal), Ian Goodfellow (Google), Andrej Karpathy (Tesla), Pieter Abbeel (University of California, Berkeley), Ruslan Salakhutdinov (Apple) and Yuanqing Lin (Baidu). I haven’t had time to watch them yet, but I definitely plan to because these are some of the biggest names in the space—Ng included—and I suspect, based on my time chatting with Ng, that he’s a very good interviewer.
OpenAI “masters” Dota 2—or did it? A case study in AI hype and Elon Musk’s AI tweets
I openly admit that I had never heard of the online game Dota 2 until I read about how AI research organization OpenAI built a system that beat some of the world’s best players in a competition held last week. Nonetheless, it’s a notable accomplishment because Dota 2, like other online strategy games, represents new challenges for AI systems over board games like chess and Go, or even simple video games. Mostly, this is because games like Dota 2 are fast-moving and neither party has access to all the information about the other player’s situation or how they might react.
When we think about AI systems that will help us solve truly complex business problems, or interact naturally with humans in homes or factories, these are the types of AI techniques that could help us get there. (In other news, both DeepMind and Facebook released tools and data last week to help researchers trying to crack the StarCraft games.)
Or, as OpenAI explains:
Dota 1v1 is a complex game with hidden information. Agents must learn to plan, attack, trick, and deceive their opponents. The correlation between player skill and actions-per-minute is not strong, and in fact, our AI’s actions-per-minute are comparable to that of an average human player.
Success in Dota requires players to develop intuitions about their opponents and plan accordingly. … [O]ur bot has learned — entirely via self-play — to predict where other players will move, to improvise in response to unfamiliar situations, and how to influence the other player’s allied units to help it succeed.
However, at least one expert explained in a blog post that a 1-on-1 game of Dota 2 is far less complex than a 5-on-5 game (which is a popular format) and that OpenAI’s system might have had access to more information than did the human players. In an interview with The Verge, one of the OpenAI researchers acknowledged there’s some truth to the criticism, but also defended the work and said the organization plans to tackle a 5-on-5 game at next year’s tournament.