On a pretty slow news day, here are the three things that really stuck out to me:
1. Redis Labs raises $44 million: Redis is, of course, a very popular in-memory, open source, key-value database. And Redis Labs appears to have a strong business selling enterprise versions of the software, as well as running a managed cloud service. I like the way the company handles its open source business model: building a relatively simple technology that can grow a large user base, and then not hesitating to productize it.
By the way, it now seems almost like a foregone conclusion that all database startups will eventually offer managed cloud versions—although that certainly was not the case years ago. I’m curious how much strain it puts on smaller companies to maintain and support a technology across open source, enterprise and cloud versions.
2. Facebook is getting flak for its React license: It’s tough to tell if this is much ado about nothing, but back in July the Apache Software Foundation blacklisted Facebook’s React license (which is called BSD + patents) because ASF didn’t like its terms, which allow Facebook to revoke users’ rights to the React patents if they file a patent claim against Facebook. The patent claim does not have to relate to React. And by “blacklisted,” I mean disallowed ASF projects from including dependencies on technologies using that license.
Facebook blogged about the ASF’s decision on Friday, which led to several blog posts, including this really good one explaining what’s up and why it matters. Essentially, it’s a question of whether Facebook is right to try and protect itself from patent lawsuits, whether this is the best way to do it, and then whether the mass adoption of this practice would kill open source. It’s not hard to see how it could at least chill adoption and encourage large companies to just build their own software (or, perhaps, buy it).
3. Facebook details its edge networking infrastructure: Speaking of Facebook, it knows a thing or two about infrastructure when it’s not busy upsetting open source users ;-) And on Monday it published a blog post (and an academic paper) discussing how its Edge Fabric manages traffic flow from users’ devices to Facebook’s data centers.
Obviously, Facebook’s traffic demands are beyond what most companies will ever face, and content networks are different from computing platforms, but this is an interesting read nonetheless. Other companies will have to start thinking about their own edge infrastructure, too—especially if they plan on getting into latency-sensitive areas like IoT and AI. Most won’t build anything on par with what Facebook is doing, but there are interesting options emerging from cloud providers, telcos and even startups, like Vapor.
In this episode of the ARCHITECHT AI Show, Derrick Harris speaks with Jeremy Howard and Rachel Thomas of Fast.ai, where they teach popular online courses aimed at getting students up and running with deep learning. Among other things, Howard and Thomas discuss the promise of deep learning and early student successes (including the Not Hotdog app from Silicon Valley), as well as the threat of job losses from AI and how seriously we should take Elon Musk’s AI warnings.
The company makes a deep learning chip that it wants to embed in all sorts of endpoints. This is such a hot space, but you have to wonder how many new chips the market will really be willing/able to absorb.
The company, Scyfer, spun out of the University of Amsterdam and focuses on building AI and deep learning technologies for industrial clients. Seems like a good buy for Qualcomm, which might want to see its chips make their way into more than smartphones.
That’s right: the narrative on automation is shifting from a future where there are no jobs for humans to a future where humans are just paid poorly. This can happen because the alternative is automation, but also because humans are forced out of the factory, say, and into the service industry. This post on the topic from Nick Carr is good, too.
A group of Chinese internet companies and investors has created the AI Challenger competition, which will feature three massive datasets across different categories and is expected to keep expanding. On the one hand, more data is great for researchers. On the other hand, some folks have expressed concern that China has lax rules about data privacy, which will be a bad thing for citizens if China becomes an AI superpower. If that’s your bag, you could look at this as a corollary.
This is an interesting read on scientists’ reactions to a recent Intel event highlighting work with NASA. In some areas, deep learning is really useful (e.g., identifying moon craters to save rovers), but in other areas (e.g., meteor detection) scientists want more proof and more details.
It’s almost getting repetitive to share these stories, because there seems to be a new one or two every week. But being able to diagnose disease or predict patients’ needs or relapses is a huge deal (and even huger once it’s out of the lab and into the hospital).
You could also call this “private cloud” software, IMHO. It’s about pooling servers to create shared compute and storage pools. It seems like a less-container-centric approach to what companies like Mesosphere and Portworx are doing.
Thanks mostly to containers. The idea behind this framework, called Paracloud, is that applications and infrastructure should now be able to speak to each other and automate scaling, load balancing and workload migration.
This isn’t exactly about scaling infrastructure, but rather about the patterns we see as cities, animals and other things scale. I saw the author of this book present on this work years ago, and it was equally fascinating then.
ARCHITECHT delivers the most interesting news and information about the business impacts of cloud computing, artificial intelligence, and other trends reshaping enterprise IT. Curated by Derrick Harris.