Just a heads up that I’ll be in Seattle part of this week at the GeekWire Cloud Tech Summit. This will almost certainly affect the time you receive these emails Tuesday through Thursday.
For many companies, the path to adopting any new big data technology runs right through Hadoop and Spark, into which large enterprises in particular have already invested millions of dollars and countless man-hours. The popularity of these two technologies can be easy to forget amid all the talk about artificial intelligence, IoT and cutting-edge cloud data services, but the reality is that they're still focal points in so, so many data environments.
Researchers and companies alike are trying their damnedest to create better, faster, easier alternatives, but moving mountains of data and rebuilding pipelines is hard. It's not as if the Hadoop and Spark communities are sitting idle, either. Over the past few months (longer, really), we've seen companies in this space embrace the cloud, IoT and deep learning with new products, and now the respective Apache projects are getting upgrades, as well.
As this handful of items from the past week or so highlights, it's far too early to consign Hadoop and Spark to the dustbin of history: