Welcome to the weekly newsletter covering Deep Learning Patterns, Methodology and Strategy. We’ve come up with a way to organize the topics to appeal to the broadest of audiences. The more general topics are always at the top, while the more specialized ones are towards the bottom. We hope this newsletter will appeal to everyone interested in Deep Learning developments.
Recently, I’ve been spending a lot of time thinking about machine learning, and in particular deep learning. But before that, I was mostly concerning myself with quantum computing, and specifically the algorithmic/theory side of quantum computing.
Google is continuing its big push into the world of AI. The Silicon Valley tech giant has gone one step further toward leaving the rest of the competition behind in machine learning by acquiring Kaggle.
Deep learning has revolutionized the world of artificial intelligence. But how much does it improve performance? How have computers gotten better at different tasks over time, since the rise of deep learning?
We show that evolving models that rival large, hand-designed architectures is possible today. We employ simple evolutionary techniques at unprecedented scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting from trivial initial conditions.
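The core loop behind this kind of evolutionary search can be sketched in a few lines. The following is a toy illustration only, not the paper's method: the genome here is just a list of layer widths, the `fitness` function is a made-up stand-in score (the actual work trains and evaluates convolutional networks on CIFAR), and all names are assumptions for illustration. It does show the "trivial initial conditions" idea: every individual starts as a single narrow layer, and repeated tournament selection plus mutation grows better structures.

```python
import random

def fitness(genome):
    # Toy stand-in score, purely illustrative: rewards depth and widths
    # near 64. In the real setting this would be validation accuracy
    # after training the candidate architecture.
    return 10 * len(genome) - sum((w - 64) ** 2 for w in genome)

def mutate(genome, rng):
    # Copy the parent and apply one random structural edit.
    g = list(genome)
    op = rng.choice(["widen", "narrow", "add", "remove"])
    if op == "add":
        g.insert(rng.randrange(len(g) + 1), rng.choice([16, 32, 64, 128]))
    elif op == "remove":
        if len(g) > 1:
            g.pop(rng.randrange(len(g)))
    elif op == "widen":
        g[rng.randrange(len(g))] *= 2
    else:  # narrow
        i = rng.randrange(len(g))
        g[i] = max(1, g[i] // 2)
    return g

def evolve(generations=200, pop_size=20, seed=0):
    rng = random.Random(seed)
    # Trivial initial conditions: every individual is one narrow layer.
    pop = [[8] for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament: sample two individuals; the fitter one's mutated
        # copy replaces the weaker one.
        a, b = rng.sample(range(pop_size), 2)
        winner, loser = (a, b) if fitness(pop[a]) >= fitness(pop[b]) else (b, a)
        pop[loser] = mutate(pop[winner], rng)
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Even this toy version reliably discovers genomes far fitter than the trivial starting point; the paper's contribution is running essentially this simple a procedure at very large scale on real image classifiers.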