On a relatively slow news day (in tech, at least), here are a handful of the headlines that caught my eye, spanning from deeper analysis of Microsoft’s cloud revenue to more evidence that artificial intelligence is set to have a big impact on how doctors diagnose diseases:
Analysts say Azure is still a small part of Microsoft’s overall cloud revenue: I think we all kind of knew this, but here are stories quoting multiple analysts (IT and financial) noting that Microsoft’s $20 billion cloud run rate is largely attributable to its SaaS products rather than anything Azure-related. That being said, the 90 percent growth Microsoft reported for Azure suggests this might not be the case for long.
AT&T is partnering with the Linux Foundation on a project to share AI models:
Details are kind of vague at the moment, but the project, called Acumos, appears to be centered on sharing AI models and applications. AT&T describes it like this:
It’s an extensible framework for machine learning solutions. It provides the capability to edit, integrate, compose, package, train and deploy AI microservices. Simply put, it’s an AI marketplace where applications can be chained to create complex and sophisticated AI services.
Naturally, there’s a strong network component to this, with AT&T claiming it wants to build a network standard for piecing together various components into a single application.
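To make the “chaining” idea concrete, here is a minimal sketch of what composing AI microservices into a single application might look like. Everything here is illustrative — Acumos’ actual interfaces haven’t been published, and the function and service names below are my own invention:

```python
# Hypothetical sketch of chaining AI microservices, in the spirit of
# AT&T's Acumos description. Each "microservice" is just a callable
# that transforms its input; chain() composes them into one pipeline.
# These names are illustrative, not Acumos APIs.

def chain(*services):
    """Compose microservices left to right into a single callable."""
    def pipeline(data):
        for service in services:
            data = service(data)
        return data
    return pipeline

# Two toy "microservices": one normalizes text, one classifies it.
def normalize(text):
    return text.strip().lower()

def classify(text):
    label = "greeting" if "hello" in text else "other"
    return {"input": text, "label": label}

# A "complex AI service" built by chaining simpler ones.
greeting_detector = chain(normalize, classify)
result = greeting_detector("  Hello, world!  ")
# result["label"] == "greeting"
```

A real marketplace would presumably chain trained models behind network endpoints rather than local functions, but the composition pattern — each component consuming the previous one’s output — would be the same.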
You can see why it’s interested in this space: AT&T presumably wants a bigger piece of the AI pie than it gets from simply moving packets across its network. A successful open source project would give AT&T’s standards a central role in communications for things like connected cars and IoT, and would also make it easier for AT&T to sell higher-level support and services around Acumos. (This WIRED story has some more thoughts on how AT&T stands to benefit from Acumos.)
You have to laud AT&T for embracing open source over the past year or so, but it doesn’t exactly have the history or experience to suggest it’s ready to drive such a large effort in such a big space. I’m also curious to see whether, or how, companies like Amazon and Google, which own so much of the rest of the AI tech stack, will cooperate with AT&T on this.
AI is tackling new disease diagnosis, and is also getting easier to use: Stories about deep learning models accurately diagnosing diseases are nothing new, but every now and then it’s good to highlight some new use cases.
Today, it’s a Japanese university claiming real-time diagnosis of colorectal cancer
during a colonoscopy. The researchers claim an 86 percent accuracy rate at the moment, which is obviously good, but I don’t know how high that number would have to be before a technique like this is approved for real-world use. The trade-off – as with many things in today’s connected society – is between fast results and accurate results.
Other research that made news today involves researchers training a model that can detect suicidal tendencies by analyzing data from brain scans. Here, too, we have a potentially amazing application; the challenges are achieving high enough accuracy (perhaps with confidence intervals) and, for practitioners, figuring out how to act on the diagnoses the system makes.
Finally, there’s news that the smartphone-based, deep-learning-powered ultrasound device by Butterfly Network (a) works and (b) is nearly set to go on sale for $1,999.
I profiled Butterfly back in 2014, when the company’s founder was making some big claims. Perhaps he’s about to live up to them, and once-expensive procedures will become a lot more common and a lot more affordable.