
Fair Comparison of Performance Metrics in AI Hardware - Issue #17

October 15 · Issue #17
Artificial Intelligence Technology Update
MLPerf: Real Progress is Being Made
Having a unified, standard way of measuring the performance of Deep Learning products can go a long way toward helping customers choose products that fit their performance requirements in a specific setting. This is exactly the charter of an industry consortium called MLPerf.
I attended MLPerf’s community meeting last week and was impressed by the work that is being done by the team.
By way of introduction, MLPerf is a consortium of vendors with the charter of developing benchmarks for measuring the training and inference performance of machine learning hardware, software, and services. Such benchmarks are invaluable for comparing products, since vendor-specific benchmarks can be biased and highly dependent on arbitrary assumptions. The consortium includes 54 companies and more than 1100 members. Their benchmarks cover broad categories in both training and inference and span various use cases (e.g., vision, speech, language, commerce, mobile vision). As an example, an AI accelerator chip vendor wanting to use the training benchmark will run it on their platform using a standard dataset, the objective being to measure the time needed to train models to reach a target quality metric (accuracy, F1, BLEU, …).
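The time-to-train idea above can be sketched as a simple measurement loop. Everything here (the function names, the toy quality curve, the target value) is hypothetical illustration, not MLPerf's actual harness:

```python
import time

def time_to_target(train_one_epoch, evaluate, target, max_epochs=100):
    """Wall-clock time until the model reaches a target quality metric,
    in the spirit of MLPerf's time-to-train measurement (hypothetical sketch)."""
    start = time.perf_counter()
    for epoch in range(max_epochs):
        train_one_epoch(epoch)       # one pass over the standard dataset
        if evaluate() >= target:     # e.g. accuracy, F1, or BLEU
            return time.perf_counter() - start
    return None                      # target quality never reached

# Toy stand-ins for a real training loop: quality improves each epoch.
quality = [0.2, 0.5, 0.8, 0.95]
state = {"epoch": -1}

def train_one_epoch(epoch):
    state["epoch"] = epoch

def evaluate():
    return quality[min(state["epoch"], len(quality) - 1)]

elapsed = time_to_target(train_one_epoch, evaluate, target=0.9)
print(elapsed is not None)  # True: the toy model reaches 0.9 by epoch 3
```

The key design point is that the clock measures everything — data loading, training, and evaluation — so faster hardware only wins if it actually reaches the quality target sooner.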
The news this time was the launch of working groups to develop power benchmarks, along with discussions about covering TinyML.
Good things are being done in this domain.

TensorFlow Estimators
Google’s TensorFlow is the leading open-source machine learning development and deployment platform, enjoying brisk growth in adoption, especially in corporate settings. It is large, diverse, and powerful, but not easy to use. Its biggest issue, in my view, is its numerous versions and the incompatibility of commands across them. One can literally spend days making the tweaks needed to migrate from one version to another. Weaknesses aside, one of its most powerful features is ‘Estimators’. An Estimator bundles a large collection of attributes and methods into a single object that can be used for training, evaluation, and prediction. It offers powerful ways to define how data flows from input features and labels all the way to the outputs, and versatile means of measuring model performance. Last but not least, saving the model and its corresponding parameters becomes a breeze.
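A minimal sketch of this workflow, assuming a TensorFlow release that still ships `tf.estimator` (it was removed in recent versions): a single `DNNClassifier` object handles training, evaluation, and checkpointing. The toy dataset and the `model_dir` path are hypothetical:

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy dataset: 2 features, binary labels.
x = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]], dtype=np.float32)
y = np.array([0, 1, 1, 0], dtype=np.int32)

def input_fn():
    # An input_fn defines how features and labels flow into the model.
    ds = tf.data.Dataset.from_tensor_slices(({"x": x}, y))
    return ds.shuffle(4).repeat().batch(4)

def eval_input_fn():
    return tf.data.Dataset.from_tensor_slices(({"x": x}, y)).batch(4)

# One object for training, evaluation, and prediction; checkpoints
# (model plus parameters) are saved automatically under model_dir.
estimator = tf.estimator.DNNClassifier(
    feature_columns=[tf.feature_column.numeric_column("x", shape=[2])],
    hidden_units=[8],
    n_classes=2,
    model_dir="/tmp/estimator_demo",  # hypothetical checkpoint directory
)

estimator.train(input_fn=input_fn, steps=200)
metrics = estimator.evaluate(input_fn=eval_input_fn)
print("accuracy" in metrics)  # evaluate() returns a dict of metrics
```

Note that nothing extra is needed to persist the model: training writes checkpoints to `model_dir`, which is exactly the “saving becomes a breeze” point above.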
Introducing a Conditional Transformer Language Model for Controllable Generation (CTRL)
Eigen Technologies
=========================================================
Hope you have benefited from this issue. Please forward to others if you find value in this content. I always welcome feedback.
Al Gharakhanian
info@cogneefy.com | www | Linkedin | blog | Twitter
Powered by Revue