
AI Scholar Weekly - Issue #52

By Educate AI  • Issue #52 • View online
Google Recently Fired Another Ethical AI Expert; Free Webinar on AI/ML; Open Source Platform for Finding the Best ML Models; GAN Paradigm Based on Pure Transformers; A New, Fast and Privacy-Preserving Framework for Distributed ML (and more)

Top AI Research Papers This Week!
#1 Google Announces Open Source Platform for Finding the Best ML Models
It’s not easy to design neural networks that generalize well across tasks. AutoML tries to help by finding the right neural network for a task without manual experimentation.
However, AutoML algorithms demand substantial computing resources, and they explore domain-specific search spaces that do not transfer well across domains.
To overcome these challenges and widen access to AutoML solutions, Google AI has announced the open-source release of Model Search, a platform that helps researchers develop the best ML models efficiently and automatically.
Model Search is built on TensorFlow and can build on prior knowledge for a given domain. According to Google researchers, the framework is powerful enough to create models with state-of-the-art performance on well-studied problems when given a search space composed of standard building blocks.
What the results say: Model Search was applied to find an architecture for image classification on the CIFAR-10 dataset. It reached a benchmark accuracy of 91.83% in just 209 trials, whereas previous top performers needed 5,807 trials to reach the same accuracy.
GitHub link
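The core idea, sampling candidate architectures from a space of building blocks and keeping the best-scoring one, can be sketched in a few lines. This is a minimal, self-contained illustration of random architecture search, not Google's Model Search API; the block names, search space, and scoring function are all made up, and the `evaluate` function stands in for actually training a model and measuring validation accuracy.

```python
import random

# Candidate building blocks, loosely mirroring the "standard building
# blocks" idea; these names and options are illustrative only.
SEARCH_SPACE = {
    "num_layers": [2, 4, 8],
    "block_type": ["conv", "residual", "bottleneck"],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def sample_architecture(rng):
    """Draw one candidate model configuration from the search space."""
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in for training + validation: returns a mock score.

    A real system would train the candidate model and measure
    held-out accuracy instead of this hand-made formula.
    """
    score = 0.5
    score += 0.04 * arch["num_layers"]
    score += {"conv": 0.00, "residual": 0.05, "bottleneck": 0.03}[arch["block_type"]]
    score += {1e-2: 0.00, 1e-3: 0.04, 1e-4: 0.01}[arch["learning_rate"]]
    return score

def search(num_trials=20, seed=0):
    """Evaluate random candidates and keep the best one seen so far."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(num_trials):
        arch = sample_architecture(rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

best_arch, best_score = search()
```

Real AutoML systems replace the blind random sampling above with learned or evolutionary search strategies, which is where the large trial-count savings come from.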
#2 A Python API for Rapid Machine Learning Model Development
With Python as one of the major programming languages in Machine Learning today, several high-quality and well-maintained open-source Python toolboxes exist.
This paper introduces updates in PHOTONAI Version 3, a high-level Python API for designing and optimizing machine learning pipelines.
The PHOTONAI framework is built to accelerate and simplify the design of machine learning models. It adds an abstraction layer on top of existing solutions and can simplify, structure, and automate the training, optimization, and testing workflow.
As a high-level Python application programming interface (API), it can considerably accelerate design iterations and simplify the evaluation of novel analysis pipelines. In addition, it adds several unique and convenient features for machine learning pipeline setup, offers diverse hyperparameter optimization strategies, and automates unbiased model evaluation.
Source code on GitHub. Get the paper: PDF format
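The abstraction-layer idea is worth making concrete: chain preprocessing steps and an estimator behind one fit/predict interface, then sweep hyperparameters automatically. The sketch below is a toy, pure-Python version of that pattern, not PHOTONAI's actual API; all class names, the threshold classifier, and the toy data are invented for illustration.

```python
class Standardize:
    """Preprocessing step: scale features to zero mean / unit variance."""
    def fit(self, X):
        n = len(X)
        self.mean = sum(X) / n
        var = sum((x - self.mean) ** 2 for x in X) / n
        self.std = var ** 0.5 or 1.0  # avoid division by zero
        return self
    def transform(self, X):
        return [(x - self.mean) / self.std for x in X]

class ThresholdClassifier:
    """Estimator: predict 1 when the (scaled) feature exceeds a threshold."""
    def __init__(self, threshold=0.0):
        self.threshold = threshold
    def fit(self, X, y):
        return self
    def predict(self, X):
        return [1 if x > self.threshold else 0 for x in X]

class Pipeline:
    """Chains transforms and an estimator behind a single interface."""
    def __init__(self, steps, estimator):
        self.steps, self.estimator = steps, estimator
    def fit(self, X, y):
        for step in self.steps:
            X = step.fit(X).transform(X)
        self.estimator.fit(X, y)
        return self
    def predict(self, X):
        for step in self.steps:
            X = step.transform(X)
        return self.estimator.predict(X)

def accuracy(pipe, X, y):
    preds = pipe.predict(X)
    return sum(p == t for p, t in zip(preds, y)) / len(y)

# Toy data: label 1 for "large" values.
X = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
y = [0, 0, 0, 1, 1, 1]

# Grid search over the estimator's threshold, the way a framework
# would sweep hyperparameters for you.
best = max(
    (Pipeline([Standardize()], ThresholdClassifier(t)).fit(X, y)
     for t in (-1.0, 0.0, 1.0)),
    key=lambda pipe: accuracy(pipe, X, y),
)
```

A framework like PHOTONAI layers cross-validation, diverse optimizers, and unbiased evaluation on top of exactly this kind of pipeline object, so the user only declares the steps and the hyperparameter ranges.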
#3 A New, Fast and Privacy-Preserving Framework for Distributed Machine Learning
How can we offload the training task to a distributed computing platform while maintaining the dataset’s privacy?
For starters, training ML models is challenging due to the typically large volumes of data and model complexity. Secondly, training often involves sensitive data, such as healthcare records, browsing history, or financial transactions, which raises data security and privacy issues. 
For distributed machine learning, this paper presents CodedPrivateML, a fast and scalable privacy-preserving training framework. CodedPrivateML keeps both the data and the model information-theoretically private while allowing efficient parallelization of training across distributed workers. Results: Researchers characterize CodedPrivateML’s privacy threshold and prove its convergence for logistic and linear regression. 
In extensive experiments on Amazon EC2, CodedPrivateML provides significant speedups over cryptographic approaches based on secure multi-party computation (MPC).
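To give a feel for information-theoretic privacy in distributed training, here is a toy additive secret-sharing scheme over a prime field: each worker's share alone is a uniformly random field element and reveals nothing about the data, yet workers can still compute on shares. This is only a conceptual illustration; CodedPrivateML itself uses coded computing techniques, not this particular scheme.

```python
import random

P = 2**61 - 1  # a Mersenne prime, used as the field modulus

def share(secret, num_workers, rng):
    """Split `secret` into additive shares: any strict subset of shares
    is uniformly random and reveals nothing about the secret."""
    shares = [rng.randrange(P) for _ in range(num_workers - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Only the sum of ALL shares (mod P) recovers the secret."""
    return sum(shares) % P

rng = random.Random(42)
data_point = 123456789  # e.g. one sensitive feature value
shares = share(data_point, num_workers=4, rng=rng)

# Workers can compute on shares without seeing the data, e.g. adding
# two secrets share-wise: the result is a sharing of the sum.
other = share(1000, num_workers=4, rng=rng)
sum_shares = [(a + b) % P for a, b in zip(shares, other)]
```

The speedup claim in the paper comes from replacing the expensive cryptographic protocols of MPC with coding-theoretic redundancy, while keeping this same "no single worker learns anything" guarantee.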
#4 Exploiting Video Calls for Keystroke Inference Attacks on Zoom
Video calls have become the new norm for both personal and professional remote communication.
However, a participant in a video call who is not careful can reveal private information to others on the call.
In this paper, researchers design and evaluate an attack framework to deduce one type of such private information from the video stream of a call – keystrokes, i.e., text typed during the call.
They evaluate the proposed video-based keystroke inference framework using different experimental settings and parameters, including different webcams, video resolutions, keyboards, clothing, and backgrounds.
Their work also proposes and evaluates effective mitigation techniques that can automatically protect users when they type during a video call.
#5 A New GAN Paradigm Based on Pure Transformers
Can two transformers make one strong GAN? This research presents the first pilot study on building a GAN completely free of convolutions, using only a pure transformer-based architecture.
Researchers present a comprehensive set of efforts and innovations towards building pure transformer-based GAN architectures, dubbed TransGAN.
They carefully crafted the architectures and thoughtfully designed the training techniques. The pure transformer-based architecture brings versatility to TransGAN.
As a result, TransGAN achieves performance comparable to some state-of-the-art CNN-based GAN methods across multiple popular datasets.
Still, there is considerable room for TransGAN to improve before it can outperform the best hand-designed GANs by a larger margin.
The code is available on GitHub. Read the paper: TransGAN ~ Two Transformers Can Make One Strong GAN
Other Great AI Papers
Data analytics and machine learning methods, techniques, and tools for model-driven engineering of smart IoT services. Read the full paper
Research on bias in Machine Learning has focused on two issues: how to measure bias and how to ensure fairness. This research paper examines the contribution of the classifier algorithm to bias. Read it here
Using machine learning for detection of hate speech and offensive code-mixed social media text. Read more
Researchers propose UniT, a Unified Transformer model that simultaneously learns the most prominent tasks across different domains, from object detection to language understanding and multimodal reasoning, in this paper
Mine Your Own vieW (MYOW) is a new approach for finding samples within the dataset that can serve as positive examples for one another. Read more
AI Resources
How to use machine learning and artificial intelligence to reduce operations and maintenance costs in vegetation management FREE WEBINAR
I created a 40,000-sample labeled audio dataset in 4 hours of work. Here’s how I did it
Top AI News
Google fires a second top AI ethics researcher. Read story
Engineers at the University of California San Diego have created a four-legged soft robot that doesn’t need any electronics to work: an electronics-free, air-powered robot. See the demo video
About AI Scholar
Thanks for reading. Create a ripple effect by sharing this AI Scholar Newsletter with someone else, and they will also be lit up!
If you have suggestions, comments, or other thoughts, we would love to hear from you, email me at chris@educateai.org, tweet at @cdossman, like us on Facebook, or connect with me on LinkedIn
Educate AI

AI Scholar Weekly brings you everything new and exciting in the world of Artificial Intelligence, Machine Learning, and Deep Learning every week for free.

In order to unsubscribe, click here.
If you were forwarded this newsletter and you like it, you can subscribe here.
Powered by Revue