AI Scholar Weekly - Issue #50
By Educate AI • Issue #50
Why Should We Trust Decisions Made by a Neural Network? Top 6 Errors Machine Learning Engineers Make; How to Create a Game Character Face from a Single Portrait; 27 Best Resources to Study Machine Learning (and more).

Top AI Research This Week!
Image by Gerd Altmann.
#1 A Different Benchmark for Evaluating Progress in NLG
This research introduces GEM (Generation, Evaluation, and Metrics), a benchmark for measuring progress in natural language generation (NLG).
Advances in NLG are diversifying rapidly, promising natural, effective communication between humans and computers that once seemed out of reach. Models such as GPT-3 demonstrate strong potential for building robust applications.
GEM provides an environment in which models can easily be applied to a broad set of corpora and evaluated with a range of strategies.
Specifically, GEM aims to help researchers measure NLG progress across 13 datasets spanning many NLG tasks and languages. It also aims to provide in-depth analysis of the data and models via data statements and challenge sets, and to develop standards for evaluating generated text with both automated and human metrics, enabling research on a wide range of NLG challenges.
As models improve, we need to evaluate them on more challenging datasets instead of moving sideways on old ones. GEM aims to provide this environment for natural language generation.
Why GEM is useful: It will help you test language models. The research team also plans to organize a shared task at the ACL 2021 Workshop and invite the entire NLG community to participate.
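GEM's datasets are distributed through the Hugging Face datasets library, so trying a model against one task is straightforward. A minimal sketch of the data-access pattern, assuming the "gem"/"common_gen" config and its "concepts"/"references" fields are still current; the generate function is a placeholder for your own model:

```python
# Hedged sketch: load one of GEM's 13 tasks and pair model outputs with
# references for downstream metrics. GEM ships its own evaluation suite;
# this only shows the data-access pattern.
from datasets import load_dataset

def generate(model_inputs):
    """Placeholder for your NLG model; returns one generated string per input."""
    return ["a dog runs through a field" for _ in model_inputs]

dataset = load_dataset("gem", "common_gen", split="validation")
predictions = generate(dataset["concepts"])

for pred, refs in zip(predictions[:3], dataset["references"][:3]):
    print(pred, "|", refs)
```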
#2 New Google Research Shows ML Can Facilitate Architecture Exploration and Suggests High-performing Architectures
Recent progress in ML has been spurred by custom accelerators, including Google's TPUs and Edge TPUs. These developments have improved neural network training and inference performance, opening new possibilities across a wide range of AI and ML applications.
Researchers and engineers must continue to innovate, enhancing architecture designs and computing power while adapting to rapidly evolving ML models and applications. New designs will also unlock new capabilities.
This paper is about: Google AI presenting their research on ML-driven design of custom accelerators. They propose a transferable architecture exploration framework, dubbed Apollo, that leverages recent advances in black-box function optimization for sample-efficient accelerator design. Apollo facilitates the design of new accelerators under different design constraints by leveraging transfer learning. The encouraging results point to a promising path for generating higher-quality accelerators.
It matters because: Google sees it as an exciting path forward for exploring ML-driven techniques for architecture design and co-optimization across the computing stack, toward efficient accelerators with new capabilities for the next generation of applications. Read more on their blog.
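Apollo's internals are not reproduced here, but the black-box setting it operates in is easy to picture: propose a hardware configuration, evaluate it with an expensive simulator, keep the best. A purely illustrative sketch, where every name (the search space, the simulator) is hypothetical and the optimizer is naive random search rather than Apollo's sample-efficient methods:

```python
# Illustrative black-box accelerator-design loop (not Apollo's algorithm).
import random

SEARCH_SPACE = {
    "pe_array_dim": [16, 32, 64, 128],   # processing-element grid size (hypothetical)
    "sram_kb": [256, 512, 1024, 2048],   # on-chip buffer size (hypothetical)
    "bandwidth_gbps": [25, 50, 100],     # off-chip memory bandwidth (hypothetical)
}

def simulate_latency(config):
    """Stand-in for an expensive accelerator simulator (hypothetical)."""
    return 1e6 / (config["pe_array_dim"] * config["bandwidth_gbps"]) + random.random()

best, best_latency = None, float("inf")
for _ in range(50):  # sample budget: black-box methods try to shrink this
    config = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
    latency = simulate_latency(config)
    if latency < best_latency:
        best, best_latency = config, latency

print(best, best_latency)
```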
#3 Create a Game Character Face from a Single Portrait
Many deep learning-based 3D face reconstruction methods have been proposed recently; however, few have applications in games. 
In this paper, scholars present a new method for the automatic creation of game character faces. It predicts both facial shape and texture from a single portrait and can be integrated into most existing 3D games.
Although 3D Morphable Face Model (3DMM) based methods can restore accurate 3D faces from single images, the topology of 3DMM mesh is different from the meshes used in most games.
Why it matters: The proposed method is a low-cost way to generate the data needed for training. Experiments also show that it outperforms the existing state-of-the-art methods applied in games.
#4 Google AI: A Simple Method to Estimate Data Training Influence 
 A model’s training data can have a significant impact on its performance. The notion of influence, the degree to which a given training example affects the model and its predictive performance, is used to measure data quality. However, it is challenging to quantify influence.  
Google Research has studied the problem of identifying the influence of training examples on the prediction for a test example and proposed a method called TracIn to determine the influence of a training data point on a test point.
The TracIn method: is simple, a feature that distinguishes it from previous approaches. Implementing TracIn requires only a basic understanding of standard machine learning concepts like gradients, checkpoints, and loss functions.
TracIn is also general and versatile: it applies to any machine learning model trained using stochastic gradient descent or a variant of it, agnostic of architecture, domain, and task. The researchers note, however, that some human judgment is required to apply TracIn correctly. Read more: Estimating Training Data Influence by Tracing Gradient Descent.
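The underlying estimator is compact: the influence of a training example on a test example is approximated by summing, over saved training checkpoints, the learning rate times the dot product of the loss gradients at the two examples. A minimal PyTorch sketch of that idea, assuming you kept checkpoints and per-checkpoint learning rates during training (the function is illustrative, not Google's released implementation):

```python
# Sketch of the TracIn estimator: influence(z, z') =
#   sum over checkpoints of lr * <grad loss(z), grad loss(z')>.
import torch

def tracin_influence(model, checkpoints, lrs, loss_fn, train_example, test_example):
    """Approximate TracIn influence of one training point on one test point.

    checkpoints: list of state_dicts saved during training (an assumption
    about how you stored them); lrs: learning rate used at each checkpoint.
    """
    influence = 0.0
    for state_dict, lr in zip(checkpoints, lrs):
        model.load_state_dict(state_dict)

        x_tr, y_tr = train_example
        grad_train = torch.autograd.grad(
            loss_fn(model(x_tr), y_tr), model.parameters()
        )

        x_te, y_te = test_example
        grad_test = torch.autograd.grad(
            loss_fn(model(x_te), y_te), model.parameters()
        )

        # Dot product of the two gradients, accumulated across checkpoints.
        influence += lr * sum(
            (g1 * g2).sum() for g1, g2 in zip(grad_train, grad_test)
        ).item()
    return influence
```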
#5 Why Should We Trust Decisions Made by a Neural Network?
Advanced machine learning techniques have strongly influenced many aspects of society by taking over human roles in various complicated tasks. Critical decisions, including medical diagnoses, are being made based on machine learning models’ predictions with limited human intervention or supervision.
But what happens inside a DNN? What does each layer of a deep architecture do? What features is a DNN looking for?
There are no sufficient answers to these questions, as ML algorithms do not provide adequate clues about their internal processing. Yet it is of paramount importance to understand, trust, and “explain” the reasoning behind deep models’ decisions.
This situation has fueled interest in Explainable Artificial Intelligence (XAI) and motivated scientists to research the area.
Towards this effort: In this paper, a group of researchers presents the necessity of explaining, visualizing, and understanding Deep Neural Networks (DNNs), especially in complex machine learning and critical computer vision tasks.
They review the state-of-the-art interpretation and explanation methods in three main categories (structural analysis, behavioral analysis, and explainability by design) to provide a comprehensive overview of understanding, visualizing, and explaining the internal and overall behavior of DNNs. Get the full paper: An Explanation of Deep Neural Networks
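As a taste of what the survey's "behavioral analysis" category can look like in practice, here is a minimal vanilla gradient saliency sketch in PyTorch, one of the simplest explanation techniques in the XAI literature. The untrained ResNet and random input are stand-ins; in practice you would use any trained classifier and a real image:

```python
# Vanilla gradient saliency: which input pixels most affect the top class score?
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()  # stand-in: use any trained CNN
image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in input image

score = model(image).max()  # logit of the top-scoring class
score.backward()            # gradient of that score w.r.t. the pixels

# Saliency map: gradient magnitude, reduced over color channels.
saliency = image.grad.abs().max(dim=1).values  # shape: (1, 224, 224)
```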
Other Great AI Research Papers
Deep RL has recently demonstrated promise in enabling physical robots to learn complex skills in the real world. Researchers have released a resource for both roboticists and machine learning researchers interested in furthering the progress of deep RL in the real world, built around several case studies that are not often the focus of mainstream RL research. Lessons We’ve Learned
This research highlights the progress and challenges in current drug development studies using both artificial intelligence and real-world data. Read more
Transformers are powerful neural architectures that allow integrating different modalities using attention mechanisms. In this paper, researchers leverage the neural transformer architectures for multi-channel speech recognition systems, where the spectral and spatial information collected from different microphones are integrated using attention layers. Read more
This paper presents STS-GAN, a new neural network-based approach to solid texture synthesis built on generative adversarial networks, in which a generator composed of multi-scale modules learns the internal distribution of a 2D exemplar and extends it to a 3D solid texture. Read more
Although deep learning models like CNNs have achieved great success in medical image analysis, the small size of medical datasets remains a grand challenge. In this survey, scholars summarize the current progress on integrating medical domain knowledge into deep learning models for various tasks, such as disease diagnosis; lesion, organ, and abnormality detection; and lesion and organ segmentation. Read more
AI Resources
Artificial Intelligence For Good: How AI Is Helping Humanity
Here are the Top 6 Errors Novice Machine Learning Engineers Make. Click to read
Are you in a Machine Learning tribe? Learn what a ML tribe is, and how you can find yourself one. Read more
27 Best Resources to Study Machine Learning. See here
Top AI News
AI Tool Emerges to Accelerate COVID-19 Vaccines that Battle New Virus Mutations. Read full story
‘Audeo’ teaches artificial intelligence to play the piano. Read more
AI Scholar Weekly
Thanks for reading. Create a ripple effect by sharing this AI Scholar Newsletter with someone else, and they will also be lit up!
If you have suggestions, comments, or other thoughts, we would love to hear from you, email me at chris@educateai.org, tweet at @cdossman, like us on Facebook, or connect with me on LinkedIn
Educate AI

AI Scholar Weekly brings you everything new and exciting in the world of Artificial Intelligence, Machine Learning, and Deep Learning every week for free.

In order to unsubscribe, click here.
If you were forwarded this newsletter and you like it, you can subscribe here.
Powered by Revue