AI Scholar Weekly - Issue #39
By Educate AI • Issue #39
AI job trends important to watch in 2021; 83% ImageNet Top-1 Accuracy in 1 Hour; Personalized open-domain conversational bot; RL with Videos; Google Launches New AI Tools for Healthcare (and more)

AI Success Cases
Storebrand achieved a significant ROI and experienced a 162% increase in customer engagement in just 4 months after implementing an AI agent
Goal: Increase capacity and automate responses to an ever-growing number of customer queries without adding support staff.
The Challenge: Insurance is often thought of by consumers as an important but decidedly unexciting industry. It's common to hear customers complain of having to wade through mountains of complex jargon and legalese just to file claims or find answers to their questions. And who can blame them? The last thing anyone wants to do in a time of crisis is search helplessly through a website or sit on hold over the phone for minutes, or even hours, on end.
Storebrand, an insurance company, developed a virtual insurance agent to complement its existing customer service channels, one that goes far beyond the capabilities of an ordinary chatbot. Still in its early stages, the virtual agent is doing the equivalent work of eight full-time human employees and can respond to questions on over 1,900 topics.
Results: In implementing a virtual insurance agent, Storebrand was not only looking to shed its industry's stiff, conservative image, but also to increase its capacity and automate responses to an ever-growing number of customer queries without adding support staff. Here are the results in numbers:
  • 162% increase in customer interactions through chat
  • 2,111 conversations handled each week
  • 70% of conversations successfully handled without any human support
Don't hesitate! If you have a similar business need or want similar success, sign up for a 30-minute consultation and learn how AI can upgrade your business today.
Top AI Research This Week
#1  Training EfficientNets at Supercomputer Scale: 83% ImageNet Top-1 Accuracy in One Hour 
As ML models have gotten larger, so has the need for increased computational power.
Large clusters of specialized hardware accelerators such as GPUs and TPUs can now provide computation on the order of petaFLOPS and have allowed researchers to accelerate training time dramatically. For example, the commonly used ResNet-50 image classification model can be trained on ImageNet in 67 seconds on 2048 TPU cores, a substantial improvement over typical training times on the order of hours. To accelerate the training of ML models with petascale computing, large-scale learning techniques and specialized systems optimizations are necessary.
In this paper, researchers from Google Research, the University of California, Berkeley, and the National University of Singapore explore techniques to scale up the training of EfficientNets on TPU-v3 Pods with 2048 cores, motivated by the speedups that can be achieved when training at such scales.
EfficientNets are a family of state-of-the-art image classification models based on efficiently scaled convolutional neural networks. Presently, EfficientNets can take on the order of days to train; for example, training an EfficientNet-B0 model takes 23 hours on a Cloud TPU v2-8 node.
The work discusses the optimizations required to scale training to a batch size of 65,536 on 1024 TPU-v3 cores, such as selecting large-batch optimizers and learning rate schedules and utilizing distributed evaluation and batch normalization techniques. Additionally, the research presents timing and performance benchmarks for EfficientNet models trained on the ImageNet dataset to analyze the behavior of EfficientNets at scale. With these optimizations, the researchers were able to train EfficientNet on ImageNet to an accuracy of 83% in 1 hour and 4 minutes. Read more: Training EfficientNets at Supercomputer Scale: 83% ImageNet Top-1 Accuracy in One Hour.
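To give a concrete feel for the "large batch" side of this, here is a minimal, purely illustrative Python sketch of the linear learning-rate scaling rule combined with warmup and exponential decay, a typical ingredient of large-batch training schedules. All constants are assumptions for illustration, not values from the paper.

```python
# Illustrative sketch of a large-batch learning-rate schedule:
# linear scaling with the global batch size, a warmup phase, then exponential decay.
# The constants below are assumed for illustration, not taken from the paper.

def learning_rate(step, batch_size, base_lr=0.016, base_batch=256,
                  warmup_steps=1000, decay_rate=0.97, decay_steps=1000):
    """Return the learning rate for a given training step."""
    # Linear scaling rule: grow the peak LR proportionally with the global batch size.
    peak_lr = base_lr * batch_size / base_batch

    if step < warmup_steps:
        # Linear warmup from 0 up to the peak learning rate.
        return peak_lr * step / warmup_steps

    # Exponential decay after warmup.
    return peak_lr * decay_rate ** ((step - warmup_steps) / decay_steps)


if __name__ == "__main__":
    for s in (0, 500, 1000, 5000, 20000):
        print(s, round(learning_rate(s, batch_size=65536), 4))
```

The warmup phase matters because the scaled peak learning rate for a 65,536-sample batch is far larger than a small-batch default, and jumping to it immediately tends to destabilize early training.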
#2 A New Image Generation Method for Interactive Portrait Hair Manipulation
Despite the recent success of face image generation with GANs, conditional hair editing remains challenging due to the underexplored complexity of hair geometry and appearance.
This recently released paper introduces MichiGAN (Multi-Input-Conditioned Hair Image GAN), a new conditional image generation method for interactive portrait hair manipulation. 
Researchers disentangle hair into four orthogonal attributes to give users control over every significant visual factor of hair: shape, structure, appearance, and background. For each of these attributes, they design a corresponding condition module to represent, process, and convert user inputs, and to modulate the image generation pipeline in ways that respect the nature of each visual attribute. All of these condition modules are integrated with the backbone generator to form the final end-to-end network, allowing fully conditioned hair generation from multiple user inputs.
Furthermore, they build an interactive portrait hair editing system that enables straightforward manipulation of hair by projecting intuitive, high-level user inputs such as painted masks, guiding strokes, or reference photos to well-defined condition representations. Through extensive experiments and evaluations, the proposed method demonstrates superior result quality and user controllability. The code is available on GitHub.
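As a rough illustration of the multi-input conditioning idea (a toy sketch, not the authors' code), the PyTorch snippet below uses one condition module per disentangled attribute and fuses their features in a shared backbone generator. Module sizes and input formats are assumptions.

```python
# Toy sketch of multi-input-conditioned generation: one condition module per
# attribute (shape, structure, appearance, background), fused by a backbone
# generator. Channel counts and input encodings are illustrative assumptions.

import torch
import torch.nn as nn

class ConditionModule(nn.Module):
    """Encodes one user-controllable attribute into a feature map."""
    def __init__(self, in_channels, out_channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class BackboneGenerator(nn.Module):
    """Fuses the condition features and decodes an RGB image."""
    def __init__(self, cond_channels):
        super().__init__()
        self.decode = nn.Sequential(
            nn.Conv2d(cond_channels, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, fused):
        return self.decode(fused)

# One condition module per disentangled attribute (input channels are assumed).
modules = nn.ModuleDict({
    "shape": ConditionModule(1),       # binary hair mask
    "structure": ConditionModule(2),   # orientation / stroke map
    "appearance": ConditionModule(3),  # reference photo
    "background": ConditionModule(3),  # background image
})
generator = BackboneGenerator(cond_channels=64 * len(modules))

inputs = {
    "shape": torch.rand(1, 1, 256, 256),
    "structure": torch.rand(1, 2, 256, 256),
    "appearance": torch.rand(1, 3, 256, 256),
    "background": torch.rand(1, 3, 256, 256),
}
features = [modules[k](v) for k, v in inputs.items()]
image = generator(torch.cat(features, dim=1))  # (1, 3, 256, 256)
```

The point of the per-attribute modules is that a user can swap out a single input, say the painted shape mask, while the other conditions stay fixed.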
#3  Google AI: Announcing the Objectron Dataset
The state of the art in machine learning (ML) has achieved exceptional accuracy on many computer vision tasks solely by training models on photos. Building upon these successes and advancing 3D object understanding has great potential to power a wider range of applications, including robotics, augmented reality, autonomy, and image retrieval.
For example, earlier this year, Google AI released MediaPipe Objectron, a set of real-time 3D object detection models designed for mobile devices, which were trained on a fully annotated, real-world 3D dataset that can predict objects’ 3D bounding boxes. 
Yet, understanding objects in 3D remains a challenging task due to the lack of large real-world datasets compared to 2D tasks such as ImageNet, COCO, and Open Images. 
To empower the research community for continued advancement in 3D object understanding, there is a strong need for the release of object-centric video datasets, which capture more of the 3D structure of an object while matching the data format used for many vision tasks to aid in the training and benchmarking of ML models.
Google is now excited to release the Objectron dataset, a collection of short, object-centric video clips capturing a broader set of everyday objects from different angles. Each video clip is accompanied by AR session metadata that includes camera poses and sparse point-clouds. The data also contains manually annotated 3D bounding boxes for each object, which describe the object’s position, orientation, and dimensions. The dataset consists of 15K annotated video clips supplemented with over 4M annotated images collected from a geo-diverse sample.
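To make that annotation format concrete, here is a small NumPy sketch showing how a 3D box described by a center position, orientation, and dimensions can be expanded into its eight corner points. The corner ordering and the exact field names in the released dataset may differ; this is an assumption for illustration.

```python
# Illustrative sketch: expand a 3D bounding box given as (center, rotation, dimensions)
# into its 8 corner points. Corner ordering is arbitrary here and may not match
# the Objectron dataset's convention.

import numpy as np

def box_corners(center, rotation, dimensions):
    """center: (3,), rotation: (3, 3) rotation matrix, dimensions: (3,) box size."""
    # Unit cube corners centered at the origin: every combination of +/- 0.5.
    unit = np.array([[x, y, z]
                     for x in (-0.5, 0.5)
                     for y in (-0.5, 0.5)
                     for z in (-0.5, 0.5)])                 # shape (8, 3)
    # Scale to the box dimensions, rotate into world orientation, translate to the center.
    scaled = unit * np.asarray(dimensions)
    return scaled @ np.asarray(rotation).T + np.asarray(center)

# Example: a 30 x 20 x 10 cm box, 1 m in front of the camera, rotated 45 degrees about the vertical axis.
theta = np.pi / 4
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
corners = box_corners(center=[0.0, 0.0, 1.0], rotation=R, dimensions=[0.3, 0.2, 0.1])
print(corners.shape)  # (8, 3)
```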
To encourage researchers and developers to experiment and prototype with this pipeline, Google AI has released its on-device ML pipeline in MediaPipe, including an end-to-end demo mobile application and trained models. The team hopes that sharing this solution with the broader research and development community will stimulate new use cases, new applications, and new research efforts. Read more on the Google AI blog.
#4  3D Generative Model for Robot Gripper Form Design
This paper proposes Fit2Form, a new 3D generative design framework that leverages data-driven algorithms to automate the robot hardware design process. The goal of this work is to use machine learning to automate the design of task-specific gripper fingers.
The design objectives are achieved by training a Fitness network to predict their values for pairs of gripper fingers and their corresponding grasp objects. This Fitness network then provides supervision to a 3D Generative network that produces a pair of 3D finger geometries for the target grasp object. Experiments demonstrate that the proposed framework generates parallel-jaw gripper finger shapes that achieve more stable and robust grasps than other general-purpose and task-specific gripper design algorithms.
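As a rough sketch of that two-network setup (assumptions throughout, not the Fit2Form code), the snippet below wires a fitness network that scores finger-and-object voxel grids to a generator that proposes finger shapes, and updates only the generator so the predicted objectives increase.

```python
# Highly simplified sketch of fitness-supervised 3D generation.
# In the actual Fit2Form setup the fitness network is trained on simulated grasp
# outcomes first and then held fixed; here it is randomly initialized for illustration.

import torch
import torch.nn as nn

class FitnessNet(nn.Module):
    """Scores a (finger pair + object) voxel grid with two grasp objectives in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, 2), nn.Sigmoid(),  # e.g. grasp success and stability scores
        )

    def forward(self, fingers_and_object):
        return self.encoder(fingers_and_object)

class GeneratorNet(nn.Module):
    """Maps an object voxel grid to two finger occupancy grids."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 2, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, obj):
        return self.net(obj)

fitness, generator = FitnessNet(), GeneratorNet()
opt = torch.optim.Adam(generator.parameters(), lr=1e-4)  # only the generator is trained

obj = torch.rand(4, 1, 32, 32, 32)              # batch of object voxel grids
fingers = generator(obj)                        # (4, 2, 32, 32, 32) finger geometries
scores = fitness(torch.cat([fingers, obj], 1))  # (4, 2) predicted objectives
loss = (1.0 - scores).mean()                    # push predicted objectives toward 1
opt.zero_grad()
loss.backward()
opt.step()
```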
However, since the proposed algorithm focuses on geometry optimization, it does not optimize for gripper strength, kinematic structure, or grasping policy. The algorithm also does not explicitly reason about objects' physical material properties, such as friction and deformation.
The researchers would also like to study how to apply this generative design approach to design tasks beyond robot grippers, such as furniture or tool design, where an object's functionality is strongly influenced by its 3D geometry. Watch the video here. Read more: 3D Generative Model for Robot Gripper Form Design with Fit2Form.
#5 Graph Kernels: State-of-the-Art and Future Challenges
Graph-structured data are an integral part of many application domains, including chemoinformatics, computational biology, neuroimaging, and social network analysis. 
Over the last two decades, numerous graph kernels, i.e. kernel functions between graphs, have been proposed to solve the problem of assessing the similarity between graphs, making it possible to perform predictions in both classification and regression settings. This paper provides a review of existing graph kernels, their applications, software and data resources, and an empirical comparison of state-of-the-art graph kernels. Read the full paper: Graph Kernels: State-of-the-Art and Future Challenges.
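For readers new to the topic, the sketch below (illustrative, not taken from the paper) shows the simplest flavor of graph kernel: compare two labeled graphs via histograms of their node labels, optionally after one Weisfeiler-Lehman relabeling step that folds in neighborhood information.

```python
# Illustrative sketch of a simple graph kernel: a node-label histogram kernel,
# with an optional single Weisfeiler-Lehman refinement step.

from collections import Counter

def wl_refine(adjacency, labels):
    """One Weisfeiler-Lehman step: relabel each node by (own label, sorted neighbor labels)."""
    return {v: (labels[v], tuple(sorted(labels[u] for u in adjacency[v])))
            for v in adjacency}

def histogram_kernel(labels_g1, labels_g2):
    """Kernel value = dot product of the two graphs' node-label histograms."""
    h1, h2 = Counter(labels_g1.values()), Counter(labels_g2.values())
    return sum(h1[l] * h2[l] for l in h1.keys() & h2.keys())

# Two toy labeled graphs: adjacency lists plus node labels (e.g. atom types).
adj1 = {0: [1], 1: [0, 2], 2: [1]}
lab1 = {0: "C", 1: "C", 2: "O"}
adj2 = {0: [1], 1: [0]}
lab2 = {0: "C", 1: "O"}

# Kernel on raw labels, then after one WL refinement (a 1-iteration WL subtree kernel).
k0 = histogram_kernel(lab1, lab2)
k1 = histogram_kernel(wl_refine(adj1, lab1), wl_refine(adj2, lab2))
print(k0, k1, k0 + k1)  # 3 1 4
```

Practical graph kernels (shortest-path, graphlet, multi-iteration WL, and many more surveyed in the paper) are more elaborate, but they share this basic recipe: map each graph to a feature vector of substructure counts and take an inner product.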
Other Great AI Papers
A personalized open-domain conversational bot. Meet Audrey
Self-supervised Learning of LiDAR Odometry for Robotic Applications. Click to read 
Modeling Trust in Human-Robot Interaction: A Survey
Reinforcement Learning with Videos: Combining Offline Observations with Interaction
Resources
This is why we need to talk about responsible AI. Read more 
AI job trends important to watch in 2021. Read here 
The most popular ML and Deep Learning Courses. Read more 
An online gesture-controlled car game built in Python.
Job Opportunity
We are immediately hiring a strong Data Scientist / Machine Learning Engineer to join our growing team. Apply Now
Top AI News
How Amazon retail systems run Machine Learning predictions with Apache Spark. Read More 
Google launches new Artificial Intelligence tools for healthcare. Click to Read
AI Scholar Weekly
If this newsletter lights you up, create a ripple effect by sharing it with someone else, and they will also be lit up!
I value your comments and shares and would love to connect on Twitter, LinkedIn, and Facebook. And if you have suggestions or other thoughts, I would love to hear from you: email me at chris@educateai.org
Educate AI

AI Scholar Weekly brings you everything new and exciting in the world of Artificial Intelligence, Machine Learning, and Deep Learning every week for free.
