
AI Scholar Weekly - Issue #48

By Educate AI  • Issue #48
A Secure TensorFlow Framework; Trading Robots; The Top 3 Effective Feature Selection Strategies; State of the Art Machine Learning Applications for COVID-19; Top 6 Cheat Sheets Novice Machine Learning Engineers Need (and more)

Top AI Research This Week!
#1 Simple yet Effective Way for Improving the Performance of GAN
Generative adversarial networks (GANs) built on deep convolutional neural networks (CNNs) have shown considerable success in capturing complex, high-dimensional image data, and have been used in numerous applications including image-to-image translation, image in-painting, and text-to-image translation.
However, despite these advances, GAN training is known to be unstable and sensitive to the choice of hyper-parameters.
To address this problem, researchers have proposed a simple yet effective method for improving GAN performance, built around a new discriminator component called the CR module.
By using the non-overlapped features obtained via the proposed CR module, the discriminator guides the generator more effectively during training, enhancing the generator's ability. A key advantage of the CR module is that it can be readily integrated into existing discriminator architectures. Moreover, experimental results show that, without adding training overhead, a discriminator equipped with the CR module significantly improves the performance of the baseline models. The generalization ability of the method is further demonstrated by applying the CR module to high-resolution images, and the authors expect it to be applicable to a wide range of GAN-based applications. Read more: Simple yet Effective Way for Improving the Performance of GAN
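The paper's CR module itself is not reproduced here, but the adversarial dynamic it plugs into — a discriminator whose feedback guides the generator — can be sketched on a toy 1-D problem. Everything below (the linear generator, the logistic discriminator, the target distribution, learning rates) is an illustrative assumption, not the paper's setup:

```python
import numpy as np

# Toy GAN: a linear generator x = w*z + b learns to match real data
# drawn from N(3, 1); the discriminator is a logistic classifier
# D(x) = sigmoid(u*x + c). Both are updated by simultaneous SGD.
rng = np.random.default_rng(0)
w, b = 1.0, 0.0          # generator parameters
u, c = 0.0, 0.0          # discriminator parameters
lr, batch = 0.05, 64

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

for step in range(1000):
    z = rng.standard_normal(batch)
    x_fake = w * z + b
    x_real = rng.normal(3.0, 1.0, batch)

    # Discriminator step: push D(real) -> 1 and D(fake) -> 0.
    d_real = sigmoid(u * x_real + c)
    d_fake = sigmoid(u * x_fake + c)
    grad_u = np.mean((d_real - 1) * x_real) + np.mean(d_fake * x_fake)
    grad_c = np.mean(d_real - 1) + np.mean(d_fake)
    u -= lr * grad_u
    c -= lr * grad_c

    # Generator step (non-saturating loss): follow the discriminator's
    # gradient so that fakes score higher under D.
    d_fake = sigmoid(u * (w * z + b) + c)
    grad_w = np.mean((d_fake - 1) * u * z)
    grad_b = np.mean((d_fake - 1) * u)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(b, 2))  # the generator's mean drifts toward the real mean of 3
```

Even on this tiny example the instability the article mentions is visible: the two players' updates push against each other, so the generator oscillates around the target rather than settling cleanly — which is exactly the kind of behavior discriminator-side fixes like the CR module aim to tame.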
#2 New Research Confirms that Using Transformers to Train Scientific Classifiers Generally Results in Greater Accuracies
Researchers recently carried out experiments confirming that using transformers to train scientific classifiers generally yields greater accuracy than the linear classifiers that were until now regarded as strong baselines. They also observed that fine-tuning pre-trained transformers on domain-specific corpora makes a measurable further contribution.
Encouraged by the success of recent developments in natural language processing and understanding, where pre-trained transformer language models dominate the state of the art, they concentrated on BERT and its flavors specialized for the scientific domain, BioBERT and SciBERT. To shed light on the matter, the researchers analyzed the self-attention mechanism inherent to the transformer architecture.
Their findings show that the last layer of BERT attends to words that are semantically relevant for the scientific fields associated with each publication. This observation suggests that self-attention actually performs some type of feature selection for the fine-tuned model. Read more: Is Self-Attention a Feature Selection Method?
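The feature-selection reading of self-attention can be illustrated with a from-scratch toy. Here random matrices stand in for learned BERT weights and the five-token "sentence" is invented, so the scores themselves are meaningless — the point is the mechanics of turning an attention matrix into a token-importance ranking:

```python
import numpy as np

# Toy single-head self-attention over a 5-token "sentence".
rng = np.random.default_rng(42)
tokens = ["the", "model", "predicts", "protein", "folding"]
d = 8
X = rng.standard_normal((len(tokens), d))   # stand-in token embeddings
Wq, Wk = rng.standard_normal((2, d, d))     # stand-in query/key projections

Q, K = X @ Wq, X @ Wk
scores = Q @ K.T / np.sqrt(d)               # scaled dot-product attention
A = np.exp(scores - scores.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)           # softmax over each row

# "Feature selection" view: the average attention each token RECEIVES
# (column mean of A) ranks tokens by how much the layer relies on them.
importance = A.mean(axis=0)
for tok, score in sorted(zip(tokens, importance), key=lambda t: -t[1]):
    print(f"{tok:10s} {score:.3f}")
```

In the paper's setting the same column-mean readout would be applied to the last layer of a fine-tuned BERT, where the highly-attended tokens turn out to be the ones semantically tied to each scientific field.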
#3 Introducing a Secure TensorFlow Framework
Spending on cloud infrastructure-as-a-service (IaaS) is projected to increase 26.9 percent to $65.3 billion, and it's easy to see why: data-driven applications are on the rise. However, cloud computing infrastructure faces significant security challenges, since it hosts applications whose algorithms run on large datasets that may contain private and sensitive information.
To this end, researchers have now released secureTF, a distributed secure machine learning framework based on TensorFlow for untrusted cloud infrastructure. It is a generic platform that supports unmodified TensorFlow applications while providing end-to-end security for the input data, the ML model, and the application code.
Moreover, it supports both training and classification phases while providing all three important design properties for the secure machine learning workflow: transparency, accuracy, and performance.
This recent paper reports on secureTF's design choices and its deployment in production use cases. It is a promising approach that incurs reasonable performance overheads, especially in the classification/inference phase, while providing strong security properties against a powerful adversary. Read more: secureTF: A Secure TensorFlow Framework
#4 Trading Robots: A Framework for Building Autonomous Traders
In this paper, the Autonomous Computational Systems Lab proposes mt5b3, a framework for building and testing trading robots. The framework supports developing autonomous traders in Python, with market access through the MetaTrader 5 platform. It is freely available and can be used for real or simulated operation in financial markets accessible through MetaTrader 5.
The researchers also discuss open problems in the area, such as accountability and trustworthiness in autonomous systems, and acknowledge that there is still a long road ahead toward autonomous traders that can consistently beat the best human experts. The proposed framework may nonetheless contribute to the development of new autonomous traders.
You can run historical simulations with your trading robots to evaluate their performance. Furthermore, the same trader can be used, with minor changes, in real operations in any market accessible through the MetaTrader 5 platform. Click to Read: A Framework for Building Autonomous Traders
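mt5b3's own API is not reproduced here, but the core of any historical simulation is the same backtest loop: replay past prices bar by bar, let the strategy decide, and mark the resulting equity to market. A minimal sketch with a moving-average-crossover strategy (the prices, window sizes, and all-in position sizing are illustrative assumptions, not mt5b3 defaults):

```python
# Minimal backtest loop for a simple moving-average crossover strategy.

def sma(prices, n, i):
    """Simple moving average of the n bars ending at index i."""
    return sum(prices[i - n + 1:i + 1]) / n

def backtest(prices, fast=3, slow=5, cash=1000.0):
    """Replay prices; go long on fast>slow cross, flat on fast<slow."""
    position = 0.0                       # units of the asset held
    for i in range(slow - 1, len(prices)):
        price = prices[i]
        if sma(prices, fast, i) > sma(prices, slow, i) and position == 0:
            position = cash / price      # buy with all available cash
            cash = 0.0
        elif sma(prices, fast, i) < sma(prices, slow, i) and position > 0:
            cash = position * price      # close the position
            position = 0.0
    return cash + position * prices[-1]  # final equity, marked to market

prices = [10, 11, 12, 11, 10, 9, 10, 12, 14, 13, 15, 16]
print(round(backtest(prices), 2))
```

A real framework layers broker connectivity, order types, and transaction costs on top of this loop, but evaluating a robot "with minor changes" in simulation versus live trading comes down to swapping the price replay for a live feed.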
#5 Open-Domain Conversational Search Assistant with Transformers
In this paper, researchers investigate how transformer architectures can address different tasks in open-domain conversational search, with particular emphasis on the search-answer generation task. Here's what they found:
  • Transformers-based Conversational Search. Transformers can solve a number of tasks in conversational search, leading to new state-of-the-art results and outperforming the best TREC-CAsT 2019 baseline by 3.9% in terms of nDCG@3.
  • Search-Answer Generation. Experiments showed that search systems can be improved with agents that abstract the information contained in multiple documents into a single, informative search answer. In terms of ROUGE-L, all answer-generation models performed better than the retrieval baseline.
  • Abstractive vs Extractive Answer Generation. The examined answer-generation transformers revealed different behaviors. BART was the most effective at generating answers rewritten with information from different passages, and this approach turned out to be better than extractive methods that copy and paste sentences from different passages.
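ROUGE-L, the metric cited above, scores a candidate answer by the longest common subsequence (LCS) of tokens it shares with a reference answer. A minimal sketch of the F1 variant, assuming plain whitespace tokenization:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of token lists a and b."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def rouge_l(candidate, reference):
    """ROUGE-L F1: harmonic mean of LCS-based precision and recall."""
    cand, ref = candidate.split(), reference.split()
    lcs = lcs_length(cand, ref)
    if lcs == 0:
        return 0.0
    p, r = lcs / len(cand), lcs / len(ref)
    return 2 * p * r / (p + r)

print(rouge_l("the cat is on the mat", "the cat sat on the mat"))  # 5/6
```

Because the LCS need not be contiguous, ROUGE-L rewards answers that preserve the reference's word order without requiring exact n-gram matches — which is why it suits generated (abstractive) answers better than strict n-gram overlap would.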
Other Great AI Research Papers
A classical detection-and-tracking approach to the tracking-any-object task: 1st Place Solution to ECCV-TAO-2020
Can trust in AI become warranted? This new study provides several working definitions for trust-related terms and answers the questions of what is necessary to allow trust to occur in AI, and what is necessary for the goal of trust to be achieved. Read more
Data privacy is one of the most prominent concerns in the digital era. After several data breaches and privacy scandals, users are now worried about sharing their data. In the last decade, Federated Learning has emerged as a new privacy-preserving distributed machine learning paradigm, with FedeRank available here
The COVID-19 pandemic has galvanized the machine learning community to create new solutions that can help in the fight against the virus. This research presents the latest advances in ML research applied to COVID-19, covering forecasting, medical diagnostics, drug development, and contact tracing. Machine Learning Applications for COVID-19: A State-of-the-Art Review
This new study presents a survey on the state-of-the-art Deep Learning solutions for Smart Grids and Demand Response systems. In particular, the researchers review four important themes: electric load forecasting, state estimation, energy theft detection, and energy sharing and trading. DL for Intelligent Demand Response and Smart Grids
AI Resources
Top 6 Cheat Sheets Novice Machine Learning Engineers Need. Click here
The Top 3 Effective Feature Selection Strategies. Click here
Top AI News
Artificial intelligence researchers rank the top A.I. labs worldwide. Read story
They look so friendly as they roll through your shopping mall. Cute Robot Cops
AI Scholar Weekly
Thanks for reading! Create a ripple effect by sharing it with someone else, and they will be lit up too! And if you have suggestions, comments, or other thoughts, we would love to hear from you: email me, tweet at @cdossman, like us on Facebook, or connect with me on LinkedIn
Educate AI

AI Scholar Weekly brings you everything new and exciting in the world of Artificial Intelligence, Machine Learning, and Deep Learning every week for free.

If you were forwarded this newsletter and you like it, you can subscribe here.