FOSS Machine Learning News week 17-2020

Welcome to our biweekly selection of Free and Open machine learning news, created using our own opinionated selection and summary algorithm. FOSS machine learning is crucial for everyone. Machine learning is a complex technology, so keep it simple.

1 Faster and Cheaper PyTorch with RaySGD

Distributing your deep learning model training has become a question of when you do it, not if. State-of-the-art ML models like BERT have hundreds of millions of parameters, and training these large networks takes days if not weeks on a single machine. Ray and RaySGD are great open building blocks for this. You can find more information on Ray in the Free and Open Machine Learning guide.

(RaySGD)
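As a taste of what this looks like in practice, the sketch below wraps a plain PyTorch model in RaySGD's TorchTrainer. It assumes the creator-function API documented around Ray 0.8; check the current Ray documentation, since the exact interface may have changed.

```python
# Minimal RaySGD sketch (assumes the Ray ~0.8 era ray.util.sgd API;
# verify against the Ray docs, as the interface has evolved since then).
import torch
import torch.nn as nn
import ray
from ray.util.sgd import TorchTrainer

def model_creator(config):
    # Any ordinary PyTorch model works here.
    return nn.Linear(10, 1)

def optimizer_creator(model, config):
    return torch.optim.SGD(model.parameters(), lr=config.get("lr", 0.01))

def data_creator(config):
    # Toy dataset: replace with your real DataLoader(s).
    x = torch.randn(1024, 10)
    y = torch.randn(1024, 1)
    dataset = torch.utils.data.TensorDataset(x, y)
    return torch.utils.data.DataLoader(dataset, batch_size=64)

ray.init()
trainer = TorchTrainer(
    model_creator=model_creator,
    data_creator=data_creator,
    optimizer_creator=optimizer_creator,
    loss_creator=nn.MSELoss,
    num_workers=2,          # scale out by raising this number
)
for _ in range(5):
    print(trainer.train())  # one pass over the data per call
trainer.shutdown()
ray.shutdown()
```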

2 Reducing the carbon footprint of artificial intelligence

Artificial intelligence has become a focus of certain ethical concerns, but it also has some major sustainability issues. MIT researchers have developed a new automated AI system for training and running certain neural networks. The once-for-all (OFA) network decouples model training and architecture search, and spreads the one-time training cost across many inference hardware platforms and resource constraints. This relies on a “progressive shrinking” algorithm that efficiently trains the OFA network to support all of the subnetworks simultaneously. Training the OFA network and searching it ends up being far more efficient than spending hours training each neural network per platform.

(MIT Research CS)
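To make the idea of one shared network serving many subnetworks concrete, here is a purely conceptual sketch (not the MIT OFA code): during training a random width is sampled at each step, so one set of shared weights learns to work at several sizes.

```python
# Conceptual sketch of training one shared network at many widths.
# This is NOT the MIT once-for-all implementation, just an illustration
# of sampling subnetworks that share weights.
import random
import torch
import torch.nn as nn

class SlimmableMLP(nn.Module):
    """One weight matrix, usable at several hidden widths."""
    def __init__(self, in_dim=10, max_hidden=64, out_dim=2):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, max_hidden)
        self.fc2 = nn.Linear(max_hidden, out_dim)

    def forward(self, x, width):
        h = torch.relu(self.fc1(x)[:, :width])   # keep only `width` units
        w = self.fc2.weight[:, :width]           # matching slice of fc2
        return nn.functional.linear(h, w, self.fc2.bias)

model = SlimmableMLP()
opt = torch.optim.SGD(model.parameters(), lr=0.05)
widths = [16, 32, 48, 64]                        # candidate subnetworks

for step in range(200):
    x = torch.randn(32, 10)
    y = torch.randint(0, 2, (32,))
    width = random.choice(widths)                # sample a subnetwork
    loss = nn.functional.cross_entropy(model(x, width), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
# At inference time, pick the width that fits the target device's budget.
```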

3 Chip Design with Deep Reinforcement Learning

Determining the layout of a chip block, a process called chip floorplanning, is one of the most complex and time-consuming stages of the chip design process. It involves placing the netlist onto a chip canvas (a 2D grid) such that power, performance, and area (PPA) are minimized, while adhering to constraints on density and routing congestion. A fast, high-quality, automatic chip placement method could greatly accelerate chip design and enable co-optimization with earlier stages of the chip design process. In “Chip Placement with Deep Reinforcement Learning”, we pose chip placement as a reinforcement learning (RL) problem, where we train an agent (i.e., an RL policy) to optimize the quality of chip placements. In particular, as we train over a greater number of chip blocks, our method becomes better at rapidly generating optimized placements for previously unseen chip blocks. Although we evaluate primarily on accelerator chips, our proposed method is broadly applicable to any chip placement problem.

(Google AI Blog)
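The toy sketch below shows the shape of “placement as RL” (it is not Google's implementation): the state is a partially filled grid, the action is the cell for the next block, and the reward is the negative wire length of the finished placement.

```python
# Toy sketch of "placement as RL": not Google's method, just the shape of
# the problem. State = partially filled grid, action = cell for the next
# block, reward = negative total wire length between connected blocks.
import random

GRID = 8                                   # 8x8 chip canvas
NETS = [(0, 1), (1, 2), (0, 3), (2, 3)]    # hypothetical netlist: block pairs
NUM_BLOCKS = 4

def wirelength(positions):
    """Sum of Manhattan distances over all nets (a crude PPA proxy)."""
    return sum(abs(positions[a][0] - positions[b][0]) +
               abs(positions[a][1] - positions[b][1]) for a, b in NETS)

def random_policy(free_cells, _state):
    """Placeholder for a learned RL policy: picks any free cell."""
    return random.choice(sorted(free_cells))

def episode(policy):
    free = {(r, c) for r in range(GRID) for c in range(GRID)}
    positions = {}
    for block in range(NUM_BLOCKS):        # place one block per step
        cell = policy(free, positions)
        positions[block] = cell
        free.remove(cell)
    return -wirelength(positions)          # reward at the end of the episode

# A learned policy would be trained (e.g. with policy gradients) to raise
# this average reward; here we just evaluate the random baseline.
print(sum(episode(random_policy) for _ in range(100)) / 100)
```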

4 Automatically Detecting Technical Debt Discussions with Machine Learning

We have recently started seeing developers explicitly use the phrase “technical debt” or similar terms such as “design debt” or “architectural smells.” Applying machine learning to locate technical debt (TD) issues can improve our understanding of TD and help develop practices to manage it. In this blog post, which is based on an SEI white paper, we describe the results of a study in which machine learning was used to quantify the prevalence of TD-related issues in issue trackers.

(Software Engineering Institute)
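For readers who want to try something similar on their own issue tracker, here is a minimal baseline (not the SEI study's classifier): a bag-of-words model that flags technical-debt discussions.

```python
# Illustrative baseline for flagging technical-debt discussions in issue
# trackers. This is not the SEI study's classifier, just a common
# TF-IDF + logistic regression setup for the same kind of task.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled issue comments (1 = technical-debt related).
texts = [
    "We took a shortcut here, needs a proper refactor later",
    "This design debt is slowing every new feature down",
    "Update the README with the new build instructions",
    "Bump dependency version to fix the CVE",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["we should pay down this architectural smell soon"]))
```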

5 Machine learning using intrinsic genomic signatures for rapid classification of novel pathogens: COVID-19 case study

This paper identifies an intrinsic COVID-19 virus genomic signature and uses it, together with a machine learning-based alignment-free approach, for an ultra-fast, scalable, and highly accurate classification of whole COVID-19 virus genomes. The proposed method combines supervised machine learning with digital signal processing (MLDSP) for genome analyses, augmented by a decision tree approach to the machine learning component, and a Spearman’s rank correlation coefficient analysis for result validation. These tools are used to analyze a large dataset of over 5000 unique viral genomic sequences, totalling 61.8 million bp, including the 29 COVID-19 virus sequences available on January 27, 2020. For novel viral and pathogen genome sequences, this alignment-free whole-genome machine-learning approach can provide a reliable real-time option for taxonomic classification.

(PLOS ONE)
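The sketch below shows the general alignment-free idea behind MLDSP-style pipelines, not the paper's exact code: turn a genome into a numeric signal, take a fixed-length magnitude spectrum, and classify the spectra.

```python
# Rough sketch of the alignment-free idea behind MLDSP-style pipelines:
# map a genome to a numeric signal, take a fixed-length magnitude spectrum,
# and classify the spectra. This is not the paper's exact implementation.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

NUMERIC = {"A": -1.5, "C": 0.5, "G": -0.5, "T": 1.5}   # one possible mapping

def spectrum(seq, n_points=1024):
    """Magnitude spectrum of the numeric genome signal, resampled to a fixed length."""
    signal = np.array([NUMERIC.get(b, 0.0) for b in seq.upper()])
    mag = np.abs(np.fft.fft(signal))
    # Resample so sequences of different lengths become comparable vectors.
    idx = np.linspace(0, len(mag) - 1, n_points).astype(int)
    return mag[idx]

# Hypothetical toy data: (sequence, label) pairs stand in for real genomes.
train = [("ATGCGTACGTTAGC" * 50, "virus_family_A"),
         ("GGCATTGCAATCCG" * 50, "virus_family_B")]
X = np.stack([spectrum(s) for s, _ in train])
y = [label for _, label in train]

clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(clf.predict([spectrum("ATGCGTACGTTAGC" * 50)]))
```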

6 The Cost of Training NLP Models: A Concise Overview

We review the cost of training large-scale language models, and the drivers of these costs. The intended audience includes engineers and scientists budgeting their model-training experiments, as well as non-practitioners trying to make sense of the economics of modern-day Natural Language Processing (NLP).

(arxiv.org)
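The kind of back-of-envelope arithmetic such an overview builds on is simple; the numbers below are hypothetical placeholders, not figures from the paper.

```python
# Back-of-envelope training-cost arithmetic. All numbers below are
# hypothetical placeholders, not figures from the paper: plug in your own
# GPU count, runtime, and cloud price.
gpus = 64                    # number of accelerators (assumption)
hours = 24 * 7               # wall-clock training time in hours (assumption)
price_per_gpu_hour = 3.0     # on-demand cloud price in USD (assumption)

compute_cost = gpus * hours * price_per_gpu_hour
print(f"~${compute_cost:,.0f} for a single training run")

# Hyperparameter searches and restarts multiply this figure, which is why
# the number of full training runs is usually the dominant cost driver.
runs = 10                    # assumed number of full runs in an experiment
print(f"~${compute_cost * runs:,.0f} for {runs} runs")
```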

7 New AI algorithm brings us closer than ever to controlling machines with our minds

Researchers from Carnegie Mellon and the University of Pittsburgh published research showing how they’d solved a frustrating problem for people who use a brain-computer interface (BCI) to control prosthetic devices with their thoughts.

(link)

8 Illustrated: Self-Attention

What do BERT, RoBERTa, ALBERT, SpanBERT, DistilBERT, SesameBERT, SemBERT, MobileBERT, TinyBERT and CamemBERT all have in common? Self-attention. This article gives a clear, illustrated explanation of self-attention, and you can play with it directly in this notebook (hosted on Google Colab).

(link)
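For reference, this is the standard single-head scaled dot-product self-attention that the article illustrates, in a few lines of NumPy.

```python
# Minimal single-head self-attention in NumPy: the standard
# softmax(Q K^T / sqrt(d_k)) V formulation.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_*: (d_model, d_k) projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])        # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True) # row-wise softmax
    return weights @ v                             # (seq_len, d_k)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                        # 4 tokens, d_model = 8
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)      # -> (4, 8)
```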

The FOSS Machine Learning News Blog is a brief overview of open machine learning news from all over the world. Free and Open machine learning means that everyone must be able to develop, test, play with, and deploy machine learning solutions. Read and share the FOSS ML Guide! And remember: you are invited to join the Free and Open Machine Learning open collaboration project.