FOSS Machine Learning News week 31-2020

Welcome to our biweekly selection of Free and Open machine learning news, created using our own opinionated selection and summary algorithm. FOSS machine learning is crucial for everyone. Machine learning is a complex technology, so keep it simple.

1 Detect fraud faster using machine learning with Amazon Fraud Detector

Amazon Fraud Detector is a fully managed service that makes it easy to identify potentially fraudulent online activities, such as the creation of fake accounts or online payment fraud. Amazon Fraud Detector uses machine learning (ML) and 20 years of fraud detection expertise from AWS and Amazon.com to automatically identify potentially fraudulent activity so you can catch more online fraud faster. Great tool but unfortunately not open.

(Amazon Web Services)

2 Are we in an AI overhang?

An overhang is when you have had the ability to build transformative AI for quite some time, but you haven’t because no-one has realised it is possible. While much hay has been made about how much more expensive GPT-3 was than a typical AI research project, in the wider context of megacorp investment its costs are insignificant. Transformative AI might enable $100tn market-cap companies, or nation-states could pick up the torch.

(LessWrong)

3 Shortcuts: How Neural Networks Love to Cheat

Will Artificial Intelligence soon replace radiologists? Neural networks have indeed been trained to diagnose breast cancer, with one little twist: instead of using state-of-the-art artificial deep neural networks, researchers trained “natural” neural networks – more precisely, a flock of four pigeons. In the end, neural networks perhaps aren’t that different from (lazy) humans after all. This neglect of understanding is part of the reason why shortcut learning has become such a widespread problem within deep learning.

(The Gradient)
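
To make “shortcut learning” concrete, here is a toy sketch (hypothetical data; scikit-learn assumed): a spurious feature that happens to correlate perfectly with the label during training dominates the classifier, which then fails once that correlation breaks at test time.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical training data: feature 0 is a weak genuine signal,
# feature 1 is a spurious shortcut (think: a watermark) that matches
# the label perfectly in the training set.
y_train = rng.integers(0, 2, n)
X_train = np.column_stack([
    y_train + rng.normal(0, 1.0, n),  # noisy genuine signal
    y_train.astype(float),            # perfect shortcut
])

# At test time the shortcut is decoupled from the label, so only the
# genuine signal remains informative.
y_test = rng.integers(0, 2, n)
X_test = np.column_stack([
    y_test + rng.normal(0, 1.0, n),
    rng.integers(0, 2, n).astype(float),
])

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))  # ~1.0, via the shortcut
print("test accuracy:", clf.score(X_test, y_test))     # drops towards chance
```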

4 Implementation of BERT

To build a BERT model we basically first need to build an encoder and then stack copies of it: the BERT-base model has 12 layers, BERT-large has 24. The architecture of BERT is taken from the Transformer architecture. A Transformer generally has a number of encoders followed by a number of decoders, but BERT only uses the encoder part of the Transformer in its architecture. With this article at OpenGenus, you should have a good understanding of implementing BERT.

(ML news Blog)
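
A minimal sketch of that stacking idea using PyTorch’s built-in encoder layers (dimensions follow the BERT-base configuration; real BERT adds positional and segment embeddings plus pre-training heads, all omitted here):

```python
import torch
import torch.nn as nn

class MiniBert(nn.Module):
    """Sketch of BERT's encoder-only architecture: an embedding layer
    followed by a stack of identical Transformer encoder layers."""
    def __init__(self, vocab_size=30522, hidden=768, heads=12, layers=12):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=heads, dim_feedforward=4 * hidden,
            batch_first=True)
        # BERT-base stacks 12 of these layers; BERT-large stacks 24.
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=layers)

    def forward(self, token_ids):
        return self.encoder(self.embed(token_ids))

model = MiniBert()
tokens = torch.randint(0, 30522, (2, 16))  # batch of 2 dummy sequences
print(model(tokens).shape)                 # torch.Size([2, 16, 768])
```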

5 GPT-3: an AI game-changer or an environmental disaster?

You may have noticed some fuss on social media about something called GPT-3. The GPT bit stands for the “generative pre-training” of a language model that acquires knowledge of the world by “reading” enormous quantities of written text. The “3” indicates that this is the third generation of the system.

(Link)
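
As a toy illustration of what “generative pre-training” means (a bigram table standing in for a 175-billion-parameter network): the model first absorbs statistics from raw text, then generates by repeatedly predicting a next word.

```python
import random
from collections import defaultdict

# Toy "pre-training": learn next-word statistics from raw text.
corpus = "the model reads text and the model learns to predict the next word".split()
bigrams = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a].append(b)

# Toy "generation": repeatedly sample a plausible next word,
# falling back to a random corpus word at dead ends.
word, out = "the", ["the"]
for _ in range(8):
    word = random.choice(bigrams.get(word, corpus))
    out.append(word)
print(" ".join(out))
```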

6 The ‘Android of Self-Driving Cars’ Built a 100,000X Cheaper Way to Train AI

How do you beat Tesla, Google, Uber and the entire multi-trillion dollar automotive industry with massive brands like Toyota, General Motors, and Volkswagen to a full self-driving car? Just maybe, by finding a way to train your AI systems that is 100,000 times cheaper. It’s called Deep Teaching.

(Link)

7 Ciphey – Automated decryption tool using AI and NLP

Ciphey is an automated decryption tool. Input encrypted text, get the decrypted text back. Ciphey uses a custom-built artificial intelligence module (AuSearch) with a Cipher Detection Interface to approximate what something is encrypted with, and then a custom-built, customisable natural language processing Language Checker Interface, which can detect when the given text becomes plaintext.

(Link)
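
Ciphey itself is a command-line tool; the sketch below only mimics its core loop in Python, with simplified stand-ins for the decoders and the English check (not Ciphey’s actual AuSearch or Language Checker modules): try candidate decodings and stop when the output looks like plaintext.

```python
import base64
import codecs

# Simplified stand-in decoders: each maps ciphertext to a candidate plaintext.
def try_base64(s):
    try:
        return base64.b64decode(s, validate=True).decode("utf-8")
    except Exception:
        return None

def try_rot13(s):
    return codecs.decode(s, "rot13")

COMMON = {"the", "and", "is", "hello", "world", "attack", "at", "dawn"}

def looks_like_english(s):
    """Crude plaintext check: enough words hit a small dictionary."""
    words = s.lower().split()
    return bool(words) and sum(w in COMMON for w in words) / len(words) > 0.5

def decrypt(ciphertext):
    for decoder in (try_base64, try_rot13):
        candidate = decoder(ciphertext)
        if candidate and looks_like_english(candidate):
            return candidate
    return None

print(decrypt("nggnpx ng qnja"))                        # ROT13 -> "attack at dawn"
print(decrypt(base64.b64encode(b"hello world").decode()))  # -> "hello world"
```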

8 Learning from Data to Speed-up Sorted Table Search Procedures: Methodology and Practical Guidelines

Sorted Table Search Procedures are the quintessential query-answering tool, with widespread usage that now also includes Web applications, e.g., search engines (Google Chrome) and ad-bidding systems (AppNexus). Speeding them up, at very little cost in space, is still quite a significant achievement.

(Link)
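
The core idea, in a generic sketch (not the paper’s exact method): fit a simple model that predicts an element’s position in the sorted table, then finish with a binary search restricted to a small window around the prediction.

```python
import bisect
import numpy as np

# Hypothetical sorted table of 100k keys.
keys = np.sort(np.random.default_rng(1).uniform(0, 1e6, 100_000))

# "Learn" the key -> position mapping: a linear fit approximates the
# table's cumulative distribution function.
positions = np.arange(len(keys))
slope, intercept = np.polyfit(keys, positions, 1)
# The model's worst-case prediction error bounds the final search window
# (+1 absorbs integer truncation of the guess).
err = int(np.ceil(np.max(np.abs(slope * keys + intercept - positions)))) + 1

def learned_search(q):
    guess = int(slope * q + intercept)
    lo = max(0, guess - err)
    hi = min(len(keys), guess + err + 1)
    i = bisect.bisect_left(keys, q, lo, hi)  # binary search, small window only
    return i if i < len(keys) and keys[i] == q else -1

print(learned_search(keys[12345]))  # -> 12345
```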

9 Visual Analysis of Discrimination in Machine Learning

The growing use of automated decision-making in critical applications, such as crime prediction and college admission, has raised questions about fairness in machine learning. How can we decide whether different treatments are reasonable or discriminatory? DiscriLens, a visual analysis tool, helps answer this question. A user study shows that users can interpret the visually encoded information in DiscriLens quickly and accurately. Use cases demonstrate that DiscriLens provides informative guidance in understanding and reducing algorithmic discrimination.

(Link)
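
DiscriLens itself is a visual analytics system; as a bare-bones numeric illustration of the underlying question, the sketch below computes a demographic-parity gap for a hypothetical set of admission decisions (a standard fairness metric, not DiscriLens’s method):

```python
import numpy as np

# Hypothetical admission decisions (1 = admit) for two groups of 50 applicants.
group = np.array([0] * 50 + [1] * 50)
decision = np.array([1] * 35 + [0] * 15 + [1] * 20 + [0] * 30)

rate_0 = decision[group == 0].mean()  # acceptance rate for group 0 -> 0.70
rate_1 = decision[group == 1].mean()  # acceptance rate for group 1 -> 0.40
print(f"acceptance rates: {rate_0:.0%} vs {rate_1:.0%}")
print(f"demographic parity gap: {abs(rate_0 - rate_1):.0%}")  # 30%
```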

10 Flower: A Friendly Federated Learning Research Framework

Federated Learning (FL) has emerged as a promising technique for edge devices to collaboratively learn a shared prediction model, while keeping their training data on the device, thereby decoupling the ability to do machine learning from the need to store the data in the cloud. However, FL is difficult to implement and deploy in practice, considering the heterogeneity in mobile devices, e.g., different programming languages, frameworks, and hardware accelerators. Although there are a few frameworks available to simulate FL algorithms (e.g., TensorFlow Federated), they do not support implementing FL workloads on mobile devices. Furthermore, these frameworks are designed to simulate FL in a server environment and hence do not allow experimentation in distributed mobile settings for a large number of clients. In this paper, we present Flower (https://flower.dev/), an FL framework which is agnostic towards heterogeneous client environments and scales to a large number of clients, including mobile and embedded devices.

(Link)
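
A minimal client sketch against Flower’s NumPyClient interface (exact method signatures vary between Flower versions, and the local “training” below is a placeholder, not a real workload):

```python
import flwr as fl
import numpy as np

class ToyClient(fl.client.NumPyClient):
    """Each device implements three methods; Flower's server
    orchestrates the federated averaging rounds."""
    def __init__(self):
        self.weights = np.zeros(10)

    def get_parameters(self):
        return [self.weights]

    def fit(self, parameters, config):
        self.weights = parameters[0]
        self.weights = self.weights + 0.1   # placeholder for local training
        return [self.weights], 100, {}      # updated weights, num examples, metrics

    def evaluate(self, parameters, config):
        loss = float(np.sum(parameters[0] ** 2))  # placeholder loss
        return loss, 100, {}

# Connect to a Flower server; the training data never leaves the device.
fl.client.start_numpy_client(server_address="localhost:8080", client=ToyClient())
```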

The FOSS Machine Learning News Blog is a brief overview of open machine learning news from all over the world. Free and Open machine learning means that everyone must be able to develop, test, play with and deploy machine learning solutions. Read and share the FOSS ML Guide! And remember: you are invited to join the Free and Open Machine Learning open collaboration project.