FOSS Machine Learning News week 11-2020

Welcome to our biweekly selection of Free and Open machine learning news, created using our own opinionated selection and summary algorithm. FOSS machine learning is crucial for everyone. Machine learning is a complex technology, so keep it simple.

1 Towards Interactive Weak Supervision with FlyingSquid

FlyingSquid is a new framework for automatically building models from multiple noisy label sources. Users write functions that generate noisy labels for data, and FlyingSquid uses the agreements and disagreements between them to learn a label model of how accurate the labeling functions are. The label model can be used directly for downstream applications, or it can be used to train a powerful end model.
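
For readers who want to try it, here is a minimal sketch of that workflow, using the LabelModel API from the FlyingSquid repository. The simulated labeling functions and data are illustrative assumptions, not part of the original post.

```python
# Minimal weak-supervision sketch (assumed setup, not from the original post):
# simulate three noisy labeling functions over binary labels in {-1, +1},
# with 0 meaning "abstain", then let FlyingSquid estimate their accuracies
# from agreements/disagreements alone and produce training labels.
import numpy as np
from flyingsquid.label_model import LabelModel

rng = np.random.default_rng(0)
n = 1000
y = rng.choice([-1, 1], size=n)  # hidden ground truth, used only to simulate LFs

def noisy_lf(truth, accuracy, abstain_rate):
    """Simulate a labeling function that sometimes errs and sometimes abstains."""
    votes = np.where(rng.random(n) < accuracy, truth, -truth)
    votes[rng.random(n) < abstain_rate] = 0
    return votes

L = np.column_stack([noisy_lf(y, 0.85, 0.2),
                     noisy_lf(y, 0.75, 0.3),
                     noisy_lf(y, 0.65, 0.1)])

label_model = LabelModel(L.shape[1])  # one parameter set per labeling function
label_model.fit(L)                    # no ground-truth labels needed
preds = np.asarray(label_model.predict(L)).flatten()
print("agreement with hidden truth:", (preds == y).mean())
```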

(FlyingSquid)

2 Toward Human-Centered Design for ML Frameworks

What do software developers struggle with when trying modern ML frameworks, and what do they want these frameworks to provide? Ironically, despite the abundance of non-experts tinkering with ML frameworks nowadays, many felt that ML frameworks were intended for specialists with advanced training in linear algebra and calculus, and thus not meant for general software developers or product managers. Developers also wished frameworks could provide ML best practices, i.e., practical tips and tricks that they could use when designing or debugging models. To help narrow this broad space of decision possibilities, ML frameworks could embed tips on best practices directly into the programming workflow as just-in-time hints; these tips could serve as an intermediate scaffolding layer that translates the math theory underlying ML into developer-friendly terms. Finally, even though ML frameworks are not traditional learning platforms, software developers are indeed treating them as lightweight vehicles for learning-by-doing, so the provision of pre-made ML models should also be coupled with explicit support for modification.

(Google AI Blog)

3 Announcing TensorFlow Quantum: An Open Source Library for Quantum Machine Learning

Quantum data, which can be generated or simulated on quantum processors, sensors, and networks, includes the simulation of chemicals and quantum matter, quantum control, quantum communication networks, quantum metrology, and much more. However, to date there has been a lack of research tools to discover useful quantum ML models that can process quantum data and execute on quantum computers available today. Because near-term quantum processors are still fairly small and noisy, quantum models cannot use quantum processors alone: NISQ processors will need to work in concert with classical processors to become effective. In a typical workflow, each quantum data tensor is specified as a quantum circuit written in Cirq that generates quantum data on the fly, and the tensor is executed by TensorFlow on the quantum computer to generate a quantum dataset. The researcher can then prototype a quantum neural network using Cirq and later embed it inside a TensorFlow compute graph.
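
Here is a minimal sketch of that workflow, following the patterns in the TensorFlow Quantum tutorials: quantum data encoded as Cirq circuits, and a small parameterized circuit evaluated inside a Keras model. The toy dataset and circuit structure are illustrative assumptions, not taken from the announcement.

```python
# Assumed toy example following the TensorFlow Quantum tutorials, not the
# announcement itself: quantum data as Cirq circuits, a one-parameter quantum
# model evaluated by a PQC layer inside a Keras graph.
import cirq
import numpy as np
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

qubit = cirq.GridQubit(0, 0)

# 1. Quantum dataset: each example is a circuit that generates quantum data
#    on the fly (here, a simple rotation whose angle encodes the example).
angles = np.linspace(0.0, np.pi, 20)
data_circuits = [cirq.Circuit(cirq.rx(a)(qubit)) for a in angles]
labels = np.array([-1.0 if a < np.pi / 2 else 1.0 for a in angles])
quantum_data = tfq.convert_to_tensor(data_circuits)

# 2. Quantum neural network: a parameterized Cirq circuit to be trained.
theta = sympy.Symbol("theta")
model_circuit = cirq.Circuit(cirq.ry(theta)(qubit))

# 3. Embed the circuit in a TensorFlow compute graph and train classically.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(), dtype=tf.string),  # circuits in, as strings
    tfq.layers.PQC(model_circuit, cirq.Z(qubit)),       # expectation of Z out
])
model.compile(optimizer=tf.keras.optimizers.Adam(0.1), loss="mse")
model.fit(quantum_data, labels, epochs=20, verbose=0)
print(model.predict(quantum_data).ravel())
```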

(Google AI Blog)

4 HBM2E Memory: A Perfect Fit For AI/ML Training

The growth of AI/ML training capabilities requires sustained, across-the-board improvements in hardware and software to stay on the current pace, and memory bandwidth is one such critical area of focus enabling the continued growth of AI. Introduced in 2013, High Bandwidth Memory (HBM) is a high-performance 3D-stacked SDRAM architecture. HBM2E provides excellent bandwidth and capacity: 410 GB/s of memory bandwidth with 24 GB of capacity for a single 12-high stack. Designers can both realize the benefits of HBM2E memory and mitigate its implementation challenges through their choice of IP supplier.
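
As a quick sanity check on those figures, assuming the standard 1024-bit HBM interface and a 3.2 Gb/s per-pin data rate (common HBM2E numbers, not stated in the article), the quoted bandwidth and per-die capacity fall out directly:

```python
# Back-of-the-envelope check (assumed per-pin rate and interface width, both
# typical HBM2E values; only the 410 GB/s and 24 GB figures come from the article).
interface_width_bits = 1024     # standard HBM stack interface
per_pin_gbps = 3.2              # assumed HBM2E data rate per pin
bandwidth_gb_per_s = interface_width_bits * per_pin_gbps / 8    # ~409.6 GB/s

stack_capacity_gb = 24
dram_dies_per_stack = 12
capacity_per_die_gb = stack_capacity_gb / dram_dies_per_stack   # 2 GB (16 Gb) per die

print(f"{bandwidth_gb_per_s:.0f} GB/s per stack, "
      f"{capacity_per_die_gb:.0f} GB per DRAM die")
```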

(Semiconductor Engineering)

5 Amazon Pinpoint added template personalization using Machine Learning

Today, Amazon Pinpoint launched a new feature that helps customers to personalize their email, SMS and push messaging templates using dynamic message variables. Customers can add dynamic message variables as placeholders within their templates and then populate them with content specific to each user. Content can come from either user attributes stored in Pinpoint or a machine learning model created with Amazon Personalize. Customers wanting to deliver relevant product recommendations or targeted marketing promotions can now use machine learning to select the right content for each of their users.
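
A minimal sketch of what using this feature could look like with boto3; the template name, attribute names, and copy are illustrative assumptions, and the placeholder syntax follows Pinpoint's documented message-variable conventions.

```python
# Hypothetical example of the new template personalization (names and copy
# are placeholders); the {{User.UserAttributes.*}} syntax is Pinpoint's
# message-variable convention for per-user substitution.
import boto3

pinpoint = boto3.client("pinpoint", region_name="us-east-1")

pinpoint.create_email_template(
    TemplateName="weekly-recommendations",        # hypothetical template name
    EmailTemplateRequest={
        "Subject": "Hi {{User.UserAttributes.FirstName}}, picks for you",
        "TextPart": (
            "Hello {{User.UserAttributes.FirstName}},\n"
            "We thought you might like something from the "
            "{{User.UserAttributes.FavoriteCategory}} category."
        ),
        # The same placeholders can instead be filled by a machine learning
        # model (an Amazon Personalize recommender) configured in Pinpoint.
    },
)
```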

(Amazon Web Services)

6 Computer-aided diagnosis of external and middle ear conditions: A machine learning approach

In particular, the lack of specialists in otolaryngology in third-world countries forces patients to seek medical attention from general practitioners, who might not have enough training and experience to make correct diagnoses in this field. To conduct the research, our database of 180 patients included 720 images as training and validation sets and 160 images as a testing set. We considered three machine learning algorithms to develop the ear condition predictor model: support vector machine (SVM), k-nearest neighbor (k-NN), and decision trees. Finally, we performed a classification stage, i.e., diagnosis, using the testing data, where the SVM model achieved an average classification accuracy of 93.9%, an average sensitivity of 87.8%, an average specificity of 95.9%, and an average positive predictive value of 87.7%. The results show that this system might be used by general practitioners as a reference to make better decisions in the diagnosis of ear pathologies.
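
A hedged sketch of this kind of comparison (SVM, k-NN, and decision trees evaluated with accuracy, sensitivity, and specificity), using synthetic feature vectors in place of the paper's otoscopic image features; it mirrors the 720/160 train-test split but is not the authors' pipeline.

```python
# Synthetic stand-in for the paper's experiment (not the authors' code or
# features): compare SVM, k-NN, and decision trees and report accuracy,
# sensitivity, and specificity on a 720/160 train-test split.
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=880, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=160, random_state=0)   # 720 train/validation, 160 test

models = [("SVM", SVC(kernel="rbf")),
          ("k-NN", KNeighborsClassifier(n_neighbors=5)),
          ("Decision tree", DecisionTreeClassifier(random_state=0))]

for name, clf in models:
    clf.fit(X_train, y_train)
    tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    print(f"{name}: acc={accuracy:.3f} sens={sensitivity:.3f} spec={specificity:.3f}")
```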

(PLOS ONE)

7 Demystifying the world of deep networks

As co-author and MIT postdoc Andrzej Banburski explains, “Understanding convergence in deep networks shows that there are clear directions for improving our algorithms. This work suggests ways to improve deep networks, making them more accurate and faster to train.”

In modern deep learning the practice is to have orders of magnitude more parameters than data, and classical learning theory would call for an explicit constraint on model complexity to keep such models from overfitting. The surprising fact is that no such explicit constraint seems to be needed in training deep networks: despite the overparameterization, deep networks show good predictive performance, and in fact do better the more parameters they have.

(MIT Research CS)

8 Summarizing and Searching Video with Machine Learning

The U.S. relies on surveillance video to determine when activities of interest occur in a location that is under surveillance. Yet, because automated tools are not available to help analysts monitor real-time video or analyze archived video, analysts must dedicate full attention to video data streams to avoid missing important information about ongoing activities and patterns of life. Moreover, video streams create mountains of data that currently require time- and resource-intensive human analysis before they can be used to advance the objectives of organizational missions. In this blog post, I describe our efforts to improve the training of machine-learning algorithms for monitoring real-time video and analyzing archived video. These algorithms are intended to detect objects, track those objects more effectively, and recognize patterns of objects and object interactions.
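
As a rough illustration of the object-detection building block mentioned above, here is a minimal frame-sampling sketch using an off-the-shelf torchvision detector; it is not the SEI's model, and the video path is a placeholder.

```python
# Generic frame-level detection sketch (not the SEI's models); the video
# path is a placeholder and the detector is an off-the-shelf torchvision model.
import cv2
import torch
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
detector.eval()

cap = cv2.VideoCapture("surveillance_clip.mp4")   # placeholder path
frame_idx = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 30 == 0:                       # sample ~1 frame/s at 30 fps
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
        with torch.no_grad():
            det = detector([tensor])[0]
        keep = det["scores"] > 0.8                # keep confident detections only
        print(frame_idx, det["labels"][keep].tolist())
    frame_idx += 1
cap.release()
```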

(Software Engineering Institute)

9 How Much Power Will AI Chips Use?

AI and machine learning have voracious appetites when it comes to power. Ironically, it may be the software that has the most impact on power, and it is the software that has the fewest tools available for predicting system power consumption.

But as with cars, mileage varies greatly depending on your driving habits. The whole idea behind AI is adaptation within an acceptable range of behaviors, but there isn’t much data to support this.

(Semiconductor Engineering)

The FOSS Machine Learning News Blog is a brief overview of open machine learning news from all over the world. Free and Open machine learning means that everyone must be able to develop, test, play with, and deploy machine learning solutions. Read and share the FOSS ML Guide! And remember: you are invited to join the Free and Open Machine Learning open collaboration project.