Welcome to our biweekly selection of Free and Open machine learning news, created using our own opinionated selection and summary algorithm. FOSS machine learning is crucial for everyone. Machine learning is a complex technology, so keep it simple.
1 How Close Are Computers to Automating Mathematical Reasoning?
Computerized theorem provers can be broken down into two categories. Interactive theorem provers, or ITPs, act as proof assistants that can verify the accuracy of an argument and check existing proofs for errors. But these two strategies, even when combined (as is the case with newer theorem provers), don’t add up to automated reasoning. In June, a group at Google Research led by Christian Szegedy posted recent results from efforts to harness the strengths of natural language processing to make computer proofs more human-seeming in structure and explanation. Some mathematicians see theorem provers as a potentially game-changing tool for training undergraduates in proof writing.
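To give a flavor of what an interactive theorem prover actually checks, here is a toy example in Lean 4 syntax (a minimal sketch; real formalizations run to thousands of lines and lean heavily on existing libraries):

```lean
-- A tiny Lean 4 proof: commutativity of addition on natural numbers.
-- The ITP mechanically verifies that the supplied term really has the
-- stated type, i.e. that the proof is correct; a typo or logical gap
-- would be rejected at check time.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Here the proof simply appeals to the library lemma `Nat.add_comm`; the value of the assistant is that every such appeal is machine-checked rather than taken on trust.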
2 Who thought it was a good idea to have facial recognition software?
It’s about time someone asked that question about facial recognition software. What have we gained as a nation if Microsoft, Google, and Amazon continue to use biased and inaccurate facial recognition software? IBM had initially gathered the photos from Flickr and painstakingly labeled them to enable facial recognition developers to improve the fairness and accuracy of their programs. For the commercial use of facial recognition technology, data subject consent seems to be the main mechanism adopted to manage the new technology. It is time to apply it to the specific case of facial recognition technology.
3 Matt Botvinick on the spontaneous emergence of learning algorithms
In this interview, he discusses results from a 2018 paper which describes conditions under which reinforcement learning algorithms will spontaneously give rise to separate full-fledged reinforcement learning algorithms that differ from the original. In short, they think that part of the dopamine system (DA) is a full-fledged reinforcement learning algorithm, which trains/gives rise to another full-fledged, free-standing reinforcement learning algorithm in the PFC, in basically the same way (and for the same reason) that the RL-trained RNNs spawned separate learning algorithms in their experiments. From Botvinick’s description, it sounds like he thinks [learning algorithms that find/instantiate other learning algorithms] is a strong attractor in the space of possible learning algorithms: “…it’s something that just happens.”
4 Fairness and machine learning
Great new (draft) textbook. This book gives a perspective on machine learning that treats fairness as a central concern rather than an afterthought. Machine learning has made rapid headway into socio-technical systems ranging from video surveillance to automated resume screening. Simultaneously, there has been heightened public concern about the impact of digital technology on society.
5 minGPT: a minimal PyTorch re-implementation of GPT
GPT is hot, and a trend for years to come. This FOSS project is an interesting way to try out and play around with GPT: it is a minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training.
6 Gradio: Generate an easy-to-use UI for your ML model
Quickly create customizable UI components around your TensorFlow or PyTorch models, or even arbitrary Python functions. Mix and match components to support any combination of inputs and outputs. Gradio makes it easy for you to “play around” with your model in your browser by dragging-and-dropping in your own images (or pasting your own text, recording your own voice, etc.) and seeing what the model outputs.
7 PHOTONAI — A Python API for Rapid Machine Learning Model Development
PHOTONAI is a high-level Python API designed to simplify and accelerate machine learning model development. It offers a unified framework to access existing machine learning implementations and build custom machine learning pipelines. Importantly, it is open to user-designed algorithms in every part of the model construction and evaluation process. In addition, it extends existing solutions with novel pipeline construction functionality, automates the repetitive training, hyperparameter optimization and evaluation workflow according to the user’s choices, and provides a convenient visualization tool for rapid model performance assessment. Source code is available on GitHub, while examples and documentation can be found at photon-ai.com.
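The repetitive train/tune/evaluate workflow that PHOTONAI automates can be sketched with plain scikit-learn (note: this is an illustration of the workflow, not PHOTONAI’s own API; see photon-ai.com for that):

```python
# Sketch of the pipeline-construction + hyperparameter-search workflow
# that PHOTONAI automates, written with plain scikit-learn for illustration.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy dataset standing in for real data.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# A pipeline chains preprocessing and the estimator into one object.
pipe = Pipeline([("scale", StandardScaler()), ("clf", SVC())])

# Cross-validated hyperparameter search over the pipeline.
search = GridSearchCV(pipe, {"clf__C": [0.1, 1, 10]}, cv=3)
search.fit(X, y)
```

PHOTONAI’s pitch is that this loop, plus result visualization, is handled for you, while still letting you drop user-designed algorithms into any pipeline slot.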
8 Bandit Data-driven Optimization: AI for Social Good and Beyond
The use of machine learning (ML) systems in real-world applications entails more than just a prediction algorithm. AI for social good applications, and many real-world ML tasks in general, feature an iterative process in which prediction, optimization, and data acquisition happen in a loop. We introduce bandit data-driven optimization, the first iterative prediction-prescription framework to formally analyze this practical routine. Bandit data-driven optimization combines the advantages of online bandit learning and offline predictive analytics in an integrated framework. It offers a flexible setup to reason about unmodeled policy objectives and unforeseen consequences. We propose PROOF, the first algorithm for this framework, and show that it achieves no-regret. Using numerical simulations, we show that PROOF achieves superior performance over existing baselines.
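For readers new to the online-bandit-learning ingredient, a minimal epsilon-greedy bandit loop looks like this (a toy sketch of the bandit component only, not the PROOF algorithm itself, which the paper defines formally):

```python
# Minimal epsilon-greedy multi-armed bandit: balance exploring arms
# against exploiting the arm with the best observed mean reward.
import random

def epsilon_greedy(true_means, steps=5000, eps=0.1, seed=0):
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms        # pulls per arm
    values = [0.0] * n_arms      # running mean reward per arm
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(n_arms)                        # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        # Bernoulli reward drawn from the chosen arm's true mean.
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts, values

# Arm 2 has the highest true mean, so it should attract most pulls.
counts, values = epsilon_greedy([0.2, 0.5, 0.8])
```

Bandit data-driven optimization layers prediction and optimization steps on top of this kind of explore/exploit loop, so that data acquisition itself is chosen adaptively.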
The FOSS Machine Learning News Blog is a brief overview of open machine learning news from all over the world. Free and Open machine learning means that everyone must be able to develop, test, play with, and deploy machine learning solutions. Read and share the FOSS ML Guide! And remember: you are invited to join the Free and Open Machine Learning open collaboration project.