This page contains informative tutorials from academics, practitioners, and policy experts on a range of topics related to algorithmic fairness, accountability, and transparency. Some of these tutorials were presented at FAT* conferences. Note that none of these whitepapers are peer-reviewed; instead, they are presented here as a resource to the community.

Tools

Post-Training Evaluation with Binder February 2, 2018

Jessica Forde, Chris Holdgraf, Yuvi Panda, Aaron Culich, Matthias Bussonnier, Min Ragan-Kelley, M Pacer, Carol Willing, Tim Head, Fernando Perez, Brian Granger, and the Project Jupyter Contributors

"Black box" models are increasingly prevalent in our world and have important societal impacts, but are often difficult to scrutinize or evaluate for bias. Binder is a software tool that provides anyone in the opportunity to examine a machine learning pipeline, promoting fairness, accountability, and transparency. Binder is used to create custom computing environments that can be shared and used by many remote users, enabling the user to build and register a Docker image from a repository and connect with JupyterHub. JupyterHub, repo2docker, and JupyterLab work together on Binder to allow a user to evaluate a machine learning pipeline with much greater transparency than a typical publication or GitHub page.

De-biasing Classifiers with Themis-ml February 2, 2018

Niels Bantilan

Decision support systems (DSS) are information systems that help people make decisions in contexts such as medical diagnosis, loan granting, and hiring. As machine learning (ML) is integrated into these systems, we need better tools to measure and mitigate discriminatory patterns both in training data and in the predictions made by ML models. This tutorial introduces themis-ml, an open source Python library for measuring and reducing potential discrimination (PD) in machine learning systems.
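
For orientation only, the sketch below computes the "mean difference" group-discrimination measure that libraries such as themis-ml report, written in plain NumPy rather than with the library's own API; the toy arrays are made up.

    import numpy as np

    # y: binary outcomes (1 = favorable decision, e.g. loan granted)
    # s: binary protected attribute (1 = disadvantaged group)
    y = np.array([1, 0, 1, 1, 0, 1, 0, 0])
    s = np.array([0, 0, 0, 0, 1, 1, 1, 1])

    # Mean difference: P(favorable | advantaged) - P(favorable | disadvantaged).
    # Values near 0 suggest little group-level disparity in outcomes.
    mean_difference = y[s == 0].mean() - y[s == 1].mean()
    print(mean_difference)  # 0.75 - 0.25 = 0.5 on this toy data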

Practical Techniques for Interpreting Machine Learning Models: Introductory Open Source Examples Using Python, H2O, and XGBoost February 2, 2018

Patrick Hall, Navdeep Gill, and Mark Chan

This series of Jupyter notebooks uses open source tools such as Python, H2O, XGBoost, GraphViz, Pandas, and NumPy to outline practical explanatory techniques for machine learning models and results. The notebooks cover the following modeling and explanatory techniques, along with practical variants and concise visualizations of each (a brief surrogate-model sketch follows the list):

  • Monotonically constrained GBMs, partial dependence, and ICE
  • Decision tree surrogate models, variable importance, and LOCO local feature importance
  • LIME
  • Sensitivity analysis
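
The following sketch illustrates just one of the techniques above, a decision tree surrogate model, using scikit-learn in place of H2O; the synthetic data and feature names are placeholders, not those used in the notebooks.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Placeholder data standing in for the notebooks' example dataset.
    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

    # "Black box" model whose behavior we want to explain.
    gbm = GradientBoostingClassifier(random_state=0).fit(X, y)

    # Surrogate: a shallow, interpretable tree trained to mimic the GBM's predictions.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, gbm.predict(X))

    # Inspect the surrogate's rules as an approximate, global explanation of the GBM,
    # and check how faithfully it reproduces the GBM's decisions.
    print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))
    print("fidelity:", surrogate.score(X, gbm.predict(X)))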

Optimized Pre-Processing for Discrimination Prevention February 2, 2018

Flavio P. Calmon, Dennis Wei, Bhanukiran Vinzamuri, Karthikeyan Natesan Ramamurthy, and Kush R. Varshney

This document presents the findings obtained with a data pre-processing algorithm for discrimination prevention that we have recently developed, and discusses the salient aspects of its software implementation. Our optimization method transforms the probability distribution of the input dataset into an output probability distribution subject to three objectives and constraints: (i) group discrimination control, (ii) individual distortion control, and (iii) utility preservation.
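
Schematically (paraphrasing the abstract rather than reproducing the paper's exact notation), the method learns a randomized mapping from (X, Y, D) to transformed values (X̂, Ŷ), where D is the protected attribute; the distance Δ, divergence J, distortion δ, and thresholds ε and c below are placeholders for the specific choices made in the paper.

    \min_{p_{\hat{X},\hat{Y}\mid X,Y,D}} \; \Delta\!\left(p_{\hat{X},\hat{Y}},\, p_{X,Y}\right)
        \quad \text{(utility preservation)}

    \text{s.t.} \quad J\!\left(p_{\hat{Y}\mid D}(\cdot \mid d),\, p_{\hat{Y}}(\cdot)\right) \le \epsilon
        \quad \forall d \quad \text{(group discrimination control)}

    \phantom{\text{s.t.}} \quad
    \mathbb{E}\!\left[\delta\big((X,Y),(\hat{X},\hat{Y})\big) \,\middle|\, X=x,\, Y=y,\, D=d\right] \le c_{x,y,d}
        \quad \forall (x,y,d) \quad \text{(individual distortion control)}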

Policy and Practice

How Data Scientists Help Regulators and Banks Ensure Fairness when Implementing Machine Learning and Artificial Intelligence Models February 2, 2018

Nicholas Schmidt, Bernard Siskin, and Syeed Mansur

Nearly all major lending institutions are taking steps to implement machine learning and artificial intelligence models to better measure marketing response rates and creditworthiness, assess the likelihood of fraud, and quantify other risks and opportunities. However, implementing machine learning has proved difficult because banks are highly regulated and face substantial challenges in complying with model governance standards and anti-discrimination laws. A distinct challenge has been that compliance guidelines have not kept pace with the breakneck speed of technological change in ML/AI. This paper provides a high-level overview of these issues and outlines how we advise banks to comply with fair lending requirements when implementing machine learning. We describe the current focus of regulators, explain how discrimination is typically measured, and relate how we expect fair lending compliance to be implemented in the coming years.
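
As one concrete illustration of the kind of disparity measure the paper alludes to (this example is ours, not the authors'), fair lending reviews often begin with an adverse impact ratio comparing approval rates across groups; the counts below are made up.

    # Hypothetical approval counts from a lending model's decisions.
    protected_approved, protected_applicants = 120, 200   # e.g. a protected-class group
    control_approved, control_applicants = 450, 600       # e.g. the control group

    protected_rate = protected_approved / protected_applicants   # 0.60
    control_rate = control_approved / control_applicants         # 0.75

    # Adverse impact ratio (AIR): values well below 1 flag potential disparate impact;
    # 0.8 is a common rule-of-thumb threshold borrowed from employment contexts.
    air = protected_rate / control_rate
    print(round(air, 2))  # 0.8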

A Framework for Addressing Fairness in Consequential Machine Learning February 4, 2018

Chuck Howell

There is a case to be made that society is in the early phase of another Industrial Revolution, driven by the rapid advancement of AI capabilities, with at least as significant an impact as the first Industrial Revolution and an accelerated pace of adoption. However, an inability to establish justified confidence in consequential AI systems will inhibit their adoption (wasting opportunities to tackle major problems) and/or result in the adoption of systems with major latent flaws and potentially serious consequences. At MITRE, we are exploring how concepts from the systems safety community can be adapted to support the calibration, mitigation, and informed acceptance of fairness risks in consequential ML systems.