Fair and Explainable AI
Material for a 3-hour hands-on workshop using Jupyter notebooks, Python, AIF360, and AIX360, together with Watson Studio, Watson Machine Learning, and Watson OpenScale
This 3-hour workshop starts with an introduction to the tools and data science techniques specific to fair and explainable AI. We will discuss how to remove unfair bias in machine learning and how to explain machine learning models, and then move on to monitoring performance, bias and drift. Throughout, we will work through practical examples in Jupyter notebooks running on IBM Cloud.
Start by setting up your IBM Cloud account here so you are ready to follow along with the workshop.
Extensive evidence has shown that AI can embed human and societal biases and deploy them at scale, and many algorithms are now being reexamined due to illegal bias. How do you remove bias and discrimination from the machine learning pipeline? In this workshop you will learn about de-biasing techniques that can be implemented using the open source toolkit AI Fairness 360. AI Fairness 360 (AIF360) is an extensible, open source toolkit for measuring, understanding, and removing AI bias that brings together the most widely used bias metrics, bias mitigation algorithms, and metric explainers. In part 1 of this workshop we will go through how to measure bias in data and models, how to apply fairness algorithms to reduce bias, and work through a practical use case of bias measurement and mitigation.
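To make "measuring bias" concrete before the notebooks, here is a plain-Python sketch (not the AIF360 API itself) of two of the group-fairness metrics AIF360 reports: statistical parity difference and disparate impact. The toy labels and group assignments are made up for illustration.

```python
# Group-fairness metrics on a toy dataset.
# labels: 1 = favorable outcome (e.g. hired), 0 = unfavorable.
# groups: 1 = privileged group, 0 = unprivileged group.

def group_rate(labels, groups, group_value):
    """Favorable-outcome rate P(Y=1 | group)."""
    members = [y for y, g in zip(labels, groups) if g == group_value]
    return sum(members) / len(members)

def statistical_parity_difference(labels, groups):
    """Unprivileged rate minus privileged rate; 0 means parity."""
    return group_rate(labels, groups, 0) - group_rate(labels, groups, 1)

def disparate_impact(labels, groups):
    """Ratio of unprivileged to privileged rate; 1 means parity."""
    return group_rate(labels, groups, 0) / group_rate(labels, groups, 1)

labels = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

# Unprivileged rate is 0.2, privileged rate is 0.8, so the difference is
# strongly negative and the ratio is well below the common 0.8 threshold.
print(statistical_parity_difference(labels, groups))
print(disparate_impact(labels, groups))
```

AIF360 wraps exactly this kind of computation (and many more metrics) behind its dataset and metric classes, and its mitigation algorithms then adjust the data or model to move these values toward parity.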
In many applications, trust in an AI system will come from its ability to ‘explain itself.’ But when it comes to understanding and explaining the inner workings of an algorithm, one size does not fit all. Different stakeholders require explanations for different purposes and objectives, and explanations must be tailored to their needs. While a regulator will aim to understand the system as a whole and probe into its logic, consumers affected by a specific decision will be interested only in factors impacting their case – for example, in a loan processing application, they will expect an explanation for why the request was denied and want to understand what changes could lead to approval.
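The loan example can be made concrete: a contrastive "what would lead to approval" explanation amounts to searching for the smallest change that flips the decision. This sketch uses a hypothetical linear scoring rule with made-up feature names and weights, not a real credit model.

```python
# Hypothetical linear credit-scoring rule: approve if score >= THRESHOLD.
WEIGHTS = {"income": 0.5, "credit_history_years": 1.0, "open_debts": -2.0}
THRESHOLD = 10.0

def score(applicant):
    """Weighted sum of the applicant's features."""
    return sum(WEIGHTS[f] * v for f, v in applicant.items())

def approval_counterfactual(applicant, feature, step=1.0, max_steps=100):
    """Smallest increase in one feature that flips a denial to an approval."""
    cf = dict(applicant)
    for _ in range(max_steps):
        if score(cf) >= THRESHOLD:
            return cf[feature] - applicant[feature]
        cf[feature] += step
    return None  # no approval found within the search range

applicant = {"income": 10.0, "credit_history_years": 2.0, "open_debts": 1.0}
print(score(applicant))                               # 5.0 -> denied
print(approval_counterfactual(applicant, "income"))   # 10.0 -> "raise income by 10"
```

An answer like "your request would have been approved had your income been 10 units higher" is the kind of consumer-facing explanation described above, as opposed to the global view a regulator would need.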
AI Explainability 360 (AIX360) is an open source toolkit that includes algorithms that span the different dimensions of ways of explaining along with proxy explainability metrics. In part 2 we will explore this toolkit and ways to explain model predictions.
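One of the dimensions AIX360 covers is local, per-prediction explanation. For a linear model that idea can be shown exactly: each feature's contribution is its weight times its deviation from a baseline. This is an illustrative sketch of the style of attribution the toolkit generalizes to non-linear models; the names and numbers are invented.

```python
# Per-feature contributions of one prediction relative to a baseline input
# (e.g. the dataset mean). For a linear model this decomposition is exact.

def linear_attributions(weights, x, baseline):
    """Map each feature to its contribution to the score shift."""
    return {f: weights[f] * (x[f] - baseline[f]) for f in weights}

weights  = {"income": 0.5, "age": 0.1, "open_debts": -2.0}
baseline = {"income": 40.0, "age": 35.0, "open_debts": 2.0}  # assumed means
x        = {"income": 30.0, "age": 45.0, "open_debts": 4.0}

contribs = linear_attributions(weights, x, baseline)
# Below-average income pulls the score down by 5.0, age adds 1.0,
# and the extra open debts subtract another 4.0.
print(contribs)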
In the last part of the workshop the open-source tools are combined into one pipeline: the model is built in Watson Studio and deployed as a REST API to Watson Machine Learning. Finally, Watson OpenScale tracks and measures the model to help ensure it remains fair, explainable and compliant. We will go through the steps of building a custom model-serving engine, accessing this model through a REST API, and logging the payload for the model using Watson OpenScale. We will then deploy a credit risk model and monitor it to explore the different aspects of trusted AI.
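Payload logging is the glue in that pipeline: each scoring request and its response are recorded so the monitor can later compute fairness, drift and quality metrics over real traffic. The sketch below is a minimal stand-in, assuming a fields/values payload convention and a toy scoring rule; it is not the actual Watson OpenScale schema or client API.

```python
import json
import time

def score(payload):
    """Stand-in for a REST scoring call: deny when open_debts exceeds 2."""
    fields = payload["fields"]
    predictions = []
    for row in payload["values"]:
        record = dict(zip(fields, row))
        predictions.append("denied" if record["open_debts"] > 2 else "approved")
    return {"fields": ["prediction"], "values": [[p] for p in predictions]}

def log_payload(request, response):
    """One payload-logging record: what went in, what came out, and when."""
    return {
        "request": request,
        "response": response,
        "scoring_timestamp": time.time(),
    }

request = {"fields": ["income", "open_debts"], "values": [[30.0, 4], [55.0, 1]]}
response = score(request)
record = log_payload(request, response)
print(json.dumps(record["response"]))  # the monitor consumes records like this
```

In the workshop, Watson OpenScale plays the role of `log_payload` and the downstream analysis: it subscribes to the deployed model, captures each request/response pair, and evaluates the accumulated records for bias, drift and accuracy.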
08:45 am - 09:00 am Enrolment & Setup
09:00 am - 09:10 am Introductory remarks
09:10 am - 09:30 am Fair and Explainable AI
09:30 am - 10:15 am Remove Unfair Bias in Machine Learning
10:15 am - 10:30 am Break
10:30 am - 11:05 am Explain Machine Learning Models
11:05 am - 11:40 am Build a machine learning model and monitor the performance, bias and drift
11:40 am - 11:50 am Summary & Next Steps including Q&A
11:50 am - 12:00 pm Closing remarks