Building accurate machine learning models is not always enough, especially when predictions are used to make decisions that impact people's lives. In addition, the fairness of a model becomes very important when its decisions need to be fully trusted.
This session will use practical examples to define and quantify the fairness of both data and models, exploring algorithms to detect bias and disparity in data and to mitigate bias in both data and models.
Follow along if you want to learn more about fairness concepts and how to explore bias in your own data and models. You will learn about bias definitions and mitigation algorithms, and how to apply them in a Python Jupyter notebook.
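To give a flavour of what "quantifying fairness" can look like, here is a minimal sketch of one common group-fairness metric, the disparate impact ratio. The function name, the synthetic data, and the metric choice are my own illustration and are not taken from the workshop notebook:

```python
# Illustrative sketch (not from the workshop notebook): the disparate
# impact ratio, a common group-fairness metric. The data is synthetic.

def disparate_impact(outcomes, groups, privileged, favorable=1):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    A value near 1.0 suggests parity between groups; the common
    "80% rule" flags values below 0.8 as potentially discriminatory.
    """
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate_priv = sum(o == favorable for o in priv) / len(priv)
    rate_unpriv = sum(o == favorable for o in unpriv) / len(unpriv)
    return rate_unpriv / rate_priv

# Synthetic example: 1 = loan approved, group "A" is privileged.
outcomes = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(disparate_impact(outcomes, groups, privileged="A"))  # 0.25
```

Here group "A" is approved at a rate of 0.8 and group "B" at 0.2, giving a ratio of 0.25, well below the 0.8 threshold. The notebook explores metrics like this, along with bias mitigation, in much more depth.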
If you do not have an IBM Cloud account yet, start by following these instructions. Otherwise, go straight to this section to create a Watson Studio service.
To load the notebook for this workshop, select From URL, give the notebook a name, paste the link below into the Notebook URL field, and then click the Create button at the bottom right. You can leave the runtime as the default.
https://raw.githubusercontent.com/MargrietGroenendijk/jupyter-notebooks/main/notebooks/dig-dev-conf-2020-beyond-accuracy.ipynb