Thou Shall Judge But With Fairness: Methods to Ensure an Unbiased Model
PyCon DE & PyData Berlin 2023

Is your model prejudiced? Does it deviate from the predictions it ought to make? In the world of artificial intelligence and machine learning, "fairness" is a particularly common word: it describes the quality of treating individuals and groups impartially, without favoritism or discrimination. Fairness in ML is essential for contemporary businesses. It builds consumer confidence, demonstrates to customers that their concerns matter, and aids adherence to guidelines established by regulators, thereby helping to uphold the idea of responsible AI. In this talk, we will explore how certain sensitive features influence a model and introduce bias into it, and look at how we can make it better.

We cannot escape thinking about fairness through numbers and mathematics; contrary to popular belief, models are not fair simply because they are mathematical. AI systems are subject to bias. Bias may be inherent, stemming from historical bias in the training dataset. There may be label bias, which occurs when the set of labeled data is not a full representation of the entire universe of potential labels. Another form is sampling bias, which occurs when certain people in the intended population have a higher or lower sampling probability than others. Models learn from such biased datasets, which may lead to unfair decisions, and as cascading models are built on top of one another, the bias continues to spread.

Model fairness is therefore a pressing concern. Unfair AI systems can cause recurring losses for businesses and damage a company's commercial value, leading to customer attrition, reputational harm, and decreasing transparency. In this talk, I will gently introduce you to the above concepts and to some open-source libraries that help us assess ML models' fairness. Lastly, I will walk you through assessing the fairness of a model trained on a law school dataset using Fairlearn, an open-source library by Microsoft, and the measures that can be taken to mitigate unfairness.

My talk will focus on:
1. Which metrics need to be considered when assessing the fairness of an ML model?
2. Which mitigation measures can be implemented to address unfairness?
3. Python code to gauge the fairness of a model trained on a law school dataset using Fairlearn, and steps to mitigate bias in the model (a minimal sketch of this workflow follows below).
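As a taste of the assess-then-mitigate workflow items 1-3 describe, here is a minimal sketch using Fairlearn's MetricFrame for per-group metrics, a demographic-parity gap, and the ExponentiatedGradient reduction for mitigation. Synthetic data stands in for the law school dataset, and the feature names, group labels, and injected bias are illustrative assumptions, not the talk's actual setup:

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

rng = np.random.default_rng(0)
n = 2_000
# Hypothetical sensitive attribute with two groups; a real analysis would use
# a column such as race from the law school dataset.
group = rng.choice(["A", "B"], size=n, p=[0.7, 0.3])
# Bake historical bias into the data: group A scores higher on average.
score = rng.normal(loc=np.where(group == "A", 0.3, -0.3), scale=1.0, size=n)
X = pd.DataFrame({"score": score, "noise": rng.normal(size=n)})
y = (score + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

# 1. Assess: per-group accuracy and selection rate, plus the demographic-parity gap.
clf = LogisticRegression().fit(X_tr, y_tr)
pred = clf.predict(X_te)
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_te, y_pred=pred, sensitive_features=g_te,
)
print(mf.by_group)
print("DP difference (unmitigated):",
      demographic_parity_difference(y_te, pred, sensitive_features=g_te))

# 2. Mitigate: retrain under a demographic-parity constraint via the
# reductions approach; the gap should shrink relative to the unmitigated model.
mitigator = ExponentiatedGradient(LogisticRegression(), constraints=DemographicParity())
mitigator.fit(X_tr, y_tr, sensitive_features=g_tr)
pred_fair = mitigator.predict(X_te)
print("DP difference (mitigated):",
      demographic_parity_difference(y_te, pred_fair, sensitive_features=g_te))

The same pattern applies to a real dataset: substitute the actual features, labels, and sensitive column, and compare the per-group metrics before and after mitigation.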

Speakers: Nandana Sreeraj