How to Remove Bias and Make Machine Learning Models Fair & Discrimination-Free at the Example of Credit Risk Data


Summary:

Machine learning models have repeatedly been found to discriminate against particular groups, whether by rejecting female candidates during hiring, systematically denying loans to working women, or showing higher rejection rates for darker-skinned applicants. Recently, open-source facial recognition algorithms were found to have markedly lower accuracy on the faces of darker-skinned women than on those of lighter-skinned men. In another instance, research by CMU showed that Google's ad system displayed ads for high-income jobs to men more often than to women. Using credit risk data on which Publicis Sapient wanted to predict the probability of a borrower defaulting on a loan, the team was able to shortlist features that were discriminatory in nature.
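One common way to shortlist potentially discriminatory features, as in the credit risk case described above, is the "four-fifths rule": a protected attribute is flagged if the favorable-outcome rate for the unprivileged group falls below 80% of the rate for the privileged group. The sketch below illustrates this on toy loan-approval data; the attribute names and data are hypothetical, not taken from the Publicis Sapient case study.

```python
def disparate_impact(outcomes, groups, unprivileged, privileged):
    """Ratio of favorable-outcome rates: P(y=1 | unprivileged) / P(y=1 | privileged)."""
    def rate(group):
        members = [y for y, g in zip(outcomes, groups) if g == group]
        return sum(members) / len(members)
    return rate(unprivileged) / rate(privileged)

# Toy loan-approval data (hypothetical): 1 = approved, 0 = rejected.
approved = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0]
gender   = ["F", "F", "M", "M", "F", "M", "M", "M", "F", "F"]

di = disparate_impact(approved, gender, unprivileged="F", privileged="M")
print(f"disparate impact ratio: {di:.2f}")
if di < 0.8:  # four-fifths rule threshold
    print("gender shows disparate impact -> candidate for removal or mitigation")
```

Ratios well below 0.8 (or well above 1.25) suggest the feature, or a proxy correlated with it, is driving unequal outcomes and warrants closer inspection before the model is deployed.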
