How to Remove Bias and Make Machine Learning Models Fair & Discrimination-Free, Using Credit Risk Data as an Example

Time:

08:18

Summary:

There have been multiple instances in which a machine learning model was found to discriminate against a particular section of society, be it rejecting female candidates during hiring, systematically disapproving loans for working women, or showing a high rejection rate for candidates with darker skin. Recently, open-source facial recognition algorithms were found to have lower accuracy on the faces of darker-skinned women than on those of lighter-skinned men. In another instance, research by CMU showed how Google's ad system displayed ads for high-income jobs to men more often than to women. Using credit risk data on which Publicis Sapient wanted to predict the probability of someone defaulting on a loan, the team was able to shortlist features that were discriminatory in nature.
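The abstract does not spell out how such discriminatory behavior is measured, but one widely used check is the disparate impact ratio (the "four-fifths rule"): the approval rate for the unprivileged group divided by that for the privileged group. The sketch below is a minimal illustration with synthetic decisions, not the speaker's actual method or data; the group labels and the 0.8 threshold are common conventions, assumed here for illustration.

```python
# Hedged sketch: checking a credit-scoring model's decisions for disparate
# impact against a protected attribute. Data and threshold are illustrative.

def disparate_impact_ratio(decisions, protected):
    """Approval-rate ratio: unprivileged group / privileged group.

    decisions: list of 1 (loan approved) / 0 (rejected)
    protected: list of 1 (unprivileged group) / 0 (privileged group)
    """
    def approval_rate(group_flag):
        group = [d for d, p in zip(decisions, protected) if p == group_flag]
        return sum(group) / len(group)
    return approval_rate(1) / approval_rate(0)

# Synthetic example: 6 of 8 privileged applicants approved vs 3 of 8 unprivileged.
decisions = [1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0]
protected = [0] * 8 + [1] * 8

ratio = disparate_impact_ratio(decisions, protected)
print(round(ratio, 2))  # 0.5 -- below the common 0.8 "four-fifths" threshold
```

A ratio well below 0.8 for decisions driven by a particular feature is one signal for shortlisting that feature as potentially discriminatory.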
