Speaker:

Sray Agarwal

How to Remove Bias and Make Machine Learning Models Fair & Discrimination-Free at the Example of Credit Risk Data

Date:

Monday, November 16, 2020

Time:

14:10

Summary:

There have been multiple instances in which a machine learning model was found to discriminate against a particular section of society: rejecting female candidates during hiring, systematically disapproving loans to working women, or showing a high rejection rate for darker-skinned candidates. Recently, open-source facial recognition algorithms were found to have lower accuracy on darker-skinned female faces than on lighter-skinned male faces. In another instance, research by CMU showed that Google's ad system displayed ads for high-income jobs to men more often than to women. Using credit risk data, where the goal was to predict the probability of someone defaulting on a loan, we were able to shortlist features that were discriminatory in nature.
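One common way to flag the kind of discrimination described above is to measure disparate impact: the ratio of favorable-outcome rates between an unprivileged and a privileged group. The sketch below is illustrative only; the group labels and toy predictions are invented for this example and are not taken from the talk's credit risk data.

```python
# Hypothetical sketch: checking loan-approval predictions for disparate impact.
# All data below is synthetic, invented purely for illustration.

def disparate_impact(approved, group):
    """Ratio of approval rates (unprivileged / privileged).

    A widely used rule of thumb (the '80% rule') flags values below 0.8
    as potential evidence of discrimination.
    """
    priv = [a for a, g in zip(approved, group) if g == "privileged"]
    unpriv = [a for a, g in zip(approved, group) if g == "unprivileged"]
    return (sum(unpriv) / len(unpriv)) / (sum(priv) / len(priv))

# Toy model outputs (1 = loan approved) and protected-group membership
approved = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
group = ["privileged"] * 5 + ["unprivileged"] * 5

print(round(disparate_impact(approved, group), 2))  # 0.25 -> well below 0.8
```

Here the unprivileged group is approved at 20% versus 80% for the privileged group, so the ratio of 0.25 falls far below the 0.8 threshold. In practice, checks like this are run per candidate feature to shortlist those that behave in a discriminatory way.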
