An Overview on Explainable AI
Monday, 18 November 2019
As a trade-off for their superior performance, modern ML models are typically black boxes, i.e. it is not obvious how they behave under different circumstances. This forms a natural barrier to their use in business, as it requires blind trust in algorithmic performance that often directly affects the organization’s profit. Moreover, banking regulators and the GDPR require models to be interpretable, which conflicts with optimizing purely for predictive accuracy. An introduction to the rising field of explainable AI is given: specific requirements on interpretability are worked out, together with an overview of existing methodology such as variable importance, partial dependence, LIME and Shapley values, and a demonstration of their implementation and usage in R.
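To give a flavor of one of the methods mentioned above, here is a minimal sketch of permutation variable importance: the drop in model accuracy when a single feature's values are shuffled, which breaks that feature's link to the target. The sketch is in Python for brevity (the talk itself uses R); the toy data, the threshold "model", and all function names are illustrative assumptions, not part of the original abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends entirely on feature 0 and not at all on feature 1.
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)

def model_predict(X):
    # Stand-in for any fitted black-box model: thresholds feature 0.
    return (X[:, 0] > 0).astype(int)

def permutation_importance(predict, X, y, rng):
    """Accuracy drop per feature when that feature's column is shuffled."""
    base_acc = np.mean(predict(X) == y)
    importances = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # destroy feature j's signal
        importances.append(base_acc - np.mean(predict(Xp) == y))
    return importances

imp = permutation_importance(model_predict, X, y, rng)
# Feature 0 should show a large importance, feature 1 close to zero.
```

The same model-agnostic idea underlies the R implementations discussed in the talk: only predictions are needed, so the procedure works for any black-box model.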