
White-Box AI

“White-Box AI” – Transparent decision support through interpretable machine learning models

Development and evaluation of interpretable model structures incorporating expert knowledge

Background
Complex machine learning (ML) methods such as deep neural networks and boosted decision trees can achieve high prediction accuracy thanks to their flexibility, but their behavior and the results they generate are often difficult to understand. In particular, it is not obvious how the input variables interact to produce the model's final forecast or prediction. This undermines trust in AI applications and makes it difficult for analysts and developers to modify or improve such models to meet business requirements, particularly in critical scenarios.

Objective
This research project targets the development of interpretable ML models. The interpretability of a model can be deliberately influenced through model restrictions such as linearity, additivity, and monotonicity, without necessarily sacrificing predictive accuracy. These restrictions also make it possible to obtain so-called shape functions for individual input variables, which show how each input variable contributes to the model's prediction. Shape functions are easy to trace and give the user insight into how the model works. At the same time, such model constraints allow expert knowledge to be explicitly incorporated into the operation of an ML model. Against this background, this research project addresses the central questions of to what extent interpretable ML models can be improved by incorporating expert knowledge and to what extent this supports the user's interpretation. A minimal illustration of shape functions and monotonicity constraints is sketched below.
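To make the idea of shape functions and constraint-based expert knowledge concrete, the following minimal sketch (our illustration, not code from the project) fits a generalized additive model with the pygam library on hypothetical data; the feature choices and the monotonicity constraint are assumptions for demonstration only.

```python
# Minimal sketch: an additive model whose per-feature shape functions are
# directly inspectable, with expert knowledge injected as a monotonicity
# constraint. Data and constraint choice are hypothetical.
import numpy as np
from pygam import LinearGAM, s

# Hypothetical data: two input variables, one continuous target.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, size=500)

# Additive structure: y ≈ f0(x0) + f1(x1).
# Assumed expert knowledge: the effect of feature 1 is non-decreasing,
# so its shape function is constrained to be monotonically increasing.
gam = LinearGAM(s(0) + s(1, constraints="monotonic_inc")).fit(X, y)

# Shape functions: each term's partial dependence shows how one input
# variable contributes to the prediction, independently of the others.
for i in range(2):
    grid = gam.generate_X_grid(term=i)
    shape = gam.partial_dependence(term=i, X=grid)
    print(f"feature {i}: shape function evaluated at {len(shape)} grid points")
```

The shape functions returned by the partial-dependence call can be plotted and handed to domain experts for inspection, which is the kind of traceability the paragraph above describes.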

People

Lead: Mathias Kraus and Patrick Zschech (University of Leipzig)
Nico Hambauer
Sven Kruschel
Julian Rosenberger
Lasse Bohlen


Fakultät für Informatik und Data Science

Contact

Lehrstuhl für Nachvollziehbare Künstliche Intelligenz in der Betrieblichen Wertschöpfung

Prof. Dr. Mathias Kraus

Secretariat

Kornela Bauer

sekretariat.kraus@informatik.uni-regensburg.de

Bajuwarenstraße 4, Room 534