Those who make momentous decisions must also be able to explain them. While learning theory gives good reasons why learning systems work in principle, the theoretical basis for understanding individual decisions made by these systems is often lacking: Why did the driving assistant brake in this exact situation? Why does the system suggest immunotherapy rather than chemotherapy for this cancer patient? Why does a learning system report a serious IT security incident based on seemingly innocuous event data? With our focus on XAI, we are laying these theoretical foundations. In this way, we are building bridges from computer science and mathematics to law, politics, economics, medicine, and the natural sciences, where XAI contributes not only to predicting complex phenomena but also to explaining them and to building theory about them.