With the increasing automation of processes in almost all areas of life, intelligent systems, often in the form of expert systems, are increasingly used to support human decision-making.
Explainability and transparency as core requirements for the use of intelligent systems
Numerous expert groups have formulated transparency and explainability of intelligent systems as a core requirement for their use. This is based on the premise that a system can only be trusted if its decisions are comprehensible ("explainable").
Legal implementation of explainability
So far, it is unclear how the general demand for transparency and explainability can be implemented, particularly in legal and technical terms. Explainability is not an established legal concept; legal requirements for the justification and explanation of decisions appear in applicable law in very different forms. The project "EIS" investigates whether and how a holistic concept for ensuring the explainability of machine-supported decisions could be designed, or whether the requirements for explainability still need context-dependent concretisation.
These questions will be addressed by an interdisciplinary team at Saarland University, consisting of researchers from (theoretical and practical) philosophy, computer science, psychology and legal informatics. The Volkswagen Foundation has granted funding for a period of nine months for the preparation of a long-term application for a corresponding research project.
Project participants: Prof. Dr. Holger Hermanns (Informatics), Prof. Dr. Cornelius König (Psychology)
Project representatives: Research associate Dr. Andreas Sesing
Project duration: March 2019 – November 2019