What happened?

Explainable Intelligent Systems

We were very pleased to welcome our first guests at SSC: the EIS team spent three days at Speinshart to focus on their research and finalize the corresponding report.

The project Explainable Intelligent Systems investigates how the explainability of artificially intelligent systems contributes to the fulfillment of important societal desiderata, such as responsible decision-making and the trustworthiness of AI, among others.

Artificially intelligent systems increasingly augment or take over tasks previously performed by humans. This concerns both low-stakes tasks, such as recommending books or movies, and high-stakes tasks, such as suggesting which applicant to hire, which medical treatment to give a patient, or how to navigate autonomous cars through heavy traffic. Such situations raise a variety of moral, legal, and broader societal challenges. To meet these challenges, it is often claimed, we need to ensure that artificially intelligent systems deliver reliable, trustworthy, and understandable explanations for their decisions. But how can this be achieved?

Explainable Intelligent Systems (EIS) is a highly interdisciplinary and innovative research project based at Saarland University, TU Dortmund University, and the University of Bayreuth, Germany. It brings together experts from informatics, law, philosophy, and psychology. Together, the researchers investigate how intelligent systems can and should be designed in order to provide explainable recommendations and thus meet important societal desiderata.

Their research focuses on what society can appropriately desire from intelligent systems, how precisely understandability contributes to meeting the various challenges raised by the increasing use of AI in modern society, and how explainability enables this much-needed understandability. Given that both the adequacy of a given explanation and its success in evoking understanding vary depending on a range of factors, they believe that the answers to these research questions will vary with the specific case or context under consideration. Therefore, a major goal of EIS is to develop a context-sensitive framework for explainability. In the near future, their framework is intended to help scientists, politicians, and stakeholders guide research and policy-making regarding intelligent systems.

Join the conversation

What are your thoughts? Are humans still the driving force behind innovation, or are we being overtaken by our own creations?

The author

Dr Adrian Rossner, born in 1991, is a historian and Anglicist with a special focus on structural change and industrialisation. After studying English and History at the University of Bayreuth, he completed his PhD at the Franconian Regional History Research Institute, focusing on economic and social developments in the Münchberg region during the peak of industrialisation.

Following several years as a research associate in teacher education and historical research, he took on the role of Project Coordinator for the Scientific Centre at Speinshart Monastery in 2023. Since November 2024, he has been CEO of the Speinshart Scientific Center for AI and SuperTech, where he leads the development and strategic direction of this unique research institution in Germany.
