What happened?

Causal Machine Learning as a Foundation for the Next Generation of Scientific Discovery

Causal machine learning is emerging as a key pillar of the next generation of scientific research. While traditional machine-learning methods excel at recognizing patterns and correlations, they often struggle to answer questions about why phenomena occur or how targeted interventions may change outcomes. By integrating causal reasoning with advances in representation learning and foundation models, new opportunities arise to develop systems that go beyond prediction—supporting scientific understanding, reliable decision-making, and actionable insights.

Against this backdrop, the event brought together researchers working on different yet complementary perspectives within causal machine learning. The aim was to identify synergies across projects and disciplines, exchange best practices for the design, training, and evaluation of causal models, and outline strategic research directions for the future. In particular, discussions focused on how foundation models could be extended with causal principles to ensure interpretability, generalization, and fairness.

For the participating research groups, this focus is particularly important. In many scientific applications, it is not sufficient for models to merely generate predictions—they must also provide explanations and guide effective interventions. Developing a shared vision for causal machine learning helps advance trustworthy AI systems for science while fostering collaboration and knowledge exchange between teams.

A particular strength of the event was the convergence of two distinct research traditions. The group led by Stefan Bauer focuses on causal discovery and representation learning in biology, aiming to uncover underlying mechanisms and abstractions. In contrast, the group led by Stefan Feuerriegel concentrates on treatment effect estimation and decision support in medicine, addressing questions related to interventions, outcomes, and clinical practice. These perspectives represent two methodological traditions—causal discovery and causal inference—that have often been studied separately.

With the emergence of powerful foundation models, there is now a unique opportunity to bring these approaches closer together. The discussions during the event highlighted the significant potential of such integration: future systems could simultaneously provide explanations, generate predictions, and inform decisions. In doing so, they would contribute to a new generation of trustworthy AI systems for scientific research.


The author

Dr Adrian Rossner, born in 1991, is a historian and Anglicist with a special focus on structural change and industrialisation. After studying English and History at the University of Bayreuth, he completed his PhD at the Franconian Regional History Research Institute, focusing on economic and social developments in the Münchberg region during the peak of industrialisation.

Following several years as a research associate in teacher education and historical research, he took on the role of Project Coordinator for the Scientific Centre at Speinshart Monastery in 2023. Since November 2024, he has been CEO of the Speinshart Scientific Center for AI and SuperTech, where he leads the development and strategic direction of this research institution, which is unique in Germany.
