Causal machine learning is emerging as a key pillar for the next generation of scientific research. While traditional machine-learning methods excel at recognizing patterns and correlations, they often struggle to answer questions about why phenomena occur or how targeted interventions may change outcomes. By integrating causal reasoning with advances in representation learning and foundation models, researchers can develop systems that go beyond prediction, supporting scientific understanding, reliable decision-making, and actionable insights.
Against this backdrop, the event brought together researchers working on different yet complementary perspectives within causal machine learning. The aim was to identify synergies across projects and disciplines, exchange best practices for the design, training, and evaluation of causal models, and outline strategic research directions for the future. In particular, discussions focused on how foundation models could be extended with causal principles to ensure interpretability, generalization, and fairness.

For the participating research groups, this focus is particularly important. In many scientific applications, it is not sufficient for models to merely generate predictions—they must also provide explanations and guide effective interventions. Developing a shared vision for causal machine learning helps advance trustworthy AI systems for science while fostering collaboration and knowledge exchange between teams.
A particular strength of the event was the convergence of two distinct research traditions. The group led by Stefan Bauer focuses on causal discovery and representation learning in biology, aiming to uncover underlying mechanisms and abstractions. In contrast, the group led by Stefan Feuerriegel concentrates on treatment effect estimation and decision support in medicine, addressing questions related to interventions, outcomes, and clinical practice. These perspectives represent two methodological traditions—causal discovery and causal inference—that have often been studied separately.
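To make the distinction between the two traditions concrete, consider the following minimal sketch in Python. It is purely illustrative and not taken from either group's work; the toy model, variable names, and effect size of 2.0 are assumptions chosen for the example. It simulates a confounded treatment and shows the question causal inference asks (what is the effect of T on Y?), while the closing comments contrast it with the structure-learning question causal discovery asks.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Toy structural causal model: Z -> T, Z -> Y, T -> Y (Z confounds T and Y).
z = rng.normal(size=n)                # confounder
t = z + rng.normal(size=n)            # treatment, partly driven by Z
y = 2.0 * t + z + rng.normal(size=n)  # outcome; true effect of T on Y is 2.0

# Causal inference question: how much does intervening on T change Y?
# A naive regression of Y on T alone is biased by the confounder Z.
naive = np.linalg.lstsq(np.c_[t, np.ones(n)], y, rcond=None)[0][0]

# Adjusting for Z recovers the true effect.
adjusted = np.linalg.lstsq(np.c_[t, z, np.ones(n)], y, rcond=None)[0][0]

print(f"naive estimate:    {naive:.2f}")     # approx. 2.5 (confounded)
print(f"adjusted estimate: {adjusted:.2f}")  # approx. 2.0 (true effect)

# Causal discovery asks a different question: given only samples of (Z, T, Y),
# which graph generated them? Constraint-based methods such as the PC algorithm
# test conditional independencies; for example, in a chain Z -> T -> Y with no
# direct Z -> Y edge, Z would be independent of Y given T.
```

In the vision discussed at the event, a causally grounded foundation model would address both questions within a single system rather than treating them as separate pipelines.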

With the emergence of powerful foundation models, there is now a unique opportunity to bring these approaches closer together. The discussions during the event highlighted the significant potential of such integration: future systems could simultaneously provide explanations, generate predictions, and inform decisions. In doing so, they would contribute to a new generation of trustworthy AI systems for scientific research.