What’s next?

02/03 - 02/06

AI in a Fragmented World: Navigating Trade-offs Across Disciplines and Cultures

As AI systems increasingly permeate critical domains, their design and governance require balancing competing priorities across cultural and disciplinary boundaries. Regulatory efforts such as the EU AI Act highlight the challenge of aligning universal principles with diverse local values and legal systems. 

At the same time, trade-offs between AI autonomy and human agency or innovation and fundamental rights complicate both technical development and policy formulation. This interdisciplinary and intercultural workshop brings together experts from philosophy, law, computer science, and information systems across Europe, Australia, Asia, North America, and beyond. It explores key trade-offs in AI design and governance to identify shared commitments and practical guardrails that can guide researchers, developers, and policymakers. By fostering dialogue across fragmented cultural and disciplinary contexts, the workshop aims to generate actionable insights for developing AI systems that are not only innovative and effective, but also ethically grounded and culturally sensitive.

Workshop Background and Goals

There is a growing awareness among AI researchers and policymakers that technological design cannot be divorced from its broader social, cultural, and political context (Mittelstadt et al., 2016; Crawford, 2021). However, both AI research and governance remain fragmented along disciplinary and cultural lines. Legal scholars often lack insight into the technical constraints of real-world AI development, while engineers may struggle to translate ethical principles into actionable design requirements (Morley et al., 2020; Selbst et al., 2019). Philosophers and ethicists may propose normative ideals that, while conceptually rigorous, fail to resonate with the pragmatic concerns of policymakers or industry stakeholders (Binns, 2018).

Beyond disciplinary divides, global AI development is shaped by diverse cultural and political frameworks. Collective versus individual concerns are weighed differently across societies (Hofstede, 2001); conceptions of fairness and trust are culturally situated (Shneiderman, 2022; Jobin et al., 2019); and the prioritization of values such as innovation, accountability, or fundamental rights often reflects a nation’s political and economic system (Greene et al., 2019; Dignum, 2019). As global power structures evolve, the strategic role of AI is also becoming a factor in geopolitics.

Hence, the goal of this workshop is to bring together scholars from different disciplines (philosophy, law, computer science, information systems, and related fields) and cultural backgrounds (Europe, Australia, Asia, North America, and beyond) to engage in an intensive, structured exchange about the future of AI design and governance. The workshop is designed to enable joint reflection on the current state of the field and the challenges and trade-offs that lie ahead. By fostering interdisciplinary and intercultural dialogue, we aim to articulate shared insights, identify areas of friction, and develop a research agenda that is attentive to global diversity while being grounded in practical concerns.

We are very pleased to welcome this international group at #Hightechabbey and are looking forward to the results of their meeting!

Please note: This announcement is for information only about events taking place at SSC; public attendance is not possible due to the retreat character of the meeting.


