Here we present the titles and abstracts for our keynotes at the upcoming SAS25 conference.
The keynotes are public (Location: Rühle-Saal @hlrs). Please write to nico.formanek@hlrs.de if you plan to attend.
Monday July 28th (11am) – Ben Recht
A Bureaucratic Theory of Statistics
This talk proposes a framework for understanding statistics as a calculus of bureaucracy, providing numerical foundations for governance in the face of uncertainty through clear, transparent rules. I will first outline how statistical prediction can leverage collected data to make decisions that comply with bureaucratic rules and standards. I will lean on the 70-year legacy of Paul Meehl’s 1954 monograph Clinical versus Statistical Prediction, which provided early evidence of the superiority of statistical methods for predicting the consequences of actions and motivated decades of research proving this case. The talk will examine Meehl’s argument in the context of contemporary machine learning systems and provide theoretical arguments for why and when statistical methods outperform human judgment.
I will then highlight the complementary bureaucratic role of randomized experiments, especially in medicine, which serve to inform decision-making by generating data. I will also describe how randomized trials establish “ex ante policies”—statistical rules and procedures designed before data collection. I will conclude by suggesting new directions for research in policy-oriented statistical methodology.
Tuesday July 29th (10am) – Marcus Düwell
Uncertainty as a Necessary Condition of the Human Lifeform
Humans live under conditions that are not fully under their control. This has, however, ambivalent implications. On the one hand, it prompts attempts to gain control over the conditions of our life by creating institutions and developing technologies, and by exercising foresight and precaution. On the other hand, it is precisely the lack of determination in human thinking and behavior that forms a necessary condition for morality, creativity and curiosity. That means that attempts to abolish uncertainty would constitute an attack on the conditio humana. At the same time, we need to deal with uncertainty in ways that make possible a meaningful life in which long-term projects can be pursued. This is particularly challenging under the conditions of developed technological societies, whose ecological side-effects are primarily destructive from a long-term perspective. This necessitates long-term foresight and the plannability of human life, for instance through models that can be used to forecast future developments. Such enforced longtermism is not only challenging for all types of policymaking; it also poses a possible threat to human freedom and creativity.
The presentation will investigate the different ways in which uncertainty affects the conditio humana and will discuss possible attitudes towards uncertainty, such as precaution. It will focus on the demanding hermeneutic challenges related to this dimension, the normative importance of protecting the conditions of human agency, and likewise the importance of living and acting towards the horizon of an open future.
Wednesday July 30th (10am) – John Symons
Artificial Intelligence and the Evolution of Scientific Justification in Software-Intensive Science
This talk revisits Symons and Horner’s “Software Intensive Science” (2014), in which we discussed the epistemological challenges posed by high conditionality in scientific software, and applies that work to the burgeoning role of Artificial Intelligence (AI) in scientific inquiry. We argued that the intractability of characterizing error distributions in software exceeding approximately 300 lines of code creates an epistemic distinction between non-software-intensive and software-intensive science. This talk contends that AI, with its inherently probabilistic nature, opaque “black-box” mechanisms, and emergent behaviors, further magnifies these uncertainties. We argue that traditional philosophical models of scientific justification, rooted in deterministic, deductive, and step-by-step articulation, are ill-suited to the complexities of AI-driven scientific processes. While AI may not fit neatly into established frameworks demanding certainty and fully characterized error distributions, its profound utility in handling vast datasets, accelerating discovery, modeling complex systems, and generating novel hypotheses remains undeniable. In this talk I argue for an epistemology of science that embraces probabilistic reasoning and empirical effectiveness, while simultaneously developing new frameworks for validation, trustworthiness, and domain specificity to navigate the inherent uncertainties introduced by AI in scientific discovery.