Conference on Computer Intensive Methods
Recent debates on AI have identified trust as a central issue. However, it is often unclear what is meant by that: How do we understand trust in AI models? What is the underlying concept of trust? The philosophy of trust has developed fundamentally different notions of trust, which can describe very different relationships to a person, an institution, or even artifacts. Although terminology sometimes tends to obscure categorical differences, two major families of theory can be distinguished here: reliabilist approaches, which are grounded in epistemic reasons, and trust in the narrower sense, which rests on normative grounds. In the philosophy of the computational sciences and their models, however, this difference has hardly been noted.
Therefore, this year’s SAS conference focuses on this alternative: relying on or trusting AI and simulation models?
At first glance, it seems natural to understand trust in purely epistemic terms. Reliabilism offers such an epistemic interpretation of trust, and it seems ideally suited for application to scientific-technical contexts: trust is then a matter of the reliability of methods and techniques. In AI contexts, reliability is often quantified by the system itself (for instance, as a confidence score attached to a classification). Historical-theoretical approaches also argue in this direction. Naomi Oreskes, for example, appears to offer reliabilist arguments in answer to the question of why science should be trusted, citing evidence and “reliable knowledge” as reasons.
However, the question remains how evidence can be established and who can judge the reliability of knowledge. It seems that a great deal of expertise is required to – reliably?! – determine the reliability of complex scientific methods and technologies. While an individual can easily judge the reliability of a toaster, for example, the same cannot be said for AI and simulation models or other complex technologies, since these do not necessarily provide immediate impressions of their performance.
In this context, normative theories of trust have argued that evidence will only be accepted if its source (in this case: the experts!) is trusted. On this line of argument, reliability seems to presuppose trustworthiness. While this argument is compelling, it raises the question of how we can evaluate the trustworthiness of experts or other sources of evidence. Reliability and trustworthiness thus appear to stand in an entangled relationship that requires further examination.
For the SAS23 conference, we invite contributions from various disciplines that address the reliability and trustworthiness of computing methods, broadly understood. We are especially interested in contributions dealing with:
- the (in)compatibility and entanglement of trustworthiness and reliability
- the differences between implicit and explicit trust relations and their operationalizations
- distributed trust(worthiness) and entanglement effects of trust in multi-agent networks
- the subtle impacts of technology on trust and expertise
- the question whether technological processes or artifacts can be trusted or only relied on
- an elaboration of trust-establishing practices
- the dispensability of trust through habituation to new technologies
- the moral/societal/political dimensions of trustworthy computing methods
- the relation between non-epistemic values and reliability/trustworthiness of computing methods
Accepted papers can be published in the upcoming SAS23 edition with Springer.
The conference will take place from November 30th to December 1st at HLRS in Stuttgart. The conference fee is 100€.
We ask for an abstract of no more than 500 words, to be submitted to phil@hlrs.de no later than July 1st, 2023. Decisions will be announced by July 15th.