[PHILOS-L] CFP: AI as a colleague? Towards a social epistemology of Artificial Intelligence


Claus Beisbart

Oct 7, 2025, 4:51:21 PM
to PHIL...@listserv.liv.ac.uk

CFP: AI as a colleague? Towards a social epistemology of Artificial Intelligence

Date: 26.-27.03.2026

Location: University of Bern

Organizers: Claus Beisbart (University of Bern), Andreas Wolkenstein (LMU Munich)

Artificial intelligence (AI) is currently all the rage in most sciences and in medical practice. Deep neural networks and other AI systems are successfully used to diagnose diseases from medical data, make predictions, and support treatment planning. Most of these tasks are epistemic, i.e., AI is used to gain knowledge or to support inquiry and decision-making. As AI applications learn on the spot and become increasingly autonomous, it seems appropriate to consider them epistemic agents.

This opens up a new perspective on AI applications: they become partners with which humans can interact in much the same way as they interact with other humans. The idea that our inquiries are social – that they involve several humans and take place in specific “information ecosystems” – is crucial for social epistemology, a young and thriving subfield of epistemology. It is thus no surprise that scholars have begun to assess the use of AI in social-epistemological terms.

This workshop brings together philosophers from the philosophy of AI and social epistemology to explore the current state of thinking about AI as a social, epistemic agent. The aim of the workshop is to find out how AI applications and their work can be understood using concepts and insights from social epistemology. A special focus is on applications from medicine.

Questions to be addressed include, but are not limited to:
  • Are current AI applications really epistemic agents or rather mere tools?
  • Which epistemic roles can current AI tools take in inquiry?
  • Which insights and concepts from social epistemology can be transferred to the use of AI tools in medicine?
  • Can, and should, some AI tools be treated as epistemic authorities?
  • Which collaborative settings are recommendable for AI use in medicine?
  • What consequences does the social epistemology of AI tools have for the demands of explainability?

We invite scholars interested in contributing their research to this workshop to submit an extended abstract of 500-1000 words and a short abstract of about 150 words. Please send your abstracts as a PDF file suitable for anonymous peer review to andreas.w...@med.uni-muenchen.de by 01.12.2025 at the latest. Decisions will be communicated by 16.12.2025. Funding is available to cover travel and accommodation costs for presenters whose abstracts are accepted.

The workshop provides room for ample discussion of the presenters’ work: each presentation slot is 75 minutes long, including 30 minutes of discussion.

The workshop will start on 26.03.2026 in the morning and end on 27.03.2026 in the late afternoon. Invited speakers who have confirmed their participation include Thomas Grundmann, Rico Hauswald, Inkeri Koskinen, Federica Malfatti, and Andreas Wolkenstein.

For inquiries please contact the organizers (claus.b...@unibe.ch and/or andreas.w...@med.uni-muenchen.de).

--
Prof. Dr. Dr. Claus Beisbart
Universität Bern
Institut für Philosophie
Länggassstrasse 49a, CH-3012 Bern
Claus.B...@unibe.ch

Philos-L "The Liverpool List" is run by the Department of Philosophy, University of Liverpool https://www.liverpool.ac.uk/philosophy/philos-l/ Messages to the list are archived at http://listserv.liv.ac.uk/archives/philos-l.html. Recent posts can also be read in a Facebook group: https://www.facebook.com/PhilosL/ Follow the list on Twitter @PhilosL. Follow the Department of Philosophy @LiverpoolPhilos To sign off the list send a blank message to philos-l-unsub...@liverpool.ac.uk.
