[PHILOS-L] Deadline approaching! Workshop "Ignorance, Opacity, and Dependence: Epistemic Challenges for Scientists in the Age of AI"

Maria Martinez-Ordaz

Mar 17, 2026, 2:16:48 PM
to PHIL...@listserv.liv.ac.uk

CfA Workshop "Ignorance, Opacity, and Dependence: Epistemic Challenges for Scientists in the Age of AI"

When: June 1–2, 2026
Where: Institute of Philosophy of the Czech Academy of Sciences, Prague, Czech Republic

Deadline for submissions: March 25, 2026

Keynote Speakers
Selene Arfini (Pavia)
Juan Manuel Durán (Delft)
Otávio Bueno (Miami)

This workshop aims to explore how AI reshapes scientists' epistemic environment. What kinds of ignorance does AI mitigate, and what new forms does it generate? How should we conceptualize epistemic dependence on complex computational systems? Can scientific understanding survive—or even thrive—under conditions of opacity?

We welcome contributions from philosophy of science, epistemology, philosophy of AI, logic, and related disciplines addressing (but not limited to) the following questions:
  • Does reliance on AI systems undermine or transform scientific understanding?
  • What is the epistemic status of results produced by opaque or non-interpretable models?
  • How should we conceptualize epistemic dependence in AI-mediated research?
  • Are new norms of justification required in computationally intensive sciences?
  • How should we understand epistemic trust in the context of proprietary (usually corporate-controlled) AI systems?
  • How does AI-mediated research reshape the division of epistemic labor within scientific communities? 
  • Does the use of AI systems shift the epistemic agency from individual scientists to distributed (human-AI) collectives? 
  • Can ignorance generated by AI systems be epistemically productive?
  • How should responsibility and accountability be distributed in AI-supported research?
  • What role should transparency, interpretability, and explainability play in scientific practice?
  • Do AI-driven methods challenge traditional distinctions between data, models, and theories?

Submission Guidelines
Please send an abstract (150–200 words) prepared for blind review to:
ignoranceopac...@gmail.com

Contributed talks will be 30 minutes long, followed by 20 minutes of discussion.

Submission deadline: March 25, 2026
Notification of acceptance: March 30, 2026

--
María del Rosario Martínez-Ordaz

Philos-L "The Liverpool List" is run by the Department of Philosophy, University of Liverpool https://www.liverpool.ac.uk/philosophy/philos-l/ Messages to the list are archived at http://listserv.liv.ac.uk/archives/philos-l.html. Recent posts can also be read in a Facebook group: https://www.facebook.com/PhilosL/ Follow the list on Twitter @PhilosL. Follow the Department of Philosophy @LiverpoolPhilos To sign off the list send a blank message to philos-l-unsub...@liverpool.ac.uk.
