Dear colleagues,
We would like to invite you to submit
abstracts to our panel “Outlasting ‘disruption’: Empirical perspectives on
practical reasoning with AI”, scheduled for the EASST 2026 conference in
Kraków (Poland), 8–11 September. The panel welcomes empirical investigations that shed light on the lived difficulties of working with automated systems and
contribute to ongoing debates on the future of work, human–machine
collaboration, and the societal implications of AI-based technologies. Details
of the panel’s scope and aims are attached and included below.
We look forward to receiving your
submissions.
Kind regards,
Dipanjan Saha (University of Liverpool)
Jakub Mlynář (HES-SO Valais-Wallis)
===
Outlasting ‘disruption’: Empirical
perspectives on practical reasoning with AI (panel P136)
The inflationary narratives surrounding the
achievements of the ‘new’ Artificial Intelligence (AI), often propagated by the
corporations that develop these systems, hinder our understanding of the
technologies’ true potential. To get a better grasp of their rapid advancement
and its future consequences for various forms of work, we need to understand how
these technologies are made relevant in their specific contexts of use. While often presented
as seamless or autonomous tools, they are frequently messy, unpredictable,
and prone to generating outputs that users find ambiguous, problematic, or
simply incorrect. This creates a critical gap between the AI’s prescribed
operation and the practical, situated work required to make it useful.
Attending to the broad sphere of activities that go into making AI work provides
a more measured and empirically grounded basis for evaluating its
achievements and limitations in the world.
This panel invites empirical investigations
that allow us to uncover the lived difficulties of working with these automated
systems. We welcome contributions exploring, but not limited to, the following
questions:
– How do a wide range of users, from domain experts to laypersons, actually
manage and make sense of the results produced by these technologies in practice?
– What mundane methods and practical reasoning skills do people employ to
evaluate, trust, or challenge AI’s outputs?
– How can we empirically study the multiplicity of reasoning styles and ad hoc
procedures users adopt when evaluating AI-generated results?
– What does attending to these practical difficulties reveal about the actual,
rather than promised, capabilities of automation and the necessity of situated
human skill?