Call for papers
Submission deadline: 1 April 2026.
Recent advances in AI research have brought ever closer the prospect of automating decision-making in areas that, until recently, were considered an exclusive human prerogative.
The concept of judgment is invoked to support claims about which decisions, in areas such as law, medicine, the military, public administration, and beyond, should or should not be automated or delegated to AI.
Striking cases of “automated discrimination” in decision-making on welfare benefits and pre-trial detention have lent further weight to AI critics’ warnings against a shift “from judgment to calculation”. Many in the debate emphasise that making decisions, especially in sensitive areas, requires
rule application to be supplemented with human judgment and discretion. What judgment is capable of adding is precisely what is
not codifiable into rules, and thus what is unattainable by machines programmed merely to execute algorithms. At the other end of the spectrum, those more enthusiastic about AI often point out that the
unruly character of human judgment is a source of inconsistency, incoherence, and noise: human judges are inherently biased and prone to arbitrary behaviour. As non-algorithmic rules only partially succeed in keeping judges in check,
algorithmic automation is presented as an enhanced remedy against the “flaws in human judgment”.
The concepts of judgment and rules have long figured prominently in philosophy and jurisprudence. The current debate on Artificial Intelligence brings these two key concepts back to the centre of legal and philosophical discussion, raising fundamental and practical
questions about the relationship between judgment, the application of rules, and the use of computer algorithms. Notably, despite the seeming irreconcilability of the claims currently advanced about human judgment and automation,
a set of common assumptions appears to be shared by most participants in the debate on automated decision-making, namely:
• that a close connection exists between the concepts of judgment and rules;
• that a crucial distinction is to be drawn between algorithmic and non-algorithmic rules;
• that algorithmic rules are capable of exhaustively determining their application in advance, while non-algorithmic rules require decision-makers to do something more than merely apply rules;
• that judgment performs a supplementary function by filling the gap between (non-algorithmic) rules and their application.
This special issue aims to investigate how the concepts of judgment and rules play out within the philosophical and legal debate on automated decision-making. While these concepts appear to fuel a conceptual clash in a debate increasingly
polarised between AI critics and enthusiasts, regulators have erected as pillars of AI governance concepts that bear a family resemblance to human judgment, namely
meaningful human control and human oversight. The challenges emerging in the ongoing implementation of the EU AI Act signal the urgency of both a clearer conceptual framework and practical insights informing the practices of development, deployment,
and assessment of automated decision-making systems.
This special issue aims to contribute to both the theoretical debates and AI governance by bringing together original conceptual, historical, and empirical research on the interplay between judgment, rules, and automation.
■ Conceptual research
The special issue welcomes contributions aimed at addressing the following questions:
- How are the concepts of judgment and rules mobilised in the current debate on the automation of decision-making?
- How do the positions in the current AI debates relate to broader families of research on judgment, in the Aristotelian, Kantian, hermeneutical, and utilitarian traditions, and behavioural sciences?
- How are ideas of codifiability of moral knowledge, particularism and anti-theory in moral philosophy applicable to debates about AI and human judgment?
- How is the relationship between judgment and rules affected by the turn from rules to rule-following inaugurated by Wittgenstein and elaborated in twentieth-century philosophy?
■ Historical research
Conceptual-historical research has underscored how the study of past traditions can offer precious contributions to the elaboration of the concept of judgment and its relationship with those of rules and decision-making (e.g., Albert R. Jonsen and Stephen Toulmin, The Abuse of Casuistry: A History of Moral Reasoning, University of California Press, 1989). This special issue invites research on traditions that offer conceptual and practical elaborations of the relationship between judgment and rules
and that can contribute to the understanding of the current challenges posed by the automation of decision-making. In particular, the special issue welcomes research on
traditions that offer insights into the relationships between abstract rules and the particulars of each case; between rules and rule formulations; and between normative decision-making, training, and practice.
■ Empirical research
The special issue also welcomes case studies documenting how the relationship between judgment and rules manifests within the practices of development and deployment of automated decision-making systems, particularly in the context of AI systems classified as
high-risk under Annex III of the AI Act (e.g., systems intended to be used in decision-making on asylum, migration, welfare benefits, law enforcement, judicial application of the law, etc.).
- What are the conflicts and trade-offs between judgment and rules that emerge within the practices of design and use of automated systems aimed at supporting or replacing human decision-making?
- How are such conflicts addressed and resolved by the practitioners involved?
- How are the requirements of meaningful human control and human oversight operationalised in practice? What is their relationship to rules and judgment?