In recognition of the material, physical, and psychological harms arising from the growing use of automated monitoring and decision-making systems for labor control, jurisdictions around the world are considering new digital-rights protections for workers. Unsurprisingly, legislatures frequently turn to the European Union (EU) for inspiration. The EU, through the passage of the General Data Protection Regulation in 2016, the Artificial Intelligence Act in 2024, and the Platform Work Directive in 2024, has positioned itself as the leader in digital rights, and, in particular, in providing affirmative digital rights for workers whose labor is mediated by “a platform.” However, little is known about the efficacy of these laws.
This talk begins to fill this knowledge gap. Through close analyses of the laws and of successful strategic litigation brought by platform workers under them, I argue that the current EU framework contains two significant shortcomings. First, the laws primarily position workers as liberal, autonomous subjects, and in doing so, they make a category error: workers, unlike consumers, are subordinated by law and doctrine to the firms for which they labor. As a result, the liberal rights that these laws privilege—such as transparency and consent—are insufficient to mitigate the material harms produced through automated labor management. Second, I argue that by leaning primarily on transparency principles to detect, prevent, and stop violations of labor and employment law, EU data laws do not account for the ways in which workplace algorithmic management systems often create new harms that existing laws of work do not address. These harms, which fundamentally disrupt norms about worker pay, evaluation, and termination, arise from the relational logic of data-processing systems—that is, the way these systems evaluate workers by dynamically comparing them to one another, rather than by assessing each worker against the fulfillment of assigned duties.
Based on these analyses, I propose that future data laws should be modeled on older approaches to workplace regulation: rather than merely seeking to elucidate or assess problematic data processes, they should aim to restrict those processes. The normative north star of these laws should be proscribing the digital practices that cause the harms, not merely shining a light on their existence.
Cami Koepke, PhD
Philosophy Program Analyst and Undergraduate Advisor | UC San Diego
Philosophy Department Room 0478 | Ridgewalk Academic Complex