[The IPKat] Fordham 33 (Report 7): IP and Frontier Technologies

Annsley Merelle Ward

Apr 27, 2026

The penultimate panel of Fordham 33 of course majored on AI
The IPKat was back in Fordham, with the Spring skies and the chirping birds gracing the streets of the Upper West Side. As a proud and long-standing partner of the Fordham IP Conference, now in its 33rd year, the IPKat is thrilled to work with the students of Fordham on reports from this year's conference. This report comes from Tara Amine (former EUIPO Observatory trainee and current LL.M. student in Intellectual Property and Information Technology at Fordham Law School).

Over to Tara:    

"Friday’s session on IP and Frontier Technologies brought together one of the conference's most wide-ranging panels, chaired by Joshua Simmons of Kirkland & Ellis. Simmons set the agenda early: where is technology actually happening, how do laws differ across jurisdictions, and is there any meaningful harmonization in sight? With contributions from Robert Arcamona of OpenAI, Matthias Leistner of Ludwig Maximilian University Munich, Judge Pauline Newman of the Federal Circuit, Simon Chesterman of the National University of Singapore, Peter Yu of Texas A&M University School of Law, and, joining mid-session, Judge Pierre Leval of the Second Circuit, there was no shortage of perspectives.

The panel opened on a point that would have been surprising even a few years ago: for companies like OpenAI, copyright law has overtaken patent law as the primary regulatory concern. Navigating different copyright regimes across jurisdictions now occupies significant time and resources for AI companies, and once the debate moved from algorithms to training data, copyright was always going to come to the fore.

The panel offered a clean taxonomy of the three broad regulatory models currently in play. The US approach is free market and light-touch, partly ideological, partly a function of the difficulty of passing legislation at all given the state of American politics. The EU approach is rights-based and has attracted the label of over-regulatory, though the detailed account of how that framework is actually functioning suggested the label may be flattering it. The Chinese model, until recently focused primarily on state sovereignty and data localization, has become a more interesting source of consumer-facing regulation than many expected, including requirements that large language models be truthful and in alignment with state-defined values.

On the EU specifically, there is a fact that tends to get lost in the debate: relatively little large-scale AI training actually happens in Europe, with most activity being customization and fine-tuning of models trained elsewhere. This creates an immediate territoriality problem, as training activities abroad are assessed under the law of the place of training, not the place of deployment. The DSM Directive's opt-out-based training exception, which permits training unless rights holders have opted out, is not working in practice: there is no technical standard for the opt-out, no clarity on who is authorized to invoke it (one court held that even individual users could do so), and its efficacy is questionable even in sectors such as music where it might be expected to function.

Article 53 of the AI Act, which requires providers of general-purpose AI models placed on the EU market to put in place a policy to comply with EU copyright law and respect opt-outs, is broadly drafted and, as the panel argued, does little to resolve the underlying territoriality gap. The result is an increasing tendency in European courts to focus on the output rather than the training, and on cases of memorization and regurgitation, where the question of whether copies exist in the model itself becomes live. The Munich court in GEMA v. OpenAI found that there are copies in the model; the English High Court in Getty v. Stability AI found the opposite, based on three independent expert reports. The CJEU now has the question before it. On where EU policy is likely to land, the view expressed was that the Commission will ultimately focus on the output, and that what will likely emerge is some form of collective licensing or statutory remuneration scheme, possibly compulsory, on the basis that what AI models extract is fundamentally collective creative heritage rather than discrete infringements, and that compulsory licensing may, for the first time in copyright history, be the first-best rather than second-best solution.

The question of how much judges need to understand the technology they are adjudicating generated the session's liveliest exchange, and the panel did not agree. The argument for granular technological understanding is that AI systems are not one monolithic device but are built very differently depending on their purpose, and legal analysis that treats them as interchangeable will produce poor outcomes. The counterargument, put with equal force, is that policy makers generally want technology-neutral principles, and that judges, whatever they are told, tend to reach for their own metaphors.

Judge Leval, who joined the panel mid-session having initially sat in the audience, was sceptical that technological understanding is either realistic or particularly necessary for judges. His view, grounded in his own experience with Authors Guild v. Google, is that fair use analysis turns on the nature and effect of the use, not its mechanics. On the recent AI cases, he expressed reservations about the reasoning in the Anthropic decision, arguing that the distinction drawn between training on pirated works and the unauthorized digitization in Google Books, which also involved copying without a license but from legitimate libraries, should not bear the legal weight placed on it. His broader concern was that courts cannot make rulings that effectively render AI illegal, since AI exists by copying, while equally acknowledging that some form of legislative intervention is now urgently needed. Without it, the economics of publishing factual works collapse: the day after publication, an AI can produce a cheaper substitute containing every fact but none of the protected expression, and no publisher will be able to justify the investment.

Two themes cut across the whole session and were flagged as areas deserving more academic attention than they currently receive. The first is conflict of laws and extraterritoriality: the territoriality assumption baked into international copyright conventions is being fundamentally challenged by technology that trains in one jurisdiction and deploys globally, and this will be the defining issue at conferences like this one in the years ahead. The second, raised from the audience, was the antitrust dimension: if compliance with AI regulation becomes sufficiently expensive, only the largest incumbents will be able to afford it, and the competitive market downstream, which depends on robust IP rights as its foundation, may never materialize."
