The IPKat has received and is pleased to host this guest contribution from Katfriend Dinusha Mendis (Centre for Intellectual Property Policy & Management (CIPPM), Bournemouth University, UK), who has authored this four-part blog series together with Rossana Ducato (University of Aberdeen, UK) and Tatsuhiro Ueno (Waseda University, Japan). In Part 4 of this series, the authors provide an insight into the impact of synthetic media and the response from the perspective of policy and civil society representatives. Part 4 concludes the series with thoughts for the future whilst outlining the way forward. The findings are drawn from two stakeholder roundtables hosted in Japan and the UK and funded by the Daiwa Anglo-Japanese Foundation. For an overview of the use, impact and adoption of deepfake technology and how it is being tackled in the UK and EU, see Part 1 of this series. Part 2 provided a comparative view of the law and policy perspectives from Japan, South Korea, and China, whilst Part 3 provided an insight into the impact of synthetic media in the creative and technology sectors, in particular the film and music industries, before considering responses from AI developers and online platforms.
Deepfake technology and the law: Perspectives from the policy and civil society sectors (Part 4)
by Dinusha Mendis, Rossana Ducato, and Tatsuhiro Ueno
Perspectives from the policy and civil society sectors
Jones pointed to the Consultation on AI and Copyright [IPKat here], which the UKIPO published in December 2024, seeking views on AI and copyright, including 'digital replicas'. As well as recognising the concerns of creators, she spoke about the importance of considering the evidence of less prominent creators – relevant in the case of digital replicas and AI more broadly.
At the time of writing this blogpost, the UK is awaiting the government response to the Consultation, which closed in February 2025 having received over 11,500 responses. With reference to this, Jones reiterated the importance of the UKIPO prioritising evidence-based policymaking. Given the complexity of the issue, she anticipated that the government response to the Consultation would take several months.
In relation to Japan, Kurokawa outlined the efforts of the Japanese government (Intellectual Property Headquarters), which initiated a report in 2023 to address Generative AI and IP, including voice protection. The latter was the trigger that led to the creation of a study group on "Protection of Publicity Value" under the Unfair Competition Prevention Act, organised by the Intellectual Property Policy Office. The aim of the study group was to review existing provisions (e.g., inducing confusion, abusing prominent labels, inducing misidentification, damaging reputation) and their application to deepfakes. At the time the roundtable was hosted, Kurokawa stated that the outcome of the study group would be available by around March 2025 (however, earlier this year, as a result of the review, Japan decided not to progress with revising the Unfair Competition Prevention Act). Looking ahead, Kurokawa explained that Japan is considering labelling requirements for AI-generated content, with an interim report expected soon to guide Japan's future AI policies.
Mariano Delli Santi drew historical parallels with encryption, cybercrime and privacy, and cautioned against any knee-jerk reaction. He pointed to past attempts by the UK Government to control technology through broad prohibitions, which often failed to deter bad actors and inadvertently hindered legitimate development. As such, Delli Santi cautioned against criminalising all forms of synthetic media, pointing to their legitimate purposes within society, such as satirical deepfakes, and emphasising the need to protect free speech. Additionally, he highlighted the danger of automated content removal, citing YouTube as an example: the takedown of YouTube videos containing copyrighted music ultimately led to the takedown of protest videos. Such removal, he argued, may be used to censor users and free speech in general. With this in mind, Delli Santi stressed the importance of human rights and the rights of individuals when considering policy initiatives and legislative reforms relating to deepfakes.
Moving forward
Anna Hovsepyan (PhD student at King’s College London) provided a roadmap for the future and highlighted the unique impact of deepfakes due to their volume, velocity, sophistication, accessibility and the lack of skills required to create them, all of which distinguish them from traditional image manipulation. She identified several pressing challenges that, in her opinion, need regulatory intervention, such as the issue of personality rights after death: the use of “grief bots” – digital clones of deceased individuals used to help those coping with their loss – poses fundamental questions about personal identifiers after death and how these must be addressed.
[Image caption: "Deep and certainly not fake sleep"]
That is why solutions such as an outright ban on deepfakes would be neither desirable nor workable. Equally, omnibus ‘deepfake acts’ would be “a waste of time”, as they risk resulting in formulations that are either too broad (and unenforceable) or too limited. A sectoral approach, tackling the different dimensions of the deepfake life cycle, could offer a more tailored and promising way forward. She also suggested exploring constitutional protection for personal identifiers and stressed the importance of provenance, detection tools (which need constant updating) and public awareness.
Her final remarks about the need for an interdisciplinary approach to address the challenges of deepfakes echo the sentiment emerging from all the participants in the roundtable: the law can play an important role, but it is only part of the picture. We should consider how technological ‘remedies’ (e.g. labelling, detection, removal) or enablers (e.g. creative applications) interact with social norms (e.g. awareness and critical appraisal of content, fanbases and relationships with artists), law (e.g. effects on privacy, freedom of expression, personality rights) and market forces (the power of platforms, providers of AI systems used to generate deepfakes, and rightsholders).