[Guest post] Deepfake technology and the law: Perspectives from the UK and the EU (Part 1)


The IPKat has received and is pleased to host this 4-part guest contribution from Katfriends Dinusha Mendis (Centre for Intellectual Property Policy & Management (CIPPM), Bournemouth University, UK), Rossana Ducato (University of Aberdeen, UK), and Tatsuhiro Ueno (Waseda University, Japan). Part 1 sheds light on the use, impact and adoption of deepfakes and considers how they are being addressed in the UK and the EU. Part 2 focuses on the Japanese, Korean and Chinese systems. Part 3 presents perspectives from the creative and technological sectors. Part 4 concludes with views from policy and civil society representatives whilst outlining some thoughts for the future. The findings are drawn from two stakeholder roundtables hosted in Japan and the UK as part of a collaborative project funded by the Daiwa Anglo-Japanese Foundation.

Deepfake technology and the law: UK and EU perspectives (Part 1)

by Dinusha Mendis, Rossana Ducato, and Tatsuhiro Ueno

[Image: participants during the stakeholder roundtables]

AI-enabled deepfakes have saturated our online experience, to the point that when we look at a piece of news or a video online, instead of asking whether it is true, we increasingly assume that it must be fake. The use of non-authorised deepfakes can bear serious consequences both for society (in terms of collective harm, e.g. disinformation) and for the person represented in the deepfake (as is the case with fabricated intimate videos). At the same time, not all uses are necessarily problematic: the same technology can enable creative applications for teaching, research, artistic, and satirical purposes.

How, then, should we regulate a phenomenon that serves such contrasting purposes? In responding to this question, this blog post will first consider the technical dimension, before outlining the UK and EU legal responses.

Deepfake technology and its impact

A deepfake is essentially an artificially generated, synthetic form of media that replicates and manipulates the likeness of an individual. This can take the form of audio, video, writing style, or all combined. Put simply, and as explained by Cagatay Yucel, the underlying technology of deepfakes is built upon a specific neural network, further enhanced by the addition of what are known as Generative Adversarial Networks (GANs). GANs combine two networks: a generator and a discriminator. The generator produces synthetic imagery while the discriminator attempts to tell it apart from real data; each network effectively trains the other, leading to increasingly realistic imagery which in turn makes it harder for users to determine whether what they are seeing is real or fabricated.
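For readers curious about the mechanics, the following is a minimal, illustrative PyTorch sketch of the adversarial training loop just described. All architectures, dimensions and hyperparameters here are placeholder assumptions chosen for brevity; real deepfake systems rely on far larger convolutional (or, increasingly, diffusion-based) models trained on large face datasets.

```python
# Minimal GAN sketch (illustrative only; not a production deepfake pipeline).
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. a flattened 28x28 image (assumption)

# Generator: maps random noise to a synthetic sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),  # outputs in [-1, 1]
)

# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch: torch.Tensor) -> None:
    n = real_batch.size(0)
    real_labels = torch.ones(n, 1)
    fake_labels = torch.zeros(n, 1)

    # 1) Train the discriminator to separate real from generated samples.
    fake_batch = generator(torch.randn(n, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_batch), real_labels)
              + loss_fn(discriminator(fake_batch), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator into scoring
    #    its output as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(n, latent_dim))),
                     real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

In use, `train_step` would be called once per batch of real (normalised) images; over many iterations the two losses push each network to improve against the other, which is precisely why the resulting output becomes so hard to distinguish from genuine footage.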

The technology itself is not entirely new. However, as Shigeo Morishima noted, Artificial Intelligence (AI) developments in the last five years have surpassed those of the preceding 20 years. As he explained, early developments in this area (around 2003) included real-time facial rendering, instantaneously integrating visitors' faces into movies using 3D scanning and lip-syncing. Advances in Graphics Processing Unit (GPU) resources, accelerated by a 'deep learning boom', led to a great leap in this technology around 2015. At present, modern capabilities, enhanced by apps, can perform face swaps into movies, with lighting adjustments, in just two minutes.

By their very nature, synthetic media present a dual character, offering both positives and negatives. The technology has been seen as an enormous benefit for individuals with disabilities. As Jennifer Williams explained, it has been used to aid those who have lost their ability to speak by facilitating voice reconstruction (e.g. Val Kilmer's voice after throat cancer), allowing users to talk more comfortably where they may not previously have been able to. In the entertainment industry, the company Metaphysic.ai, which was initially set up to create deepfakes of Tom Cruise for short social media videos, is now involved in producing major motion pictures.

Unfortunately, this is only part of the picture. Whilst voice reconstruction, or voice cloning, for individuals with disabilities is a positive, the very same technology has also been used to commit identity theft, including defrauding banks and individuals of large sums of money by replicating the victim's voice with near-complete accuracy. Audio deepfakes can be especially problematic because they can easily be created from a short soundbite of a person. Furthermore, when combined with visual deepfakes, the technology has been utilised for misinformation and political manipulation, as evidenced in the aftermath of the Myanmar earthquake in late March 2025 and during the ongoing Ukraine-Russia conflict. There can also be a serious impact on the rule of law and access to justice. For instance, there is growing concern that such technology may be used to falsify evidence and, as Anna Hovsepyan illustrates, could be a "genuine issue" for family courts, given the potential harm caused by a party misusing the technology against a former partner in custody battles.

However, probably the most pernicious application concerns non-consensual intimate deepfakes. As Cagatay Yucel explains, a "staggering 93% of generated deepfakes are reportedly used for pornography, abuse material and non-consensual content", a practice that is particularly prevalent against women. Yeyoung Chang expanded on this concern in relation to its impact in South Korea. She explained that the women targeted range from high-profile celebrities, such as female K-Pop stars, to unsuspecting female students in schools. Disturbingly, such deepfakes have sometimes been documented to have been created by minors. These instances have led to public outrage in Korea, prompting the national government to introduce new legislation and policy to reduce the number of victims going forward, given the detrimental impact that such content has on the social and mental well-being of those affected.

UK

In the UK, steps have been taken to actively tackle the sharing of sexually explicit images or videos (which can include deepfakes), with amendments introduced to the Sexual Offences Act 2003 via the Online Safety Act 2023. In January 2025, the Government also announced its intention to introduce a new offence criminalising the creation of sexually explicit deepfakes, carrying up to a two-year prison sentence. Apart from this legislative novelty, the UK framework has not introduced any specific provision to regulate deepfakes, for instance when it comes to the exploitation of personal attributes that this technology makes possible. The UK does not have a specific statutory right of publicity but, as Mendis and Ducato elaborated, relies on 'a puzzle of different routes' to counter the impact of deepfake technology, and does not benefit from 'personality rights' as is the case in some other European countries.

This piecemeal approach includes the laws of defamation, malicious falsehood, passing off, misuse of private information, data protection law, various IP rights, and contract law. Some of these laws provide some recourse. For instance, in 2025, a lawyer was probed by the Solicitors Regulation Authority for sharing defamatory material about third parties; the lawyer ultimately settled out of court and paid substantial damages to the victim of a 'political deepfake'. Similarly, data protection law can protect individuals whose data have been unlawfully used to create a deepfake or whose likeness has been represented without an adequate legal ground. Nevertheless, even this framework presents challenges, as it might not cover all potential situations of harm created by deepfakes.

The same is true of copyright. Copyright protects, for instance, sound recordings. However, if the tone or intonation of a person's voice is used to create a voice clone without copying any recording, there will be no recourse under copyright, as no 'work' has been infringed. Similarly, performers' rights will not come into play if the digital replica has not been created from the recording of a performance.

Finally, considering the public debate on the "boundaries" of copyright and ethical uses reignited by the "ghiblification" trend, it is worth recalling that style is not protected by copyright, including under UK law (and for good reasons).

EU

In addition to the general frameworks (such as the GDPR and the Unfair Commercial Practices Directive), there are two recent pieces of EU legislation that tackle deepfakes expressly.

First in chronological order, the Digital Services Act (DSA) imposes specific duties on providers of online services to act against illegal content. Queries have been raised about the notion of illegal content and the normative power delegated to platforms, which might create a risk of over-blocking or over-removing licit material.

There are at least two other obligations in the DSA that are relevant to deepfakes: 1) online platforms have a duty to design their online interfaces in a way that does not deceive, manipulate, or distort the decision-making process of users (Art. 25), which can also happen through synthetic media; 2) more directly, very large online platforms shall ensure that any information, "whether it constitutes a generated or manipulated image, audio or video that appreciably resembles existing persons, objects, places or other entities or events and falsely appears to a person to be authentic or truthful", is clearly distinguishable, and shall provide means allowing recipients to label such content accordingly. This latter obligation, however, only applies to platforms with more than 45 million users per month in the EU.


Second, the EU AI Act, adopted in June 2024, has formally introduced the first legal definition of a deepfake, along the lines of the DSA formulation: "AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful" (Art. 3.60). This definition has the merit of setting some boundaries to the phenomenon (at least for the purposes of the law). However, it has raised interpretative issues among scholars. For instance, the definition expressly refers to content that resembles 'existing persons'. Whether this covers only persons who are still alive, or also deceased or even fictional ones, is something that remains to be clarified.

In terms of specific obligations, the Act explicitly targets deepfakes by mandating transparency obligations on providers of AI systems generating synthetic audio, image, video or text content (Art. 50.2), and on deployers of AI systems that generate or manipulate such content (Art. 50.4).

Even if not expressly addressed to deepfakes, the AI Act prohibits certain practices that can be perpetrated via deepfakes, for instance manipulative or deceptive techniques able to distort a person's decision-making process and cause significant harm (Art. 5.1.a). Whilst the EU should be commended for addressing such issues, the prohibition of manipulative or deceptive practices outlined in Article 5 of the Act is quite narrowly designed. This means there is potential for deepfakes to cause harm in situations not expressly covered by the Act, such as deepfake extortion schemes. Moreover, as Ducato and Mendis also cautioned, AI systems generating deepfakes are not automatically classified as high-risk under the EU AI Act, meaning that some harmful applications might escape the most stringent rules devised for AI systems.

In Part 2, we will explore developments in the Asian systems, notably Japan, Korea, and China.

***
Do you want to reuse the IPKat content? Please refer to our 'Policies' section. If you have any queries or requests for permission, please get in touch with the IPKat team.