[The IPKat] [Guest post] Deepfake technology and the law: Perspectives from Japan, South Korea, and China (Part 2)

Eleonora Rosati

Dec 26, 2025

The IPKat has received and is pleased to host this guest contribution from Katfriend Dinusha Mendis (Centre for Intellectual Property Policy & Management (CIPPM), Bournemouth University, UK) who has co-authored this four-part blog series with Rossana Ducato (University of Aberdeen, UK) and Tatsuhiro Ueno (Waseda University, Japan). Part 2 of this four-part series explores the challenges and opportunities presented by synthetic media and presents policy and legislative responses from a comparative perspective, looking at notable models such as Japan, South Korea and China. The findings are drawn from two stakeholder roundtables hosted in Japan and UK and funded by the Daiwa Anglo-Japanese Foundation. For an overview of the use, impact and adoption of deepfake technology and how it is being tackled in UK and EU, see Part 1 of this series.

Deepfake Technology and the Law: Perspectives from Japan, South Korea, and China (Part 2)

by Dinusha Mendis, Rossana Ducato, and Tatsuhiro Ueno

Japan

At the moment, Japanese law does not contain any provisions specifically dealing with deepfake technology. The law already in place can tackle some issues, but only to a certain extent. For example, as discussed by Kunifumi Saito, the criminal offence of defamation can in principle be applied to deepfake pornography. However, the offence requires an impact on the social reputation of the victim, which might not be affected if the video is clearly fake. Similarly, Kaori Ishii pointed out that legislation such as the Act on the Protection of Personal Information, and the Notice regarding the use of GenAI services issued by the Personal Information Protection Commission in 2023, does not directly address deepfakes, and the remit of data protection law might be too limited: provisions such as the prohibition on using personal information in a way that can lead to an unlawful or unjust act (Art. 19) and the prohibition on obtaining personal information by deception or other wrongful means (Art. 20) are addressed to businesses, not to the general public.

Masaru Terui, who is representing Japanese billionaire and influencer Yusaku Maezawa in litigation against Facebook Japan and Meta for allowing the unauthorised use of his pictures in investment-scam ads (claiming a symbolic 1 yen in damages), pointed to the lack of clarity in the current platform liability regulation and stressed the challenges faced by victims of deepfakes, who have to bear a high burden of proof, identify all potential and recurring illegal ads, and eventually claim damages, which are usually awarded up to a maximum of 3 million yen (approximately £14,000). This low ceiling in practice discourages people from taking legal action, leading to a “loss of trust in the legal system”.

Participants during the stakeholder roundtables

In Japan, the most powerful tool to tackle deepfakes might actually come from personality rights, including portrait rights and the right of publicity. Although the latter is not formally codified, it has been developed by the courts and was finally recognised by the Supreme Court in the well-known 2012 Pink Lady case. The litigation centred on the use of a monochrome photograph of a famous duo of female performers in a magazine article titled “Pink Lady de diet”. The claimants, the singing duo, sued the weekly magazine publisher for infringement of their right of publicity, claiming that the use of their names and likenesses was unjust and served merely to profit from the appeal they would bring to consumers. While the Supreme Court expressly recognised the existence of the right of publicity under Japanese law, it also set its boundaries. In the case at hand, it ruled in favour of the defendant, the weekly magazine publisher, finding that the right of publicity is not infringed by a commercial use made for the purpose of news reporting or broadcasts. The court “stipulated that unauthorised use of an individual’s likeness, even if it possesses customer appeal, could be tolerated as a legitimate form of expression in certain cases”, going as far as to find infringement only where the use was predominantly aimed at exploiting customer appeal.

Although the case was decided in favour of the defendant, it established the scope of publicity rights in Japan. In particular, it limited the scope of the right while clearly defining when infringement is likely to occur in the context of media and commercial content, “emphasizing the need for a distinct and direct commercial exploitation for it to be considered an infringement.”

However, as Tatsuhiro Ueno, Satoshi Narihara and Kunifumi Saito discussed, whilst this recognition by the Supreme Court helps individuals protect their own likeness and can apply to infringements perpetrated via deepfakes, one of the main gaps remains post-mortem protection: when the likeness of a deceased individual is used without permission. The difficulty lies in the non-transferability of these rights, which means that once a person has died, their personality rights cannot be invoked by another person, family or otherwise.

South Korea

In contrast to the UK and Japan, South Korea has enacted statutory provisions protecting an individual’s right of publicity. In response to the rise in deepfakes, South Korea has been quick to address legal gaps through amendments to existing legislation as well as the enactment of new legislation. In 2021, the Unfair Competition Prevention Act was amended to protect celebrity names, portraits and likenesses. Additionally, the Public Offices Election Act, as amended in 2023, prohibits the use of deepfakes of any individual in election campaigns.

More recently, in 2024, the Sexual Violence Punishment Act, and later the Youth Protection Act in 2025, were amended to further protect individuals against the harm caused by deepfake pornography by criminalising the production, distribution, possession and viewing of deepfake pornography, including content involving minors, even without intent to distribute. Like the EU, South Korea has also enacted its own AI legislation, the Basic AI Act. However, as illustrated by Yeyoung Chang, it is far narrower than the EU's AI Act, as South Korea’s Act is primarily “built on a framework of promoting trust rather than Europe’s detailed product regulation designed to ban risks”.

Most recently, South Korea amended the Personal Information Protection Act in 2025, following earlier amendments in 2023. The 2025 amendment makes it mandatory for overseas businesses with local subsidiaries to designate them as domestic agents, thereby strengthening their compliance with South Korean law.

Moving forward, proposals for a Portrait Property Right in South Korea indicate a move towards a more comprehensive personality right, one that better reflects the modern era and better serves to protect individuals online, whilst aiming to prevent the unauthorised exploitation of individuals’ portraits and aligning the right with constitutional protections of personal image and privacy.

China

In China, an individual’s likeness and voice are protected as personality rights under the Civil Code (in force since 2021). This protection is guaranteed to every natural person, celebrity or not, and some post-mortem protection is also offered to the relatives of the deceased. In case of violation of a personality right, the Chinese system allows claims for both patrimonial and non-patrimonial damages. Despite this broad protection, Wanli Cai warned about the challenges of substantiating infringement cases with evidence, e.g. proving that an individual’s voice or likeness has actually been used by the AI system.

Kat deep(fake?) in thought

Data protection law might offer another potential route. Interestingly, Cai noted that it extends post mortem and is particularly convenient in terms of the burden of proof, as it falls on the defendant to provide evidence that they have not violated the Personal Information Protection Law (PIPL).

The protection afforded by the PIPL and by personality rights in the context of deepfakes has already been put to the test in two recent cases. Cai reported that the Internet Court of Beijing found a violation of personality rights in an AI-generated voice (as the timbre of the voice actor was clearly recognisable). In a second case, concerning face-swapping, the same court excluded a violation of the image right because the generated portrait was not “recognisable” by the public; nevertheless, it found an infringement of the PIPL due to the unlawful processing of video footage. Interestingly, Professor Cai concluded by reporting that between 2023 and 2024 the Internet Court of Beijing accepted 113 cases alleging violations of the PIPL. There is no doubt that this volume of litigation will help provide clearer guidelines in this area.

In the next blogpost, we will consider the impact of deepfake technology on the creative and technology sectors, reporting the views of representatives from the film and music industries, on the one hand, and from AI developers and online platforms, on the other.

***
Do you want to reuse the IPKat content? Please refer to our 'Policies' section. If you have any queries or requests for permission, please get in touch with the IPKat team.