The IPKat has received and is pleased to host this guest contribution from Katfriends Dinusha Mendis (Centre for Intellectual Property Policy & Management (CIPPM), Bournemouth University, UK), Rossana Ducato (University of Aberdeen, UK), and Tatsuhiro Ueno (Waseda University, Japan). In Part 3 of this four-part series, the authors provide an insight into the impact of synthetic media in the creative and technology sectors. In particular, Part 3 focuses on the film and music industries before providing perspectives from AI developers and online platforms. The findings are drawn from two stakeholder roundtables hosted in Japan and the UK and funded by the Daiwa Anglo-Japanese Foundation. For an overview of the use, impact and adoption of deepfake technology and how it is being tackled in the UK and EU, see Part 1 of this series. Part 2 provided a comparative view of the law and policy perspectives from Japan, South Korea, and China.
Deepfake technology and the law: Perspectives from the creative and technology sectors (Part 3)
by Dinusha Mendis, Rossana Ducato, and Tatsuhiro Ueno
Perspectives from the creative industries
The creative sector was represented by
Benjamin Field (Deep Fusion Films),
Liam Budd (Equity UK) and
Victoriano Darias (International Federation of the Phonographic Industry, IFPI). All three spoke of the benefits brought about by synthetic media but also highlighted the costs and challenges presented by deepfake technology.
Film industry
Benjamin Field spoke about how the film industry has gradually adapted to synthetic media in the film-making process. This has ranged from de-aging actors and actresses, as seen in the film
‘Here’ (2024) by
Metaphysic.ai, to bringing back to life notable icons and famous individuals such as
Ian Holm, who passed away in 2020 and starred in the film Alien (1979). Similarly, Field’s company Deep Fusion Films brought back to life Gerry Anderson for the film
‘Gerry Anderson: A Life Uncharted’ (2022).
[Image: Participants during the stakeholder roundtables]
The common thread between the two is that both Metaphysic.ai and
Deep Fusion Films are market leaders in the ethical use of AI within the creative process. For both films, consent was sought from the respective estates of the deceased individuals.
That is why Field emphasised the need to build an ethics-first framework for utilising synthetic media in films, highlighting transparency and responsible use as crucial to both commercial success and audience trust. Additionally, Field and his company advocate for clear disclaimers at the beginning of films and TV shows that incorporate such technology, to help build consumer confidence.
Performers
Liam Budd from Equity UK, a trade union that represents performers, highlighted the growing concerns of performers who fear that their work may be exploited to train current or future AI systems. For example, during the roundtable he described how Stephen Fry’s recordings of the seven Harry Potter novels were used, as came to light in 2023, to train and produce a deepfake audio clone of the British presenter for use at the Hay Festival 2025, all without Fry’s knowledge.
Similarly,
Megumi Morisaki (President of the Association of Arts Workers Japan) stressed how digitalisation and deepfakes can affect a working environment that is already strained by freelance contracts and the non-application of the minimum wage. She also presented a recent survey the Association conducted in Japan, which found that 93.8% of performers feared rights violations by AI, 91.9% worried about unauthorised use of their face, voice or works, and 2,099 respondents reported infringements of their IP and moral rights. She further noted that another survey showed that only 2.6% of music and video performers working on the Internet have enforceable written contracts, which leaves many rights unprotected.
Furthermore, there have been campaigns by performers to stop or prevent such practices from being adopted, most notably
“Stop AI Stealing the Show” by Equity UK. Through this campaign, Equity UK promoted collective bargaining agreements, similar to those seen in the recent Hollywood strikes. These agreements advocate time-limited licensing, to prevent the future exploitation of a performer’s likeness without pay; mandatory informed consent before a performer’s likeness can be re-used; and fair compensation whenever their likeness is used.
Music industry
Senior Global Legal Policy Advisor Victoriano Darias of IFPI addressed the use of deepfake technology within the music industry. Darias pointed to some positives of deepfakes and AI, such as allowing artists to expand their audiences by translating and publishing their songs in different languages, removing the language barrier that often distances audiences in other countries. Additionally, Darias explained that such technology may also be used to help disabled artists like Randy Travis create new music, as well as to enable bands with deceased members to publish new songs.
[Image: Who is the real one and who is the deepfake?]
Similarly,
Aritomo Nakamachi (Amuse Inc.) presented several cases where deep learning technology has been employed to support creative uses, for example using the late
Hibari Misora’s voice and
Osamu Tezuka’s manga to generate new songs and new works respectively. However, he warned that the advancement and availability of this technology has made it easy to infringe artists’ rights, generating profits for the platforms but not for the rightsholders. In particular, Nakamachi pointed out that AI-generated cover songs on platforms like TikTok and YouTube provide revenue to those creating them, thereby bypassing the actual artists. As Nakamachi explained, this practice especially impacts voice actors.
Additionally, the representatives from the music industry highlighted the differing approaches taken in various jurisdictions to protecting voice, image, name, identity and likeness, and the challenges relating to the transferability of such rights. In relation to personality rights, the speakers concluded by calling for greater clarity and stronger enforcement, with carefully scoped legislation to protect the personality of artists, especially in jurisdictions where such rights are lacking.
Perspectives from the technology sector
From the point of view of the technology sector,
Jennifer Williams (University of Southampton / The Alan Turing Institute) gave an insight into the work that she and her team have conducted on detecting deepfakes, a task that is rapidly becoming harder. Williams referred to the UK Home Office
‘Deepfake Detection Challenge’ hosted in July 2024, the objective of which was to develop a tool to assist humans in deepfake detection.
The Southampton Audio Forensic Evaluator (
SAFE and Sound), developed by Williams and her team at the University of Southampton, examined six different aspects of audio: human perception, vocal tract, emotive speech, background noise, reverberation and high-frequency anomalies outside of human hearing. At the event, SAFE achieved a remarkably high score, demonstrating that, with only a single second of audio from a video clip, it could detect whether the audio was artificial or human. It could also localise exactly where within an audio clip the deepfake occurred, even when spliced into real audio. As Williams further discussed, the team behind SAFE developed the system to be accessible to those without technical expertise, delivering its analysis of a sample in 1–3 seconds in an easily digestible format. For example, where audio has been manipulated with deepfake technology, SAFE can identify it through inconsistencies such as high-frequency anomalies.
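To make the high-frequency cue more concrete, the sketch below measures the proportion of spectral energy above a cutoff frequency in a recording. It is purely illustrative: SAFE’s actual methods are not public, and the 16 kHz cutoff, file name and any decision threshold are assumptions for demonstration only.

```python
# Toy illustration of ONE possible cue (high-frequency anomalies); it is
# not SAFE's method. Assumes a mono or stereo WAV sampled at >= 32 kHz so
# that content above the 16 kHz cutoff is representable at all.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft

def high_frequency_energy_ratio(path: str, cutoff_hz: float = 16_000.0) -> float:
    """Fraction of total spectral energy above `cutoff_hz`."""
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:                     # mix stereo down to mono
        samples = samples.mean(axis=1)
    samples = samples.astype(np.float64)
    freqs, _, spec = stft(samples, fs=rate, nperseg=1024)
    power = np.abs(spec) ** 2
    total = power.sum()
    return float(power[freqs >= cutoff_hz].sum() / total) if total > 0 else 0.0

if __name__ == "__main__":
    # "sample.wav" is a hypothetical file name. Many speech-synthesis
    # vocoders produce little or no energy in the highest frequencies, so
    # an unusually low or oddly uniform ratio can be one weak signal,
    # among many, that audio is synthetic.
    print(f"High-frequency energy ratio: {high_frequency_energy_ratio('sample.wav'):.4f}")
```

A real detector such as SAFE would, of course, combine many such cues (the six aspects listed above) rather than rely on a single heuristic.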
From the industry side,
Sonia Cooper (Microsoft UK),
Kotaro Kajimoto (Microsoft Japan),
Adrienn Timar (Google Europe) and
Masato Nozaki (GREE Group) provided similar perspectives. They all spoke of the responsibility of these platforms to ensure trust, transparency and accountability for their users.
Nozaki from GREE spoke about balancing platform control with user freedom, especially in the case of deepfake avatars in the metaverse. Indeed, the metaverse can raise different regulatory challenges depending on its structure. For instance, Nozaki pointed to the distinction between “closed” metaverses (centralised and owned by one entity) and “open” metaverses (decentralised and much more similar to a physical environment), which allow for different types of control over users’ actions.
Approach by AI developers and online platforms
Sonia Cooper and
Kotaro Kajimoto described how Microsoft advocates for a “positive-sum outcome” for AI and the creative industries whilst also addressing the harm caused by deepfake technology. With this in mind, Microsoft has developed various strategies to mitigate the risk of generating problematic outputs using AI, whilst also offering it as a helpful tool to encourage creativity.
Microsoft uses images that are publicly available and, as Cooper cautioned, placing undue restrictions on the analysis of such legally accessed data would severely limit the models’ capabilities. On countering harmful deepfakes, Kajimoto highlighted that Microsoft joined the “Tech Accord”, a cross-sector agreement to detect and counter harmful AI content, particularly in elections, and pointed to the technical filters and mechanisms used by Microsoft to block or filter such content. As Kajimoto explained, protecting content authenticity, detecting and responding to deepfakes, and running education and awareness campaigns are all part of Microsoft’s strategy to combat abusive AI-generated content, and he called for policy action in this area.
In addition to this, Cooper distinguished between the strategies used to address the unauthorised commercial use of those in the public eye and those used against the creation of harmful or abusive images of them. For the commercial use of audio content, Microsoft has developed a rigorous consent verification system for voice replicas via
Azure AI Speech. This tool requires proof of consent from the individual before their voice can be used. For visual content, Microsoft uses watermarking (C2PA) for verification purposes and to build trust, thereby providing a mechanism for identifying AI-generated content online. To combat the creation of harmful or abusive imagery, Microsoft champions strong laws against child exploitation and non-consensual intimate imagery so that its AI technology can be used to promote safety amongst all users.
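For readers curious about what C2PA provenance metadata looks like at the file level, the deliberately minimal sketch below only checks whether a file appears to carry a C2PA manifest. It is an assumption-laden illustration, not a validator: genuine verification requires parsing the JUMBF manifest store and checking its cryptographic signatures, for example with the open-source C2PA SDK.

```python
# Minimal presence check for C2PA metadata, NOT a validator. Real
# verification parses the JUMBF manifest store and validates its
# cryptographic signatures (e.g. with the open-source C2PA SDK).
def appears_to_carry_c2pa_metadata(path: str) -> bool:
    """Weak heuristic: does the file contain the 'c2pa' manifest label?"""
    with open(path, "rb") as f:
        data = f.read()
    # C2PA embeds its manifest store in JUMBF boxes labelled "c2pa";
    # finding those bytes hints that provenance metadata is present, but
    # says nothing about whether it is valid or has been tampered with.
    return b"c2pa" in data

if __name__ == "__main__":
    # "generated_image.jpg" is a hypothetical file name for illustration.
    print(appears_to_carry_c2pa_metadata("generated_image.jpg"))
```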
Adrienn Timar then spoke about Google’s position in relation to AI and how Google, like Microsoft, is addressing the harms to which this technology gives rise. As Timar explained, Google is an AI-first company and leverages AI for beneficial applications such as aiding individuals with speech impairments, building custom voice models and simplifying complex information. Given the growing concerns around deepfakes and the ease of their creation, she clarified that Google has employed further cybersecurity measures, safety criteria and reporting/flagging systems for synthetic content across its products. Additionally, for Google’s in-house AI system Gemini, Google now offers users the ability to double-check AI responses (a feature that is C2PA compliant) so that users can identify the origin of content. This provides more transparency for the user, who can more easily discern whether what they are reading or watching is credible.
Moreover, Google has implemented further measures to identify AI-generated content through its
SynthID tool, which embeds digital watermarks, invisible to users, directly into AI-generated images, text, audio or video. This is particularly relevant to Google’s video-sharing platform YouTube, whose policy has been updated to require all creators to label any content made with this technology as AI-generated. For non-compliant users, Google uses its open-source detection tool to identify such content and, where relevant, penalise them by removing the content or, in some cases (depending on the nature and frequency of use), terminating their account.
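To convey the general embed-then-detect idea behind invisible watermarking, here is a toy least-significant-bit (LSB) sketch. To be clear, this is not how SynthID works: SynthID embeds learned, robustness-optimised watermarks designed to survive compression and editing, whereas the fragile LSB scheme below is only a conceptual stand-in, with an assumed shared seed acting as the secret key.

```python
# Toy LSB watermark: conceptual stand-in only, NOT SynthID's technique.
import numpy as np

def embed_watermark(image: np.ndarray, seed: int = 42) -> np.ndarray:
    """Hide a pseudorandom bit pattern in each pixel's least significant bit."""
    rng = np.random.default_rng(seed)              # `seed` is the shared secret
    bits = rng.integers(0, 2, size=image.shape, dtype=image.dtype)
    return (image & ~np.uint8(1)) | bits           # clear each LSB, write our bit

def detect_watermark(image: np.ndarray, seed: int = 42) -> float:
    """Fraction of LSBs matching the expected pattern:
    ~1.0 for a watermarked image, ~0.5 for an unrelated one."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, size=image.shape, dtype=image.dtype)
    return float(((image & 1) == bits).mean())

if __name__ == "__main__":
    img = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
    print(detect_watermark(embed_watermark(img)))  # ~1.0 (watermark present)
    print(detect_watermark(img))                   # ~0.5 (chance level)
```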
In the next and final blogpost, we report the key takeaways from the perspective of policymakers and civil society representatives. The series will conclude with some thoughts for the future whilst outlining a way forward.