Getty Images v Stability: Long-awaited judgment rejects majority of Getty's claim

Oliver Fairhurst

Nov 4, 2025
In one of the most anticipated judgments of the past few years, the High Court of England & Wales has handed down its judgment in Getty Images v Stability AI. The Court (judgment delivered by Mrs Justice Joanna Smith DBE) has for the most part rejected Getty's claim, finding only a "historic and extremely limited" trade mark infringement. We will, I expect, be publishing more on the case over the coming weeks, but this is this Kat's initial reaction.

The result will be a disappointment for the creative industries, some of whom accuse the developers of LLMs (large language models) of having pillaged the collective creative endeavours of humanity to develop a competing product. Meanwhile, it will be a great relief to tech enthusiasts and the stock markets, who have held generative AI up as the next industrial revolution, with the potential to transform economies, science and society. The moral rights and wrongs, and what AI means for human civilisation, are however beyond the scope of this post.

The number of issues was huge, and to arrive at a judgment that is 219 pages long (including the annexes, 205 without), in 4 months, is a remarkable achievement. The amount of work put in by the respective parties and their legal teams was also astonishing, and they all deserve a big pat on the back for that effort. 

The judgment is a dizzying document, containing a huge number of very specific terms, ranging from version strings (v.[n]) to acronyms, defined terms and names such as "Hugging Face". This Kat would do well to summarise the judgment in 20 pages, let alone in the 2,500 words of this post, so what follows is a best effort at drawing out the key points. 
Isn't it amazing how much these tools have improved...?!


Background

The case is well-known and has been covered extensively both in this blog and elsewhere (see here, here, here, and here). Readers are encouraged to read those earlier posts on the case, but in short, Getty Images (Getty) was suing Stability AI (Stability) over (1) the use of Getty Images' database of images and associated text to train the generative AI (diffusion) model that trades as Stable Diffusion, and (2) the outputs of that model, some of which resembled Getty's content and even included versions of Getty's watermark. 

By the conclusion of the trial, Getty's case had narrowed considerably. Most notably, it had dropped the copyright infringement claim in respect of the training of Stable Diffusion as the acts of training were accepted to have taken place outside of the UK, as well as its copyright claim in the outputs and its database rights claim. 

That left two main claims: (1) the outputs said to infringe Getty's trade marks, and (2) the question of whether Stable Diffusion is itself an 'infringing copy' having been allegedly trained on copyright materials. There were other issues, but they are not covered here. Getty's case relied heavily on a limited number of examples it had strived to identify, which Stability criticised as being "contrived". 

Aside from some limited success on the trade mark infringement claim, Getty's claim failed. 

Copyright infringement - the outputs

Getty abandoned its copyright infringement claims in respect of the training of Stable Diffusion and the outputs of similar images. This was, respectively, due to the lack of evidence of training taking place in the UK, and the steps taken by Stability to block the prompts that it was alleged had been used to generate similar outputs.

This left a claim for secondary copyright infringement, namely that Stability has imported, possessed in the course of business, sold or let for hire, or offered or exposed for sale or hire, or distributed an article, being Stable Diffusion, which is and which Stability knew or had reason to believe is an infringing copy of Getty's copyright works. This claim was subject to an earlier unsuccessful attempt by Stability to strike it out on the basis that Stable Diffusion cannot be described as "an article" ([2023] EWHC 3090 (Ch)).  

The crux of the issue is whether Stable Diffusion was an article, the importation of which was prohibited on the basis that its making in the UK would have involved infringement in the UK. The idea behind this provision is that unauthorised copying of copyright works is illegal in the UK, and copying content outside the UK and then importing, dealing (etc.) ought also to be prohibited. 

To understand the reasoning, it is necessary to understand what an AI model is not. It is not a repository of the data on which it was trained. No copies of the training data are stored in the model itself. Stability's expert explained that one dataset used to train Stable Diffusion was around 220TB, whereas the model weights could be downloaded in a 3.44GB file (1TB = 1,000GB). The model works by learning patterns in the training data and encoding those patterns numerically in what are known as model weights. 

Images of cats or dogs associated with words like 'cat' and 'dog' teach the model what a cat or dog looks like. Once trained, it can accurately recreate what a picture of a cat or dog ought to look like without reference to the images on which it was trained. 
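To put the expert's figures in perspective, a back-of-the-envelope calculation (using only the two sizes quoted in the judgment) illustrates why the weights cannot plausibly hold copies of the training images:

```python
# Figures from Stability's expert evidence as recorded in the judgment:
# one training dataset was ~220 TB, while the downloadable model weights
# file was ~3.44 GB (using 1 TB = 1,000 GB, i.e. decimal units).
dataset_bytes = 220 * 10**12   # 220 TB
weights_bytes = 3.44 * 10**9   # 3.44 GB

ratio = dataset_bytes / weights_bytes
print(f"Training data is roughly {ratio:,.0f} times larger than the weights")
# The dataset is on the order of 60,000 times the size of the weights file,
# so the weights cannot be a compressed archive of the images themselves.
```

The numbers are a rough sketch, not evidence from the case beyond the two figures cited; but the orders of magnitude make the court's point vivid.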
  
The term 'article' is undefined in the statute, and the judge commented that it is a word which takes its colour from its surrounding context: consider, for example, an article of a constitution or a news article. The question of whether information stored on a memory chip could render that chip an 'infringing copy' had been considered before in Sony v Ball [2004] EWHC 1738 (Ch), but that did not really address the question here of whether the information itself could be an 'article'. Applying the 'always speaking' principle of statutory interpretation (which allows statutes to develop with changing circumstances), the judge agreed with Getty that the term 'article' includes intangible information.  

But, the infringing copy must nevertheless be a copy, and that is where Getty's success ended. The model weights "are purely the product of the patterns and features which they have learnt over time during the training process". The models did not contain a copy of any works, and therefore could not constitute an infringing copy. 

Trade mark infringement - the outputs

Did the watermarks appear in the Stable Diffusion outputs?

Getty alleged that the models generated images that, in some cases, included Getty's trade marks. It alleged, and the judge accepted, that the relevant date for assessing that infringement was the date on which the relevant models were released (Levi Strauss & Co v Casucci SpA (C-145/05)). Stability had criticised that approach, rather than taking the date on which the particular sign was used, as "undisciplined", but the judge found it to be "the only sensible approach in the very unusual circumstances of this case".

Stability argued that Getty needed to prove that watermarked images appeared 'in the wild', as opposed to in response to Getty's "contrived" prompts. The judge agreed to determine the "threshold issue" of whether any user of any version of Stable Diffusion in the UK has ever been presented with a watermark on a synthetic image generated by the model. 

Stability accepted that the appearance of watermarks in outputs was "non-trivial" in its earlier v.1.x models. These are the early models that were less carefully tuned than the later ones (v.2.x, SD XL and v.1.6). 

The experts agreed that watermarks (or distorted versions of them) were produced by Stable Diffusion because the model was trained on images containing Getty's watermarks, and that, depending on the prompts, those watermarked images would be generated with "high frequency" as a result of 'memorisation' and 'overfitting', those being a bug not a feature. As an aside, the judgment contains an example of the President of the United States being shown behind bars, which is amusing in a document released by an English High Court judge. 

The judgment goes into great detail on the extent to which users might prompt images that could feature watermarks, how they might do so, and whether they would want to. While there was some evidence that users of other generative AI platforms might use Getty's captions to generate free alternatives, there was no such evidence for users of Stable Diffusion. The judge found that there was no evidence that "even a single user in the UK" had used verbatim prompts obtained by copying the Getty caption, or even reworded versions of those prompts, still less that an image containing a watermark had been generated as a result.

Crucially, the court found that there was some evidence of non-contrived and representative 'in the wild' generation of watermarked images in Stable Diffusion's earlier versions, but not in the later ones.  Stability had fixed the bug. The judge found that at least one user of some of the earlier versions of Stable Diffusion, accessing those versions through certain channels, will have seen a Getty Images and iStock watermark.

In relation to the later models, the judge found that there was "not one jot of evidence" on which she could properly find on balance that watermarks have ever been generated in the UK by real world users. She held that if evidence was available, Getty "had every opportunity to find it", pointing to a decision by Getty not to pursue disclosure from one dataset on costs grounds. The lack of real world examples was, as explained below, fundamental to the on-the-whole rejection of Getty's claim, but one might have some sympathy with the task Getty and its lawyers faced in identifying such evidence, not to mention cost. 

Did Stability "use" signs identical/similar to Getty's trade marks?

Stability said it was not using the signs at all, but was only providing the model. It said it was in a position akin to the defendants in Daimler, Google France and Coty. Getty maintained that Stability was engaged in active behaviour to generate those signs: it had trained the model, could filter out watermarked images, made the model available and made the communications to the users. The judge agreed with Getty on this point, rejecting Stability's attempt to put responsibility for the signs on the user. This finding was strongly supported by evidence that users did not ask for, and could not prevent, the appearance of watermarks in at least some of the versions of the model. 

While specific to these unusual facts, this discussion around the responsibility of a platform for the acts of its users is a never-ending source of interesting litigation, and one that continues as technologies develop.  

Were the signs 'used' identical to the marks?

The judge found that the iStock watermarks appearing in the Stable Diffusion outputs were identical to the trade marks. While there was evidence of signs identical to the Getty Images marks being used, those appeared in experiments rather than in 'real world' use, and were therefore disregarded. The judge found that the average consumer would be led to believe that there was some commercial connection between the watermarked images and Getty, which, while more usually a likelihood-of-confusion question, indicated that Stability was using the signs. 

Accordingly, there were no relevant examples of signs identical to the Getty Images marks being used by Stability, and the s.10(1) claim failed. (There were images generated with the partial prompt "news photo", but the judge held that Getty had failed to make submissions on them at trial; it was therefore too late, and "unsatisfactory and inappropriate to expect the court to undertake such an exercise".) 

Are the images produced by Stable Diffusion identical goods/services as those for which Getty's marks were registered?

Getty sought to argue that Stability's goods were "photographs". That was rejected as not having been pleaded, and as "far too late" to advance in closing. In any event, the judge held they were not 'photographs' but were instead "synthetic image outputs", which were encompassed by "digital imaging services", "downloadable digital illustrations and graphics", "digital media", and "images". The consideration of what an AI-generated image is will be very relevant to those practitioners drafting specifications for both human-made and computer-made content.

Was there double identity infringement?

The judge found that there was double identity infringement in respect of the iStock watermarks only, and only in relation to some of the earlier models. There was no infringement of the Getty Images marks at all, and no infringement of the iStock watermarks in relation to later versions of the models as there was no real world use.

Was there a likelihood of confusion under s.10(2)?

For similar reasons to those for which the judge found that Stability had 'used' the signs, i.e. that consumers would assume a connection (e.g. licensed use of the Getty image database), the judge found infringement under s.10(2) in relation to the iStock watermarks for the earlier versions. The judge also found that the Getty Images marks had been infringed under this head, on the basis that some of the blurred/distorted watermarks were similar. 

Was there unfair advantage/dilution under s.10(3)?

This claim failed at the 'change of economic behaviour' stage. Stability sought to knock out the point by arguing that Getty had failed to plead that there was a change in economic behaviour. This is one of those cases in which, by virtue of their size, the parties are held to a much higher pleading standard than parties in smaller cases, who get away with far briefer pleadings. The judge was understandably reluctant to determine this element of the case on a pleading point.

She nevertheless rejected the claim for two main reasons. First, the evidence of real world use was so limited that there was no basis to infer, as the court was asked to, that there was a "proliferation" of such uses, let alone a proliferation of tarnishing images. Second, users looking to obtain an un-watermarked version of a Getty image, and trying to circumvent Getty's requirement for a licence by using Stable Diffusion, might be disappointed to then find a watermark; in any event, there was no evidence of this pattern of behaviour. As I have said before, this is a bug vs feature issue. 

Conclusion


This judgment covers a huge number of issues, from trade mark law to statutory and contractual interpretation. It is also a case that will have cost the parties enormous sums of money. It seems very possible that there will be an attempt to appeal aspects of it. However, appeals are not necessary to make this case interesting or take important points from it.

A recurring criticism of Getty's trade mark claim was that it could not evidence enough 'real world' examples of its trade marks being used. That may be because such use was rare, or it may be that the information was not readily available (c.f. a 'typical' trade mark claim). One can have some sympathy for Getty, which had examples of infringement of its trade mark rights, but not enough of them. 

The fundamental point this Kat takes is that for all of the perceived rights and wrongs of the process by which generative AI tools have been developed, rightsholders have great difficulty in preventing or monetising their emergence. 
Do you want to reuse the IPKat content? Please refer to our 'Policies' section. If you have any queries or requests for permission, please get in touch with the IPKat team.