[Guest Post] Likely to confuse: the IPO battling AI authority hallucinations

IPKat is pleased to host the following guest post from Thomas Hood (Gatehouse Chambers) on the now-perennial issue of using AI to draft submissions, and the common problem of tools citing case law that either does not exist or does not support the proposition for which it is cited.

Over to Thomas:

Across 2025, the Appointed Persons of the UK Intellectual Property Office had to tackle several cases in which eyebrows were raised over the legitimacy of documents prepared for the tribunal. The cases arose from the use of AI and predominantly involved litigants in person. In each case, the tribunal looked at the context of the AI-generated material and why it was relied on. This article explores those cases and points to the need for the UKIPO, over 2026, to provide a robust framework for the ethical use of AI in trade mark matters.

Four Warnings on AI Use

From open admission to outright denial, let’s have a look at four matters from last year:
  1. BL O/0559/25 Pro Health Solutions Ltd v ProHealth Inc: In this case, heard on 20 June 2025, Phillip Johnson as the Appointed Person held at [27] that the use of AI by litigants in person was understandable, as they “will know little about trade mark law and think that anything generative artificial intelligence creates will be better than they can produce themselves.” However, he relied on Ayinde, R (On the Application Of) v London Borough of Haringey [2025] EWHC 1383 (Admin), a case in which a pupil barrister had relied on hallucinated case law produced by AI, to emphasise that using AI before any tribunal carries significant risk. Johnson suggested at [28] that the IPO should consider adopting a warning on AI use, whether by litigants in person or by professionals.
  2. BL O/0938/25 Warwick Econometrics Ltd v University of Warwick: In this case, reliance was placed on a case allegedly called Kingsland Global Ltd v BML Properties Ltd. No case with that name could be found. When Mr Bickford-Smith, the individual relying on the authority, was asked whether AI had been used for his skeleton, he said that it had not. However, following the hearing, it was admitted that Kingsland was not a real authority and that AI summaries had been relied on. Unsurprisingly, Mr Bickford-Smith was both criticised in the decision and ordered to pay costs.
  3. BL O/1013/25 Orthofix S.R.L. v OscarTech UK Ltd: This matter came before N. Rhea Morris on 30 October 2025, who, as in Warwick, was faced with two cited decisions that were not accurate. The distinguishing point here is that the applicant admitted using the AI tool (set out at [106]) and withdrew reliance on the cases. Morris nevertheless took the opportunity to reinforce Johnson’s view of AI usage, setting out at [107] that “even litigants-in-person have a duty to not mislead the court [or tribunal] and, in observing that duty, they are urged to be alert to the risks associated with the use of ‘ChatGPT’ and the like.”
  4. BL O/1141/25 Onyinye Udokporo v Enrich International Ltd: Finally, at the tail end of 2025, on 5 December, Phillip Johnson received an early Christmas present: another opportunity to criticise fabricated case law references. The alleged authorities were Combit Software GmbH v Commit Business Solutions [2014] EWHC 3605, said to propose that even a single-consonant difference can prevent a likelihood of confusion, and Speciality European Pharma Ltd v Doncaster Pharmaceuticals Group Ltd [2015] EWHC 2556, allegedly concerning the marks NOVA and SUPERNOVA. Neither case was real, allowing Johnson to make one further statement on AI at [36]: “Litigants-in-person who put their name to a document before the registrar or Appointed Person must be able to provide all the material cited by them and that material must relate to what they are saying, and likewise any quotation they rely upon must be accurate.” Johnson put a final flag in the ground: material proffered by a litigant in person, a lawyer, or any other person must be real and accurate.

Opposing hallucinations with procedure

Where do the above four cases leave us? The reality is that the use of AI in preparing for proceedings, particularly by litigants in person, will only increase. Matthew Lee, Barrister at Doughty Street Chambers, has put together a tracker, currently listing 37 cases in which the use of AI across the courts and tribunals of England and Wales has been publicly noted in judgments and decisions ([here], accessed on 2 February 2026). Without a robust framework, the UKIPO may find that 2026 sees a further increase in the use of AI to prepare for hearings, from oppositions to invalidity applications.
Given the risks that AI poses to the conduct of proceedings, it would be appropriate for the UKIPO to consider issuing a specific practice note on the use of AI in contentious matters. This would assist litigants in person and lawyers appearing before the IPO, as well as the individual hearing the matter. However, no such guidance has yet been provided by the IPO ([here], accessed on 2 February 2026).
Yet there is already precedent which could be relied on and adapted. The Lady Chief Justice’s office has released guidance for the judiciary of England and Wales, crucially stating that material produced by AI “should always be checked against maintained authoritative legal sources” ([here], accessed on 2 February 2026). Similar guidance could be provided by the UKIPO, with clear consequences for failing to check citations, such as a specific sum payable, as part of any costs award, for breach of the rule. This could deter the unchecked use of AI in UKIPO proceedings.
Bringing in clear guidance for proceedings before the UKIPO could encourage the ethical and responsible use of AI. Those who cannot afford professional legal advice should be able to use AI as a tool to assist them. At the same time, a framework of use would emphasise that the power of AI comes with a responsibility to ensure that material is accurate.