At the last count, there were more than seventy companies offering AI-assisted IP software solutions. Most of these companies are less than two years old. And, if their marketing materials are to be believed, they have plenty of big IP-firm clients. But do IP firms understand what they are actually buying, and whether it is worth the cost, both in terms of short-term risk and long-term strategy?
The difference between LLM-wrappers and foundational models
For firms contemplating spending a lot of money on third-party AI-assisted IP software providers, it is crucial to understand what these companies are actually offering. All of the AI-assisted IP software companies, without exception, have built a tool that uses one or more foundational LLMs such as Gemini, Claude or OpenAI's GPT models. These companies do not need to develop their own models, because the underlying models are now so good. Additionally, building and running an LLM of any competence requires an extraordinary amount of compute and vast quantities of money.
None of the foundational labs has fewer than a million GPUs or TPUs (Google's own specialist AI chips) at its disposal. For reference, a single supercomputer has a mere 10,000-100,000 GPUs, and the UK only has nine official supercomputers. Anthropic, for example, has access to one million of Google's TPU chips and additional compute power from both Amazon and Nvidia.
Training and running an LLM at this scale also requires a lot of money and investment, and the big LLM labs have an extraordinary abundance of this too. Using Anthropic again as an example: this is a company devoted solely to LLM development, and it has a staggering post-money valuation of $380bn. A significant proportion of this investment goes into the compute power needed to train the Claude models.
By contrast, the resources available to AI-assisted IP software companies are tiny. None of these companies has any GPUs, but they also don't need them. In fact, it has never been easier to launch an LLM-based IP solution. What AI-assisted IP software companies do have is VC funding, some software engineers and the ability, like everyone else, to use the frontier LLMs (which are, admittedly, very good these days). Instead of building their own LLM, the AI-assisted IP software companies have therefore very sensibly built wrapper systems on top of Gemini, Claude or ChatGPT. For most AI-assisted IP software companies, there is thus no "closed system" or separate model "trained" on legal data (or, if there is, it is likely to be still worse than the frontier models at most tasks). All there is is a snazzy user interface, code and domain-specific prompting, all built on top of the underlying LLM. The fact that there are now so many of these AI-wrappers is an illustration of just how easy it is to build a solution on top of an existing LLM and market it as a product (IPKat).
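To see just how little an "LLM-wrapper" can amount to, the pattern can be sketched in a few lines. Everything below is a hypothetical illustration: the prompt text, function names and echoed replies are assumptions for the sketch, not any vendor's actual product, and the model call is stubbed out so the example runs offline.

```python
# Hypothetical sketch of the LLM-wrapper pattern: a domain-specific
# prompt layered on top of a generic model call. All names illustrative.

SYSTEM_PROMPT = (
    "You are an assistant for patent attorneys. "
    "Cite the relevant legal provisions where applicable."
)

def llm_call(system: str, user: str) -> str:
    """Stand-in for a call to a frontier-model API (OpenAI, Anthropic,
    Google, etc.). A real wrapper would send both prompts over HTTPS
    and return the model's reply; this stub just echoes the request."""
    return f"[model reply to: {user!r}]"

def review_claim(claim_text: str) -> str:
    """The 'wrapper': no separate legal model, just domain-specific
    prompting around the underlying LLM."""
    user_prompt = "Assess the clarity of this patent claim:\n" + claim_text
    return llm_call(SYSTEM_PROMPT, user_prompt)

print(review_claim("1. A widget comprising a flange."))
```

The commercial product adds a user interface and workflow plumbing on top, but the core of the system is essentially this: prompting plus a third-party model.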
Why do we care?
There are a number of reasons why it is important to understand that your AI-assisted IP software solution is just an LLM-wrapper. The most basic is that you should know what you are buying (and whether you actually need to buy it). Beyond that, there are questions over the potentially increased risks, questions over the stability of the legal tech industry, and the question of whether, by opting for the apparently easy solution of an off-the-shelf tool, firms are sacrificing quality and the opportunity to upskill their attorneys.
The AI Trojan horse
First, IP firms and in-house departments should have their eyes open with respect to what they are paying for, and whether it is necessary. Interestingly, it is clear that many of the LLM-wrappers are prepared to offer extended trial periods and discounted services for IP firms, presumably at least partly because the LLM-wrapper company can learn a lot from firms themselves, in terms of what works well, what services and products are in demand, and what workflows work best. They may even be benefiting from the legal expertise of the attorneys using the tool, which they can then incorporate into the software.
*Wrappers*
The LLM-wrapper companies can use what they learn from the patent firms to improve their product. Once it is good enough, they may then pursue an alternative and more lucrative business model of selling IP services and solutions directly to in-house departments, cutting out the IP firms altogether. After all, why pay a law firm to use a tool you could just use yourselves (IPKat)?
More layers, more risk
The use of AI must always be treated with caution in the IP industry, and adding an LLM-wrapper company into the mix will inevitably increase the risk. As patent attorneys, we deal with highly confidential material, and it is now widely recognised that putting an invention disclosure into a free public version of an LLM could destroy the novelty of an invention. You therefore need to ensure not only that the agreement between you and the LLM-wrapper company protects your confidentiality, but also that you have done your due diligence on the arrangement between the LLM-wrapper and the LLM provider (IPKat). Additionally, IP firms need to consider how much AI output is being stored by these providers (either locally or externally), and the risks of retaining this information (IPKat).
Additionally, whilst the frontier labs have about the highest level of security you can get in the industry (if Google is hacked, we're all in trouble), smaller software companies are inevitably going to be less secure and exposed to more security risk. Just recently, for example, LiteLLM, one of the coding libraries that software companies use to run multiple LLMs, was compromised with a malicious payload. LiteLLM is a popular Python library that allows software developers to use many different AI model providers (OpenAI, Anthropic, etc.) through a single interface. Once installed on users' systems, the malware was able to harvest passwords, cloud credentials and everything else sensitive on the machine, and silently send it all to the attacker. The malware was only identified because it also had a bug that caused the infected system to crash. Crucially, the attacker didn't need to trick anyone into downloading something dodgy; they just slipped malicious code into a package that many software developers were already using and trusting.
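One standard defence against this kind of supply-chain tampering is to pin dependencies to known-good hashes (pip, for instance, supports this via its hash-checking mode, `--require-hashes`). The underlying check is simply a file-hash comparison; here is a minimal sketch of it, with hypothetical function names:

```python
import hashlib

def file_sha256(path: str) -> str:
    """Compute the SHA-256 digest of a downloaded package file,
    reading in chunks so large files don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Refuse anything whose hash does not match the pinned value.
    A tampered release (like the compromised package described above)
    would fail this check unless the attacker also changed the pin."""
    return file_sha256(path) == expected_sha256
```

In practice you would record the pins in a lock file and let the package manager enforce them, rather than rolling your own check; the point is that the defence exists, and it is reasonable to ask whether your software provider uses it.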
As an IP firm, the more you are using third-party software companies, and the more inexperienced those companies are, the more you open yourself up to the risks that your provider may be compromised, and in turn, that they compromise you.
Here today, gone tomorrow
There is a growing belief that the legal AI tech market is a bubble. There are too many companies, all effectively selling the same product in slightly different wrapping paper. Another risk of becoming over-reliant on one or only a few AI-assisted IP software companies is therefore the fragility and ephemeral nature of the current market.
You don’t want to partner with a company, only to find it disappears in 12 months' time. This is a particular risk for firms that have white-labelled an LLM-wrapper with their own firm-specific branding. Advertising yourself as expert users of an LLM-wrapper puts you at risk of the rug being pulled out from under your feet.
Of course, another risk with this approach is that the AI-assisted IP software company learns everything from you about what a great AI-assisted IP software solution needs to be, is bought by a big player in the inevitable market consolidation, and then sells their legal services direct to large in-house departments, as per the current big legal-tech business model.
Are you sacrificing quality in favour of the path of least resistance?
Finally, the primary and overarching factor determining how good these solutions are is the capability of the underlying LLM. When evaluating these companies, IP firms and in-house departments need to benchmark accordingly. There are huge differences between the capabilities of the models. The best are also the most expensive, and the LLM-wrapper companies will generally be paying per token. If your LLM-wrapper is not performing well, it is highly likely that you have been pushed onto a subpar model. Copilot is a classic example of this.
Copilot no longer uses Microsoft's own LLM, but instead uses OpenAI models. These days, Copilot apparently has access to some of the best OpenAI models (at least in the premium versions of Copilot). However, as anyone who has used Copilot knows, Copilot is remarkably rubbish compared to other frontier LLMs, including the underlying GPT models that Copilot is supposedly using. Part of this might admittedly be because of the annoying user interface Microsoft has put in the way (Microsoft has form here: anyone remember the Word paperclip...?). This is an example of how a wrapper can actually decrease the quality and efficiency of the underlying AI.
More worrying, however, is the lack of transparency one gets when using Copilot. The best AI models are expensive, especially if you want priority access so that you are not stuck in a long queue that slows things down. It is highly likely that, during busy periods or for tasks with very high token use, Copilot users are being directed to lesser models. However, the user is currently given no visibility or control over this. Given how cost-conscious AI-assisted IP software companies are likely to be (their cash reserves are probably a little less than Microsoft's), the same problems (and solutions) probably apply. They are probably paying per token for use of the LLM, and the more expensive the model, the more expensive the tokens. But is this reflected in what the user pays? A question to ask your LLM-wrapper company of choice.
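The economics of this routing are easy to sketch. With per-token pricing, quietly switching a job to a cheaper model changes the provider's cost dramatically. The prices below are invented round numbers for illustration, not any provider's actual rates:

```python
# Illustrative token economics of model routing. Prices are hypothetical
# round numbers (USD per million output tokens), not real rates.
PRICE_PER_MTOK = {"frontier": 15.00, "budget": 0.50}

def job_cost(model: str, output_tokens: int) -> float:
    """What the wrapper company pays the lab for one job's output."""
    return PRICE_PER_MTOK[model] * output_tokens / 1_000_000

# A hypothetical 5,000-token drafting job:
frontier = job_cost("frontier", 5_000)
budget = job_cost("budget", 5_000)
print(f"frontier ${frontier:.4f} vs budget ${budget:.4f} "
      f"({frontier / budget:.0f}x cheaper to route down)")
```

If the user pays a flat subscription either way, the incentive to route jobs to the cheaper model during busy periods is obvious; the open question is whether that routing is ever disclosed.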
Final thoughts
An LLM-wrapper may very well have its place in your firm. The benefit is that these solutions can be rolled out firm-wide with relatively little upskilling required in the use of AI. However, you need to be clear what it is you are actually buying, and whether you are getting more from your AI-software provider than you are potentially giving in return.
Acknowledgements: Thanks as always to Mr PatKat (Head of Reasoning, Mistral) for his invaluable AI-industry insights, and particularly for the LiteLLM scoop.
Further reading