given their multi-cloud set-up. Jokes about Silicon Valley vow renewals aside, this statement quietly reaffirms a new legal order based on Microsoft’s exclusive licence over IP related to OpenAI models and products, Azure’s position as the sole infrastructure provider for ‘one-off’ third-party requests to OpenAI, and the related revenue streams.
It follows OpenAI’s
new funding ($110 billion from Amazon, Nvidia and SoftBank) and a
separate cloud deal with Amazon Web Services (AWS). This means that
OpenAI Frontier, an
enterprise platform for ‘building, deploying and managing teams of AI agents’, will be hosted on Microsoft’s infrastructure (Azure), but accessible through providers like AWS and OpenAI. In response to these new developments, OpenAI CEO Sam Altman
explains that:
AI is going to happen everywhere. It's transforming the whole economy, and the world needs a lot of collective computing power to meet that demand.
The takeaway is that the world will access AI through OpenAI’s products and services, once there is enough computing power.
The multi-cloud framework
Naturally, Microsoft agrees. In 2020, Microsoft and OpenAI entered into an agreement granting the former exclusive access to
GPT-3, an advanced LLM trained on Azure’s infrastructure. Five years later, Microsoft began
positioning itself as the exclusive licensee of ‘OpenAI IP’ rights, referring to OpenAI as its ‘frontier model partner’. This is despite OpenAI excluding its
consumer hardware products from the agreement. It seems that Microsoft’s IP rights relate to research
including ‘models intended for internal deployment or research only’. It also has non-Research IP rights
relating to ‘model architecture, model weights, inference code, finetuning code, and any IP related to data center hardware and software’.
Additionally, Microsoft became the one-stop shop for all ‘one-off’ third-party requests to OpenAI. Referred to as
stateless APIs, these requests carry all the information needed to process them, without the server storing any session-related data. Though OpenAI
assures readers that customers and developers will ‘benefit from Azure’s global infrastructure, security, and enterprise-grade capabilities at scale’, Azure (meaning Microsoft) will become the default option. Most businesses integrate AI through stateless APIs because they
are ‘more scalable, easier to maintain, and capable of handling large volumes of traffic with reduced resources consumption’ (e.g.
Uber's ride search and price estimation). More complex (‘stateful’) requests, likely agentic workflows, can be developed and deployed
through other third parties like AWS (e.g.
Amazon Bedrock).
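To make the stateless/stateful distinction concrete, here is a minimal Python sketch of the stateless pattern described above. The function name and payload shape are illustrative assumptions, not the actual Azure or OpenAI API; the point is simply that each request must carry everything needed to process it, because the server keeps no session data.

```python
# Sketch of a stateless request (hypothetical payload shape, not a real API).
# Because the server stores no session data, the client resends the entire
# conversation history with every call.

def build_stateless_request(history, new_message):
    """Return a self-contained payload: prior history plus the new user turn."""
    return {
        "model": "example-model",  # hypothetical model name
        "messages": history + [{"role": "user", "content": new_message}],
    }

# Each call is independent: the payload alone suffices to process it.
history = [
    {"role": "user", "content": "Estimate a fare from A to B."},
    {"role": "assistant", "content": "Roughly $12-15."},
]
payload = build_stateless_request(history, "And from B to C?")
print(len(payload["messages"]))  # 3: all prior turns travel with the request
```

A ‘stateful’ workflow, by contrast, would have the server persist the session, so each call carries only the new input while the provider holds the accumulated context.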
The statement also
affirms previous revenue-sharing agreements. While Microsoft receives revenue from OpenAI’s agreements with other third parties such as AWS (leaked reports
indicate that this amounts to around 20% of OpenAI’s total revenue), the arrangement is linked to two possible end dates: 2032 or the achievement of artificial general intelligence (AGI). This
means that once an AI system can perform most or all cognitive tasks at or above human level, contractual rights over access, hosting and revenue are triggered.
Previously, OpenAI could contractually declare AGI, but now this must be
verified by a ‘panel of independent experts’. Yet AGI is
infamously difficult to define. Even Altman has
commented that ‘it’s not a super useful term’ as there are ‘multiple definitions being used’. Despite these concerns, once AGI is verified (or by 2032 at the latest), most of Microsoft’s IP rights will expire, except for access to ‘models post-AGI, with appropriate safety guardrails’. Perhaps more akin to an unregulated form of standard-essential patent (SEP) licensing, this clause ties duration to a technological event determined through a private mechanism with no public accountability. It reflects the
idea that IP law is shaped by changes in ‘technological feasibility’, something that both Microsoft and OpenAI are busy instrumentalising.
The future regulatory-copyright-licensing gap
While national and regional governments, particularly the EU, are busy responding through transparency and due diligence obligations to support a licensable market (
AI Act and
Code of Practice), Microsoft and OpenAI have already drawn the boundaries of future markets behind closed doors. Markets where AI development is governed by contract, not public law. The partnership forecasts a market where models extend beyond
AI Act categories, and then determines the scope of IP rights, infrastructural access, and revenue streams. This framework entrenches asymmetries within current legislative and regulatory approaches that fail to compensate creators. Only last week the EU Parliament
published a report evidencing:
the widespread violation of copyright rules by GenAI providers, including the unauthorised collection of works from the internet, the non-compliance with rights holders’ text and data mining reservations, the use of pirated sources to obtain works, and the failure to seek licences.
The latest round of OpenAI funding exemplifies the commercial value extracted from training data, all premised on the promise that AGI is eventually possible. It is not lost on this Kat that post-AGI models are likely to be even more capable of replicating human creativity, nor that authors are currently excluded from the development of a post-AGI model market whose progress is only visible through self-serving (and perhaps strategically timed?) press releases.
Comment
This arrangement represents a green flag for investors as it
responds to concerns over ‘OpenAI’s growing web of cloud partnerships’. It walks a delicate line between ensuring that OpenAI has access to the compute necessary to achieve AGI and safeguarding Microsoft’s investment, without becoming the subject of competition-based concerns (e.g.
Microsoft/OpenAI partnership merger inquiry).
The success of IP law and cloud contracting, as a form of private AI and data governance, is already
well-documented. Its use here signals OpenAI’s structural and commercial resilience. While authors are generally left without compensation, investors have some assurance that future markets are legally secured. Given the cost of inference, training and infrastructure (OpenAI’s internal documents
predict a $14 billion loss in 2026), the $110 billion investment is clearly crucial. Interestingly, it also returns, in part, to the same cloud providers OpenAI depends upon.
Investors seem to be placing their bets on the infrastructure as much as the technology in the hope that AGI materialises and secures them a stake in future markets, lest OpenAI become a house of cards. How this balances against the
112 global copyright lawsuits against AI companies is anyone’s guess. But this Kat predicts that there will be little left to support authors and human creativity without radical change. Regulators face a quickly changing market that requires ingenuity and a certain degree of crystal-ball gazing.
Altman has
forecast that AGI ‘can break capitalism’, meaning that one needs to consider ‘how the profits of AGI are shared, how access to it is shared, and how governance is distributed’ which according to him requires ‘new thinking.’ Wouldn’t it be great if this ‘new thinking’ took account of the collective value of data within society? Responding to entrenched and asymmetrical data power and enclosure surely requires collective action. This Kat considers a few of these ideas in her latest
working paper.