Rethinking creative fairness under the UK’s new automated decision-making rules

Georgia Jenkins

Feb 12, 2026
Last summer, the UK Parliament was in full swing debating the Data (Use and Access) Act (DUAA) 2025. Like other copyright aficionados, this Kat followed the debates closely, notably Baroness Kidron’s speech on the impact of the Act on the creative industries, specifically copyright law and the training of AI models. Indeed, the demand for the government to ‘make it fair’ was plastered across most newspapers, making it near impossible to miss. While the Act passed in June 2025, its copyright provisions now require the Secretary of State to publish an ‘economic impact assessment of options raised in the AI and Copyright Consultation’, covering options such as licensing, opt-outs and transparency measures (IPKat here and here). As that deadline quickly approaches, it is worth reflecting on other amendments, notably those on automated decision-making (ADM), and the ways they could reshape how creators, platforms and rightsholders interact in increasingly AI-driven markets.

Photo by Neakasa on Unsplash
This Kat is pawsing to inspect the inner workings
of Spotify's new monetized ecosystem

A lighter touch to ADM post-Brexit

Previously, Article 22 UK GDPR comprised a near-total ban on solely automated decision-making, including profiling, that produced legal effects or similarly significant effects for data subjects. However, Section 80 DUAA replaces Article 22 with a suite of new rules that appear to relax restrictions on fully automated decision-making.

These rules apply to decisions that are: (1) ‘based solely on automated processing’ and (2) ‘significant’. The former is defined in the Act as meaning that the decision process lacks ‘meaningful human involvement’. To determine whether there is meaningful human involvement, a person ‘must consider, among other things, the extent to which the decision is reached by profiling’. A decision is significant if it ‘produces a legal effect for the data subject’ or ‘has a similarly significant effect for the data subject’.
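To make the two-limb test concrete, here is a minimal Python sketch of the gateway logic as this Kat reads it. It is purely illustrative: the statutory concepts (‘meaningful human involvement’, ‘similarly significant effect’) are open-textured and cannot genuinely be reduced to booleans, and every name below is this Kat’s own invention rather than anything in the Act.

from dataclasses import dataclass

@dataclass
class Decision:
    meaningful_human_involvement: bool  # limb 1: if absent, 'based solely on automated processing'
    involves_profiling: bool            # the Act says profiling must inform the limb 1 assessment
    legal_effect: bool                  # limb 2(a): produces a legal effect for the data subject
    similarly_significant: bool         # limb 2(b): has a similarly significant effect

def is_solely_automated(d: Decision) -> bool:
    # In practice the extent of profiling feeds into whether any human
    # involvement was 'meaningful'; here that is flattened to one flag.
    return not d.meaningful_human_involvement

def is_significant(d: Decision) -> bool:
    return d.legal_effect or d.similarly_significant

def new_adm_rules_apply(d: Decision) -> bool:
    # Both limbs must be satisfied before the Section 80 regime bites.
    return is_solely_automated(d) and is_significant(d)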

Significant decisions based ‘entirely or partly on processing’ special categories of personal data (Article 9(1) UK GDPR) are allowed if the data subject gives explicit consent, or if the decision is necessary to enter into, or perform, a contract between the data subject and the controller. This must be evaluated against Article 9(2)(g) UK GDPR, which allows such processing based on a ‘substantial public interest’. Further, the Act restricts decisions based on processing carried out for the purposes of Article 6(1)(ea), a new lawful basis for processing personal data deemed ‘necessary for the purposes of a recognised legitimate interest’.

This special category of decisions requires the controller to ensure that safeguards for the data subject’s rights, freedoms and legitimate interests are in place. These include providing the data subject with information relating to such decisions, giving the data subject the opportunity to make representations to the controller, and enabling the data subject to contest such decisions.

Lastly, the Secretary of State can introduce regulations relating to ‘meaningful human involvement’, ‘similarly significant effect’ and appropriate safeguards, all subject to the affirmative resolution procedure.

A ‘missed opportunity’ for creative fairness?

However, these new rules largely fail to address how ADM systems affect visibility and remuneration in the creative industries, particularly music. The 2023 report by the UK Centre for Data Ethics and Innovation found that algorithmic recommendation systems in music streaming can reinforce popularity bias, privileging mainstream content while marginalising emerging and independent artists. The question is whether such decisions fall within the scope of the new rules.

For now, it remains unclear whether recommendation systems and content moderation decisions would qualify as ‘solely automated’ and ‘significant’, despite ADM systems directly influencing creators’ revenue streams. The safeguards apply only where a decision is based entirely or partly on personal data and solely on automated processing. Some suggest that ‘performance data’, which relates to facial features, bodily features, movement and voice, should qualify as personal data. Even if that is the case, the Act’s focus on individual redress mechanisms makes any challenge almost insurmountable. In practice, it would seem that creators can contest unfair automated outcomes only after harm has occurred.

That being said, Spotify’s new ‘monetized ecosystem’, which integrates ChatGPT to ‘increase’ discoverability, presents a new fact pattern in which recommendation systems are based on users’ personal data. This builds on Spotify’s ‘Discovery Mode’, where labels and artists ‘signal’ priority tracks in exchange for a lower royalty rate, to the detriment of independent artists. It also mirrors TikTok’s Sound Search and ‘For You’ feeds, where user behaviour is mined down to how many milliseconds a user watches a video before swiping. These ‘personal data profiles’ increase opacity, with a critical impact on independent artists’ livelihoods, entrenching inequitable music distribution (IPKat here).
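A back-of-the-envelope calculation shows why the Discovery Mode bargain is risky for smaller acts. The figures below are entirely hypothetical (Spotify publishes neither its per-stream rate nor the size of the royalty reduction); only the structure of the trade-off matters here.

base_rate = 0.003       # assumed per-stream royalty in USD (hypothetical)
royalty_cut = 0.30      # assumed Discovery Mode royalty reduction (hypothetical)
streams = 100_000       # a small independent artist's monthly streams

baseline_revenue = streams * base_rate
# Opting in only pays off if the algorithmic boost in streams outweighs
# the cut: streams must grow by 1/(1 - cut) - 1 just to break even.
break_even_uplift = 1 / (1 - royalty_cut) - 1

print(f"baseline revenue: ${baseline_revenue:,.2f}")
print(f"stream uplift needed to break even: {break_even_uplift:.0%}")  # ~43%

On these assumed numbers, an artist who opts in but receives no meaningful algorithmic boost simply earns 30% less; only those with the catalogue depth and label backing to absorb that gamble can safely take it.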

It would seem that automated decisions on music recommendation and content moderation are based on a combination of user data and performance data. For example, when users skip a track or increase the volume on Spotify, this is processed as a ‘biometric proxy’ for the creator’s work. So when an algorithm interprets these behavioural signals and categorises a song as ‘low-engagement’, it likely creates a ‘performance tag’ on the creator’s profile (see ‘Musica Analytica: The Datafication of Listening’). Yet as rights generally belong to the data subject, platforms will likely claim that decisions derived from ‘user preferences’ cannot be contested by creators.
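To illustrate the mechanism, here is a hypothetical sketch of how behavioural signals might be folded into such a ‘performance tag’. Every signal name, weight and threshold below is invented; no platform documents its actual pipeline, which is precisely the opacity problem.

def engagement_score(events: list[dict]) -> float:
    # Each event is one listener interaction, e.g.
    # {'ms_played': 1200, 'skipped': True, 'volume_up': False}
    score = 0.0
    for e in events:
        score += min(e['ms_played'] / 30_000, 1.0)  # listening time, capped at 30s
        score -= 0.5 if e['skipped'] else 0.0       # a skip penalises the track
        score += 0.2 if e['volume_up'] else 0.0     # volume-up as an interest proxy
    return score / len(events)

def performance_tag(events: list[dict]) -> str:
    # One opaque label attaches to the creator's work, yet it is derived
    # entirely from listeners' personal (behavioural) data.
    return 'low-engagement' if engagement_score(events) < 0.3 else 'standard'

# Two quick skips outweigh one full listen: the track is tagged 'low-engagement'.
print(performance_tag([
    {'ms_played': 1_200, 'skipped': True, 'volume_up': False},
    {'ms_played': 900, 'skipped': True, 'volume_up': False},
    {'ms_played': 30_000, 'skipped': False, 'volume_up': True},
]))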

Comment

These new rules, specifically the safeguards, apply only when ADM is based on processing special categories of personal data. This means that content moderation and music recommendation decisions that reduce visibility and detrimentally affect artistic livelihoods can occur without a human in the loop. Without further legislative activity, such a narrow privacy lens ultimately excludes the broader cultural harms of ADM within the creative industries (e.g. bias against independent artists and niche genres). Though framed as a boost for innovation, the legislation entrenches the existing power asymmetry between platforms and creators.

When the Act was discussed last summer, there were repeated attempts to include transparency amendments (e.g. public registers or audits), notably for the disclosure of AI training materials. Such an approach may help move beyond individual-led claims reliant on ICO escalation, and safeguard cultural pluralism by addressing systemic algorithmic bias within the creative industries. As with the EU’s Digital Services Act (DSA), which requires very large platforms to conduct ‘systemic risk assessments’ (Article 34), diversity audits and public registers are good starting points for identifying these power imbalances.

As we await the government’s economic impact assessment, it might be worth pawsing to reflect on an increasingly complex algorithmic landscape within the creative industries, one where transparency is less a remedy than a symptom of deeper systemic inequality. This Kat observes that if the law continues to permit platforms to perform ADM based on users’ behavioural biometrics and creators’ performance data, it will ultimately exclude some forms of cultural production and reinforce structural inequities in how creative labour is valued and made visible.