The UK government has abandoned plans to broker an industry-led agreement on a new AI copyright code of practice. The voluntary framework was intended to strike a balance between AI developers’ desire to access quality data to train their AI models and content creators’ right to control – and commercialise – access to their copyrighted works.
The government is, however, continuing to engage with stakeholders on both sides of the debate and has indicated that an alternative non-statutory solution could emerge, built on greater transparency over the data developers use to train their AI models. It has said it plans to “set out further proposals on the way forward soon”.
News of the breakdown in the industry talks over an AI copyright code was confirmed by the government on Tuesday morning in its response to its AI white paper consultation – a document that provides insight into the UK’s approach to AI regulation – after the Financial Times reported on Monday that the plans for the code had been “shelved”.
A working group, featuring representatives from the technology, creative and research sectors, was convened by the UK’s Intellectual Property Office (IPO) last summer with a view to developing the code. Out-Law reported last August that the working group was working towards an autumn 2023 agreement. That timeline slipped, but as winter set in the government continued to express hope that it would be able to conclude work on the code before the end of 2023. Out-Law understands, however, that the working group last met as a whole on 12 October 2023.
A spokesperson for the IPO told Out-Law that while the group “provided a valuable forum for stakeholders to share their views”, the discussions had “been challenging”.
In its AI white paper response, the Department for Science, Innovation and Technology (DSIT) confirmed that it has abandoned hope of achieving an industry-led solution – but that it will continue to liaise with representatives from the AI and creative sectors to find a workable alternative.
DSIT said: “Unfortunately, it is now clear that the working group will not be able to agree an effective voluntary code.”
“DSIT and DCMS ministers will now lead a period of engagement with the AI and rights holder sectors, seeking to ensure the workability and effectiveness of an approach that allows the AI and creative sectors to grow together in partnership. The government is committed to the growth of our world-leading creative industries and we recognise the importance of ensuring AI development supports, rather than undermines, human creativity, innovation, and the provision of trustworthy information,” it said.
While it remains unclear what form the government’s new approach will take, DSIT said it would look to put in place a mechanism enabling content creators to establish what data AI developers have used to train their AI models.
“Our approach will need to be underpinned by trust and transparency between parties, with greater transparency from AI developers in relation to data inputs and the attribution of outputs having an important role to play,” DSIT said. “Our work will therefore also include exploring mechanisms for providing greater transparency so that rights holders can better understand whether content they produce is used as an input into AI models. The government wants to work closely with rights holders and AI developers to deliver this. Critical to all of this work will also be close engagement with international counterparts who are also working to address these issues. We will soon set out further proposals on the way forward.”
Speaking at a House of Lords Communications and Digital Committee evidence session on Tuesday afternoon, Michelle Donelan, the UK secretary of state for science, innovation and technology, was reluctant to share more details about what solution the government is now working towards, but she did confirm that the talks held within the AI copyright code working group had identified some “commonality” on the issue of transparency.
Donelan said: “We will continue to work with both sectors. We think we have a pathway forward that will particularly focus around transparency, but we don’t want to rush and announce something that damages either one of these sectors, so even if it means taking a little bit longer to get these things right, we think that is the right approach.”
Donelan told the committee that the government has not ruled out legislating “to achieve the desired outcome” if that cannot be achieved via non-statutory solutions. She also emphasised the importance of achieving international consensus on the issue of AI and copyright and added that she intends to raise the issue with her US counterpart, secretary of commerce Gina Raimondo, at their next meeting.
Cerys Wyn Davies, an expert in AI and copyright law at Pinsent Masons, welcomed the government’s call for greater transparency from AI developers in relation to data inputs and the attribution of outputs, but said it will be vital for the government to explain soon “whether in addition to transparency and attribution, any commercial and therefore financial recognition is to be awarded to rights holders”.
She added: “It is great to see that the government recognises it is critical to ensure close engagement with international counterparts working to address these issues. This will promote greater consistency and certainty for rights holders and AI developers alike who will want to commercialise and deliver their products internationally.”
Gill Dennis, also of Pinsent Masons, said: “It was always going to be a challenge for stakeholders with such diverging interests to agree. AI developers want unrestricted access to as much data as possible but creative content owners want just the opposite, as protecting royalty income is critical for them. The government has always said that it wants to achieve a balance between protecting the creative industries and encouraging innovation in AI, but the government’s current focus on achieving greater transparency rather than on limitations on the use of copyright works is unlikely to comfort content creators – some of whom have already shown their willingness to enforce their IP rights against AI developers before the courts.”
The government previously contemplated extending the existing text and data mining exception in UK copyright law to support AI developers but stepped back from implementing reforms after pushback from the creative industries. The government has subsequently come under increased pressure to do more to protect the interests of copyright holders in respect of AI development, notwithstanding its desire to support innovation in AI.
The latest example of this came in a new report published by the House of Lords Communications and Digital Committee on Friday, prior to the news about the AI copyright code plans being abandoned. In the report, the committee said discussion on the development of that code should not go on “indefinitely” and that, if the process was not resolved by this spring, the government “must set out options and prepare to resolve the dispute definitively, including legislative changes if necessary”.
The committee also said that AI developers should be required to “make it clear whether their web crawlers are being used to acquire data for generative AI training or for other purposes”, and that there should be “a mechanism for rightsholders to check training data” used by the developers to “provide assurance about the level of compliance with copyright law”.
With the emergence of large language models (LLMs), the committee said it considers that the current copyright framework in the UK is failing to ensure content creators are rewarded for their efforts, that others are prevented from using copyright works without permission, and that innovation is incentivised. It said the government “has a duty to act” and that it should not wait for the courts to develop sufficient case law that answers questions on how AI development interacts with copyright law.
The committee called on the government to “publish its view on whether copyright law provides sufficient protections to rightsholders, given recent advances in LLMs” and to set out options for legislative reform if it “identifies major uncertainty”.
The committee’s report highlighted the differences of opinion between some AI developers and content creators on copyright matters: some publishers are concerned about how their content is being used by AI developers to train their LLMs and the impact this could have on independent journalism and the production of quality news; while AI developers cited their right to conduct text and data mining of “publicly available and legally accessed works” without a licence and warned that limiting the data AI systems can access could lead to poorly performing or biased models and less benefit for users.
To support AI developers, the committee said the government should also work with licensing agencies and data repository owners to “create expanded, high quality data sources at the scales needed for LLM training” – and to further “use its procurement market to encourage good practice”.