The author of this insight was a Research Intern at the Indian Society of Artificial Intelligence and Law.

The UK economy is driven by many creative industries, including TV and film, advertising, performing arts, music publishing and video games, which together contribute nearly £124.8 billion in GVA to the economy annually. The rapid development of AI over recent years has sparked a debate, globally and within the UK, about the challenges and opportunities it brings. This has led to serious concerns within the creative and media industries about their work being used to train AI models without permission, and about media organisations being unable to secure remuneration through licensing agreements. There has also been a lack of transparency from AI developers about the content used to train their models, while these firms raise their own concerns about the lack of clarity over how they can legally access data for training. These concerns are hindering AI adoption, stunting innovation, and holding the UK back from realising the full potential of AI. The UK government's consultation document highlights the need to work in partnership with both the AI and media sectors, and to ensure greater transparency from AI developers in order to build trust between developers and the creative industries.
Focus Areas of the Consultation
The key pillars of the UK government's approach to copyright and AI policy include transparency, technical standards, contracts and licensing, labelling, computer-generated works, digital replicas and other emerging issues. The government aims to tackle the copyright challenges posed by AI by ensuring that AI developers are transparent about the data used to train their models. It seeks views on the level of transparency required to build trust between AI companies and organisations in the creative industries. Establishing technical standards would help improve and standardise the available tools, making it easier for creators to reserve their rights and for developers to respect those reservations. Moreover, licensing frameworks need to be strengthened to ensure that creators receive fair remuneration while AI developers gain access to the training material they need. Labelling measures would help distinguish AI-generated content from human-created work, giving consumers greater clarity. Additionally, the protection of computer-generated works needs to be aligned with modern AI capabilities so that fairness is ensured. Finally, addressing digital replicas, such as deepfakes, is essential to protect individuals' identities from misuse.
