Out-Law News 2 min. read
20 May 2024, 3:59 pm
The UK Information Commissioner’s Office (ICO) has set out its strategic approach to artificial intelligence (AI) regulation, listing AI’s application in biometric technologies, protection of children’s privacy and online tracking as its three focus areas for 2024-25.
Although there have been some recent indications that the UK government is actively considering AI legislation, the official position remains as set out in a March 2023 AI white paper. The paper noted that “rigid and onerous” legislative requirements on businesses could hold back AI innovation. Instead, existing regulators, such as the ICO, will issue principles on a non-statutory basis, applying domain-specific expertise to the context in which AI is used.
Malcolm Dowden, data protection and privacy expert at Pinsent Masons, said: “In practice, this means that existing regulators are having to find the limits of their sectoral remits and regulatory reach and determine when, and how, they must learn to cooperate with others.”
The ICO is one of 13 UK regulators to publish its strategic approach to regulating AI (22-page / 270KB PDF) ahead of the 30 April deadline set by the government. In the white paper, the government set out its vision for a “proportionate, pro-innovation approach to AI regulation” aimed at allowing businesses to harness the opportunities of AI technologies while safeguarding against their potential risks.
The ICO’s approach includes plans to seek views on how biometric classification technologies, such as those used to draw inferences about people’s emotions and characteristics, should be developed and deployed.
The regulator also intends to consult on updates to its guidance on AI and data protection, as well as automated decision-making and profiling, by spring of next year. This will allow the ICO guidance to reflect changes made to data protection law following the passage of the Data Protection and Digital Information Bill, which is currently before parliament.
The ICO’s approach includes plans to support a number of AI-related projects, including a system to help prevent falls in the elderly, personalised AI for those affected by cancers, AI to help identify individuals who may be at risk of domestic violence, and AI used to remove personal data from drone images.
The regulator plans to issue reports on the outcomes of its engagements with providers of AI recruitment solutions, and has highlighted that AI practices will form part of future audits of technologies used in the education sector and in services such as youth prisons.
Other UK regulators published their own AI regulatory strategies in the weeks leading up to the 30 April deadline. These are the Bank of England; the Competition and Markets Authority (CMA); the Equality and Human Rights Commission (EHRC); the Financial Conduct Authority (FCA); the Health and Safety Executive (HSE); the Legal Services Board (LSB); the Medicines and Healthcare products Regulatory Agency (MHRA); the Office for Nuclear Regulation (ONR); Ofsted; Ofcom; Ofgem; and Ofqual.
The FCA’s approach (26 pages / 400KB) incorporates a summary of its work so far as well as plans for the next 12 months. The FCA intends to scrutinise the systems and processes firms have in place and has set out plans to take an evidence-based view of whether regulatory expectations are being met.
The CMA’s strategy lists forthcoming changes to the authority’s powers as well as details of its work with others on AI issues. The CMA also addresses the competition risks posed by AI, its understanding of those risks and how they can be tackled. This includes concerns about potential risks to fair, open and effective competition posed by AI foundation models – the underlying models that power AI applications.
The regulators’ updates will help shape the government’s adaptive approach as part of its work to ensure an effective cross-sector framework.
The government has announced various measures to support regulators in their approach. These include £10 million of funding aimed at jumpstarting regulators’ AI capabilities, new guidance on how to begin implementing the cross-sectoral AI principles, proposals for a steering committee to oversee the development of the regulatory frameworks, and additional funding for a new cross-regulatory hub offering support to AI innovators.