MEPs have moved to regulate ‘foundation models’ under the proposed new EU Artificial Intelligence (AI) Act, citing their “growing importance to many downstream applications and systems”.

Under proposals adopted (144-page / 919KB PDF) by the Internal Market Committee and the Civil Liberties Committee at the European Parliament, providers of foundation models would face a series of new obligations over the way the models are designed and developed, and in relation to the data they use. The MEPs define a foundation model as an AI model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks.

For example, providers would face a duty to “demonstrate through appropriate design, testing and analysis the identification, the reduction and mitigation of reasonably foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law prior and throughout development”, as well as to draw up “extensive technical documentation and intelligible instructions for use” to help those that build AI systems using the foundation model to meet their own legal obligations.

Providers of foundation models would further be required to meet obligations around data governance, ensure “appropriate levels” of performance, predictability, safety and cybersecurity, and conform to a range of sustainability standards. They would also need to register their foundation models in an EU-wide database.

Providers of ‘generative’ foundation models, which are used in AI systems specifically for the purposes of generating content, would face further obligations. These include providing transparency over when content has been created by an AI system and not a human, and making publicly available a sufficiently detailed summary of the use of training data protected under copyright law.

The obligations on providers of foundation models would apply regardless of whether the model is provided on a standalone basis or embedded in an AI system or a product.

Technology law expert Luke Scanlon of Pinsent Masons said that the move to regulate foundation models comes amidst the growing attention being given to ChatGPT, Bard and other systems powered by foundation models. He questioned whether the approach the EU is taking to regulation is the right one, however.

Scanlon said: “The EU legislators have largely focussed on matching existing product liability regulatory frameworks and process with the regulation of AI. This approach is arguably at odds with the ways in which AI models are developed.”

“In a world where 150,000+ open-source models are accessible by any AI developer across the world, managing risk and regulating outcomes of use, rather than treating the model as a ‘product’ and including requirements designed to check that it fits stagnant conformity rules, seems to be the only plausible way forward,” he said.

“In UK financial services, regulators had a clear focus on model risk management for some time, and the regulatory approach towards AI has been to promote model risk management, regulate the use of data, and ensure there is robust governance within an organisation developing AI. The EU approach is different. It is seeking to define what an ‘AI system’ is and develop a process through which that productised system could undertake a detailed conformity assessment. The idea being that if it passes the rigorous conformity assessment, then it has a seal of approval and can be put to use. They now intend to apply this approach to foundation models in addition to ‘AI systems’,” he said.

Proposals for a new EU AI Act were set out by the European Commission in April 2021 and have since been the subject of intense scrutiny by the EU’s main law-making institutions – the European Parliament and the Council of Ministers.

The Commission’s proposals seek to regulate AI in accordance with the level of risk those systems are deemed to present to people. Under its plans, AI that poses an “unacceptable risk” to people would be prohibited, while the bulk of the regulatory requirements would apply to ‘high-risk’ AI systems, including obligations around the quality of data sets used, record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity. ‘Low-risk’ AI systems would be subject to limited transparency obligations.

“The proposed amendments to the Commission’s initial draft are very timely”, said Frankfurt-based Nils Rauer of Pinsent Masons, who specialises in advising businesses on digital transformation. “One of the most pressing questions companies and administrative bodies are currently addressing in the context of ChatGPT and other AI-based applications is about the future regulatory framework governing this field of digital transformation.”

Rauer said the need for a sustainable regulatory framework for AI has been debated for years. He cited as a notable recent intervention the call by entrepreneurs such as Elon Musk and Steve Wozniak for a pause on the development of ever-more-powerful AI solutions, to enable safety protocols and governance frameworks to catch up with the rate of innovation.

Rauer also highlighted that questions of whether and how generative AI models such as ChatGPT could be brought within the scope of the EU-level AI regulation were recently debated in a session held by the standing committee on digitisation in the Bundestag, Germany’s parliament (link in German).

With their proposals, the two European Parliament committees are seeking to add specific further obligations for foundation models. In general, foundation models would not be classed as ‘high-risk’ AI systems unless they are “directly integrated in [a] high-risk AI system”.

Beyond proposing amendments relating to foundation models, the MEPs suggested extending the list of AI uses that would be prohibited under the AI Act. They also proposed amendments to the criteria for ‘high-risk’ AI systems – under their proposals, a system would have to pose a significant risk of harm to people’s health, safety or fundamental rights to be categorised in this way.

Providers would be obliged to notify regulators if they do not consider their systems to pose a ‘significant risk’, with the potential for penalties to be issued if systems are put into use but are subsequently found to have been misclassified.

The MEPs have also proposed making the obligations for high-risk AI providers much more prescriptive, notably in relation to risk management, data governance, technical documentation and record keeping. In addition, a completely new requirement has been proposed for users of high-risk AI solutions to conduct a fundamental rights impact assessment considering aspects such as the potential negative impact on marginalised groups and the environment.

The committees’ draft also provides for overarching principles to apply to all AI systems. Those principles are human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; and social and environmental wellbeing.

Technology law expert Sarah Cameron of Pinsent Masons said the inclusion of the overarching principles suggests EU law makers are taking notice of the approach other policymakers are taking globally on the issue of AI regulation.

“Many in the tech industry and beyond prefer the UK’s principles-based approach, with interpretation and implementation left to vertical regulators focusing on the context and use, rather than the EU’s horizontal, cross-sector rules focused on levels of risk around specific systems – in what is a more tech-focused approach,” Cameron said.

“Some of the changes proposed in the European Parliament’s latest draft just might point to a softening in approach or a nod to the different, lighter touch, principles-based approaches emerging in other countries such as UK, US, Singapore and Japan, and the need for collaboration and interoperability,” she said.

“For example, a new preamble that the OECD-based definition should be closely aligned with the work of international bodies working on AI to ensure legal certainty, harmonisation and wide acceptance is notable, as is the adoption of overarching principles applicable to all AI systems, and the raising of the bar for what qualifies as a high-risk system. It is vital that we do see effective and productive co-operation at the international level if we do want to see a pro-innovation, confidence-building approach that is navigable by AI developers and users, particularly as the EU may well find it is not setting the global standard for AI regulation as it did with GDPR,” Cameron said.

The proposals of the two parliamentary committees are expected to be adopted by the full European Parliament in a vote scheduled to take place between 12 and 15 June. Once the Parliament adopts its position, it will be ready to open so-called trilogue negotiations on finalising the text with the Commission and the Council of Ministers, which adopted its own draft text in late 2022.
