
The use of artificial intelligence (AI) systems by organisations in the EU is to be subject to risk-based regulation, under new legislation provisionally agreed by EU law makers at the weekend.

The proposed new EU AI Act, agreed by representatives of the European Parliament, the Spanish presidency of the Council of Ministers and the European Commission in the latest trilogue negotiations, would prohibit certain uses of AI and impose requirements on both general-purpose AI (GPAI) systems and ‘high-risk’ AI systems.

Because no consolidated text has yet been published, it is not yet clear precisely what the wording of the AI Act will be. What has been reached is a political agreement on the core elements of the regulation; the technical work of turning that agreement into the final language of the act is still to follow. Ultimately, both the Parliament and the Council must still formally adopt the text – a vote by each institution is necessary before the AI Act can become EU law. Most provisions will only begin to apply two years after the legislation has come into force, although the bans on prohibited AI will take effect six months after enactment and the new transparency requirements a further six months thereafter.

Statements issued by the Parliament and the Council following their deal do, however, provide insight into some of the provisions that have been agreed.

For GPAI systems, and the GPAI models they are based on, new transparency requirements will apply. The Parliament said that “these include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training”.

The Parliament added that “high-impact GPAI models with systemic risk” will face “more stringent obligations”: “If these models meet certain criteria they will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the [European] Commission on serious incidents, ensure cybersecurity and report on their energy efficiency.”

The Council said that “a wide range of high-risk AI systems would be authorised, but subject to a set of requirements and obligations to gain access to the EU market”, under the AI Act. ‘High-risk’ AI use will be subject to “a mandatory fundamental rights impact assessment”, while other requirements will also apply, such as in relation to data quality.

Among the AI systems that will be banned under the new legislation are biometric categorisation systems that use sensitive characteristics, such as political, religious or philosophical beliefs, sexual orientation, or race. AI that engages in untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases will also be prohibited, as will AI used for emotion recognition in the workplace and educational institutions; AI used for social scoring based on social behaviour or personal characteristics; and AI systems that manipulate human behaviour to circumvent people’s free will. AI used to exploit the vulnerabilities of people due to their age, disability, or social or economic situation will also be prohibited, the Council said.

Businesses could face fines of up to €35 million or 7% of their annual global turnover if they use AI in a way that is prohibited. For breaching other obligations, fines of up to €15 million or 3% of annual global turnover could be imposed.

Limitations to and exclusions from the scope of the legislation have been agreed by the law makers, however. For example, an agreed exception means law enforcement agencies will be able to use AI-based biometric identification systems in public spaces for law enforcement purposes under certain conditions. The new rules will not apply to AI systems used for the sole purpose of research and innovation, for military or defence purposes, or to people using AI for non-professional reasons.

Proposals for a new EU AI Act were set out by the European Commission in April 2021 and have since been the subject of intense scrutiny by the Parliament and Council.

In her ‘State of the Union’ speech in September, European Commission president Ursula von der Leyen cited the EU AI Act as one example of “key legislation” her Commission had proposed that she wanted the Parliament and the Council to prioritise for approval. In a statement, she welcomed the deal they have now struck.

“[This] agreement focuses regulation on identifiable risks, provides legal certainty and opens the way for innovation in trustworthy AI,” von der Leyen said. “By guaranteeing the safety and fundamental rights of people and businesses, the Act will support the human-centric, transparent and responsible development, deployment and take-up of AI in the EU.”

Technology law expert Dr Nils Rauer of Pinsent Masons said: “We will closely follow the final steps towards the actual wording of the AI Act. It is to be expected that in the course of January 2024 we will see the first chapters finalised. Thereafter, the entire piece will be submitted to the Parliament and the Council for final approval.”
