Businesses developing and procuring AI tools have been urged to build their understanding of how the new EU AI Act will apply to them, and of the risk management obligations they may be subject to, after the legislation was published in the Official Journal of the EU on Friday.
Frankfurt-based Nils Rauer of Pinsent Masons, an expert in AI regulation, said the EU AI Act is a “highly impactful piece of legislation” that businesses need to respond to now, before its core provisions take effect over the months and years ahead.
Proposals for an EU AI Act, dubbed the world’s first AI law, were published by the European Commission in 2021 and were subsequently subject to intense scrutiny by EU lawmakers in the European Parliament and the Council of Ministers. The AI Act cleared the legislative process in May, ahead of the European Parliament elections, and has now been published in the Official Journal of the EU. It will enter into force on 1 August 2024, though most of its provisions will not take immediate effect.
Rauer said: “The AI Act has undergone substantial changes since it was first proposed by the European Commission, both in terms of language and conceptually. The legislator has married together two conceptual approaches – a risk-based approach relating to AI systems applied in certain areas, and a separate regulatory regime for general-purpose AI (GPAI) models. The latter need to comply with stricter obligations if they involve a systemic risk – something which is not determined in relation to the area of application, but rather on the basis of how powerful the model is.”
“The AI Act has clear extra-territorial effect, meaning companies in the US, UK, or Asia, among others globally, need to consider the new law,” he said.
Under the AI Act’s risk-based approach, some types and uses of AI will be prohibited altogether, while the strictest requirements that remain are reserved for AI systems classed as ‘high-risk’. Pinsent Masons has developed a guide to help businesses understand whether the AI systems they develop or use constitute high-risk AI.
Providers and deployers of ‘high-risk’ systems face regulatory obligations, including in respect of registration, quality management, monitoring, record-keeping, and incident reporting, while the systems themselves will need to meet requirements relating to risk management, data quality, transparency, human oversight, and accuracy. Additional duties will fall on importers and distributors of high-risk AI systems, and on other businesses that supply systems, tools, services, components, or processes that providers incorporate into their high-risk AI systems, for example to facilitate the training, testing and development of the AI model.
In respect of GPAI models, a classification procedure will apply, with GPAI models designated as posing ‘systemic risk’ facing more stringent obligations. Such models will, for example, need to perform model evaluation, including adversarial testing; assess and mitigate systemic risks; report serious incidents; and ensure an adequate level of cybersecurity. All GPAI models will need to maintain technical documentation that includes details of the model’s training and testing process, provide further information that allows others to integrate the models into their own AI systems, and meet transparency obligations pertaining to the rights of copyright holders.
Dublin-based Maureen Daly of Pinsent Masons said: “Businesses need to be mindful of the new obligations imposed under the AI Act because the fines imposed for non-compliance are higher than those under the GDPR.”
Fines of up to €35 million or 7% of the company’s total global annual turnover for the preceding financial year, whichever is higher, can be imposed for breaches of the rules on prohibited AI practices; up to €15 million or 3% of turnover for other violations; and up to €7.5 million or 1% of turnover for the supply of incorrect information to competent authorities.
Most of the provisions of the AI Act will apply from August 2026. However, Chapters I and II of the regulation, which notably include the articles on prohibited AI practices, will apply from February 2025. Some other provisions will apply from August 2025, including the notification obligations pertaining to high-risk AI and the classification of GPAI models.
Rauer said: “Companies need to turn to this piece of legislation now, as there are risk assessment obligations that apply before the new regime takes full effect. Taking into account the time it takes to develop a product involving AI, and the overall development and life cycle of products, businesses need to know what they must comply with when placing a product on the market in 2026 or later.”
Amsterdam-based Wouter Seinen of Pinsent Masons said: “The AI Act is much more about software as we have known it for years than about the generative AI systems and sophisticated LLMs that are on the rise right now. As a result, it is relevant for all companies that use software automation in the course of their business processes.”
"Businesses will welcome the fact that the AI Act provides much more concrete guidance on the question how to comply than the GDPR did in 2016, in large part thanks to the reference to official standards. By incorporating such standards in their compliance framework, organisations can drive both the quality of their processes and mitigate regulatory exposure," Seinen added.
Andre Walter, also of Pinsent Masons in Amsterdam, said: “With the AI Act coming into force, the compliance burden for data processing, whether of personal data or not, will increase significantly. Having a robust compliance framework has never been such an explicit legal requirement as it is now. This is also an important signal to data protection compliance officers to mature their frameworks.”