US businesses that supply AI tools to EU-based organisations or use AI tools themselves in the EU market will be subject to the EU AI Act when it takes effect.
US businesses could find themselves subject to the EU AI Act – the world’s first comprehensive law on AI – as providers, deployers, importers or distributors of AI systems. Different requirements arise in respect of each role, and the obligations a company faces will also be determined by the type of AI system in question and the potential harm it could pose.
Here, we help US businesses navigate this complex new regulatory framework – one that is likely to shape AI regulatory standards in other parts of the world, much as the EU’s General Data Protection Regulation (GDPR) has shaped data protection standards globally.
The EU AI Act is currently in the final stages of the legislative process and is expected to be adopted by EU law makers in the coming weeks.
The legislation takes the form of an EU regulation, meaning it will have direct effect across EU member states when its provisions begin to apply, though certain actions will be required by the governments of individual EU member states to give practical effect to some aspects of the new regime.
The provisions will not begin to apply immediately after the legislation comes into force – instead, the new rules will take effect in stages over a three-year period.
Milestone dates include:
- the Act will enter into force 20 days after its publication in the Official Journal of the EU;
- the prohibitions on certain AI practices will apply six months after the Act enters into force;
- the rules on general purpose AI models will apply 12 months after entry into force;
- most other provisions, including most of the rules applicable to high-risk AI systems, will apply 24 months after entry into force;
- the rules for high-risk AI systems that are regulated products, or safety components of products, under existing EU product legislation will apply 36 months after entry into force.
The concepts of prohibited AI, general purpose AI models, and high-risk AI are addressed in more detail below.
The EU AI Act introduces a new risk-based system of regulation, which will apply directly to certain business activities across each of the 27 EU member states.
For US companies, the main ways in which the EU AI Act would apply to them are if they:
- place AI systems on the market, or put them into service, in the EU, as providers;
- deploy AI systems in the EU;
- import or distribute AI systems in the EU; or
- provide or deploy AI systems from outside the EU where the output produced by those systems is used in the EU.
As mentioned above, under the EU AI Act’s risk-based system of regulation, some forms of AI will be completely prohibited.
Among the AI systems that will be banned are biometric categorisation systems that use sensitive characteristics, such as political, religious or philosophical beliefs, sexual orientation, or race.
AI that engages in untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases will also be prohibited, as will AI used for emotion recognition in the workplace and educational institutions; AI used for social scoring based on social behaviour or personal characteristics; and AI systems that manipulate human behaviour to circumvent people’s free will.
AI used to exploit the vulnerabilities of people due to their age, disability, or social or economic situation will also be prohibited.
The strictest requirements under the EU AI Act apply to AI systems that fall into the category of ‘high-risk’ AI systems. Our separate guide to high-risk AI systems under the EU AI Act can help US businesses understand whether the AI systems they are dealing with constitute a ‘high-risk’ AI system for the purposes of the legislation.
Where a US business is a provider of high-risk AI systems, it will need to meet strict obligations before the AI system is placed on the EU market or put into service in the EU.
Providers must:
- establish and maintain a risk management system covering the AI system’s lifecycle;
- apply data governance measures to the training, validation and testing data used;
- draw up technical documentation and ensure the system can automatically record logs of its operation;
- provide deployers with instructions for use and design the system so as to enable effective human oversight;
- ensure the system achieves appropriate levels of accuracy, robustness and cybersecurity;
- complete the relevant conformity assessment, register the system in the EU database and affix CE marking before placing it on the market.
US businesses that deploy high-risk AI systems will also face a range of legal obligations.
Deployers must:
- use the system in accordance with the provider’s instructions for use;
- assign human oversight of the system to people with the necessary competence, training and authority;
- ensure, to the extent they control it, that input data is relevant in view of the system’s intended purpose;
- monitor the operation of the system, suspending its use and notifying the provider and relevant authorities where risks or serious incidents arise;
- retain the logs automatically generated by the system;
- inform workers and their representatives before putting a high-risk AI system into use in the workplace; and
- in certain cases, carry out a fundamental rights impact assessment before first using the system.
US companies that distribute or import high-risk AI systems must verify that those systems comply with the AI Act’s requirements before making them available on the EU market.
Distributors and importers, as well as deployers or other third parties, will be considered to be a provider of a high-risk AI system in certain circumstances. This includes where they:
- put their name or trademark on a high-risk AI system that has already been placed on the market or put into service;
- make a substantial modification to a high-risk AI system that has already been placed on the market or put into service, such that it remains a high-risk AI system; or
- modify the intended purpose of an AI system, including a general purpose AI system, that has already been placed on the market or put into service, in such a way that it becomes a high-risk AI system.
A light-touch regulatory regime applies to providers and deployers of ‘lower risk’ AI systems that do not constitute ‘high-risk’ AI systems. These rules take the form of transparency obligations, including requirements to inform individuals that they are interacting with an AI system and to disclose where content has been generated by an AI system.
US businesses that provide ‘general-purpose AI models’ also face separate regulatory responsibilities under the EU AI Act.
A general-purpose AI (GPAI) model is defined as “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market”.
The new rules for GPAI models were added to the EU AI Act draft at a relatively late stage of the legislative process, following a push by law makers to address advances in the development of large language models (LLMs) that form the basis of some of the world’s most popular generative AI systems.
GPAI models are subject to a classification procedure under the EU AI Act, to be operated by the European Commission – if a model is designated as a GPAI model with ‘systemic risk’, it will face more stringent obligations. One of the ways in which a GPAI model could be designated as posing ‘systemic risk’ is where it is considered to have “high impact capabilities”. An annex to the EU AI Act sets out criteria relevant to informing such an assessment.
GPAI models with systemic risk will, for example, need to undergo model evaluation, including adversarial testing; their providers will also have to assess and mitigate systemic risks, report serious incidents, and ensure an adequate level of cybersecurity.
A range of further requirements apply to all GPAI models – including a duty to maintain technical documentation detailing the model’s training and testing process, and to make available information that allows downstream providers to integrate the model into their own AI systems.
Transparency obligations pertaining to copyright also apply – providers of GPAI models must put in place a policy to respect EU copyright law and draw up and make publicly available a sufficiently detailed summary about the content used for training their model.
Some open-source AI models are exempt from the rules applicable to GPAI models, though not in circumstances where they constitute GPAI models with systemic risk.
As with the GDPR, US businesses that fail to comply with the rules under the EU AI Act will face potentially heavy financial penalties.
Businesses that engage in prohibited AI practices could face fines of up to €35 million, or 7% of their annual global turnover, whichever is higher – for a business with €10 billion of annual global turnover, for example, the maximum fine would be €700 million. Lower maximum penalties apply to other types of infringement, but these could still run into hundreds of millions, or even billions, of euros for the largest US businesses.
The maximum fine that could be imposed on providers of GPAI models is €15 million or 3% of worldwide annual turnover, whichever is higher.
Co-written by Bella Phillips of Pinsent Masons. Pinsent Masons recently hosted a joint webinar with US firm Nelson Mullins on the far-reaching implications of the EU AI Act. Interested readers can access a recording of the session.