US businesses that supply AI tools to EU-based organisations or use AI tools themselves in the EU market will be subject to the EU AI Act when it takes effect.

US businesses could find themselves subject to the EU AI Act – the world’s first ever law on AI – either as providers of AI systems, deployers of AI systems, or importers or distributors of AI systems. Different requirements arise in respect of each role, while the obligations companies will face will also be determined by the type of AI system in question – and the potential harm it could pose.

Here, we look to help US businesses navigate what is a complex new regulatory framework – one which is likely to shape regulatory standards pertaining to AI in other parts of the world, much as the EU’s General Data Protection Regulation (GDPR) has done in shaping data protection standards globally.

What US businesses need to know about EU AI Act timelines

The EU AI Act is currently in the final stages of the legislative process and is expected to be adopted by EU law makers in the coming weeks.

The legislation takes the form of an EU regulation, meaning it will have direct effect across EU member states when its provisions begin to apply, though certain actions will be required by the governments of individual EU member states to give practical effect to some aspects of the new regime.

The provisions will not begin to apply immediately after the legislation comes into force – instead, the new rules will take effect in stages over a period lasting three years.

Milestone dates include:

  • the prohibition on certain AI systems will take effect six months after entry into force of the EU AI Act;
  • obligations around the governance of ‘general purpose AI models’ become applicable after 12 months;
  • obligations for high-risk systems begin to take effect after 24 months – though the rules applicable to high-risk AI systems that constitute a product subject to certain existing EU legislation will not take effect until 36 months after entry into force.

The concepts of prohibited AI, general purpose AI models, and high-risk AI are addressed in more detail below.
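
For compliance planning, these milestones are simple date arithmetic from whatever the eventual entry-into-force date turns out to be. The short Python sketch below illustrates the calculation, using a purely hypothetical entry-into-force date as a placeholder – substitute the real date once the final text is published in the Official Journal of the EU:

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # third-party: python-dateutil

# Hypothetical placeholder -- the actual entry-into-force date depends on
# when the final text is published in the Official Journal.
entry_into_force = date(2024, 7, 1)

milestones = {
    "Prohibitions on certain AI systems apply": relativedelta(months=6),
    "GPAI model obligations apply": relativedelta(months=12),
    "High-risk AI system obligations apply": relativedelta(months=24),
    "High-risk AI in existing regulated products": relativedelta(months=36),
}

for label, offset in milestones.items():
    print(f"{(entry_into_force + offset).isoformat()}  {label}")
```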

When US businesses might be subject to the EU AI Act

The EU AI Act introduces a new risk-based system of regulation, which will apply directly to certain business activities across each of the 27 EU member states.

For US companies, the main ways in which the EU AI Act would apply to them are if they:

  • place on the market or put into service AI systems, or place on the market ‘general purpose AI models’, in the EU – irrespective of where they undertake that activity from;
  • deploy AI systems from a place of establishment or location within the EU;
  • provide or deploy AI systems from outside the EU where the outputs of those systems are used in the EU;
  • import or distribute AI systems within the EU;
  • put AI systems on the EU market or into service in the EU alongside their own product under their own name or trade mark.

Some AI is prohibited

As mentioned above, under the EU AI Act’s risk-based system of regulation, some forms of AI will be completely prohibited.

Among the AI systems that will be banned are biometric categorisation systems that use sensitive characteristics, such as political, religious or philosophical beliefs, sexual orientation, or race.

AI that engages in untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases will also be prohibited, as will AI used for emotion recognition in the workplace and educational institutions; for social scoring based on social behaviour or personal characteristics; and AI systems that manipulate human behaviour to circumvent people’s free will.

AI used to exploit the vulnerabilities of people due to their age, disability, or social or economic situation will also be prohibited.

Obligations attaching to ‘high-risk’ AI

The strictest requirements under the EU AI Act apply to AI systems that fall into the category of ‘high-risk’ AI systems. Our separate guide to high-risk AI systems under the EU AI Act can help US businesses understand whether the AI systems they are dealing with constitute a ‘high-risk’ AI system for the purposes of the legislation.

Provider requirements

Where a US business is a provider of high-risk AI systems, it must meet strict obligations before the AI system is put on the EU market or put into service in the EU.

Providers must:

  • undertake a fundamental rights impact assessment and pass through a conformity assessment process;
  • register their AI system in the public EU database for high-risk AI systems;
  • implement risk management and quality management system requirements;
  • ensure good data governance – such as to mitigate the potential for bias and ensure training data is representative;
  • meet transparency obligations, including providing instructions for use and preparing other technical documentation;
  • ensure that AI outputs – such as audio, image, video or text – are detectable as AI generated (see the sketch after this list);
  • develop the AI system in a way which informs end users that they are interacting with an AI system, subject to exceptions;
  • meet requirements around human oversight – such as ensuring that the way the system works is explainable, that there are auditable logs, and that humans are involved in reviewing outputs before they are put into use;
  • meet obligations around accuracy, robustness and cybersecurity – including in relation to testing and monitoring.
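
Some of the provider duties above – making outputs detectable as AI generated and keeping auditable logs – lend themselves to a brief technical illustration. The Python sketch below shows one possible approach: attaching machine-readable provenance metadata to generated content and writing each disclosure to an audit log. The function and field names are our own illustrative choices, not terminology drawn from the Act:

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Append-style audit log supporting the human-oversight obligations.
logging.basicConfig(filename="ai_output_audit.log", level=logging.INFO)

def label_ai_output(content: bytes, model_id: str) -> dict:
    """Attach machine-readable provenance metadata to generated content
    so that downstream systems can detect it is AI generated."""
    record = {
        "ai_generated": True,  # explicit, machine-readable disclosure flag
        "model_id": model_id,  # which system produced the content
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    logging.info(json.dumps(record))  # auditable trail of every disclosure
    return record

# Example: tag a generated image before it is served to an end user.
metadata = label_ai_output(b"<image bytes>", model_id="example-imagegen-v2")
```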

Deployer requirements

US businesses that deploy high-risk AI systems will also face a range of legal obligations.

Deployers must:

  • assign qualified individuals to oversee the AI system, ensuring it is used according to instructions and potential risks are addressed;
  • follow the provider's instructions for using the AI system and ensure they have the necessary data to operate it effectively;
  • actively monitor the AI system's performance, identify potential issues, and make adjustments as needed to maintain accuracy, security, and fairness;
  • complete a fundamental rights impact assessment if they are bodies governed by public law, private entities providing public services, or if the AI systems in question are used to evaluate creditworthiness or credit scores, or for risk assessment and pricing in relation to health and life insurance;
  • provide clear information to users about the AI system's capabilities and limitations, especially for high-risk systems with significant impact;
  • inform individuals about the use of the AI system, with the exception of where AI systems are used to detect, prevent or investigate criminal offences, subject to appropriate safeguards;
  • disclose that 'deep fake' content is AI generated, though some limited exceptions apply;
  • disclose that AI generated text, on matters of public interest, is AI generated, unless there is human review;
  • comply with relevant data privacy regulations regarding the data used by the AI system – including rules designed to minimise the risk of bias and ensure data security;
  • implement measures to mitigate potential risks identified during the risk assessment process. This may involve technical safeguards, training procedures, or adjustments to the AI system itself;
  • maintain records of the AI system's performance, including incidents and corrective actions taken. This is crucial for demonstrating compliance and ensuring accountability (a minimal record-keeping sketch follows this list).
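
By way of illustration, the record-keeping duty in the final item could be supported by something as simple as an append-only incident register. The Python sketch below is a minimal example; the field names and file format are illustrative assumptions rather than anything prescribed by the Act:

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class IncidentRecord:
    """One entry in a deployer's register of incidents and corrective actions."""
    system_name: str
    description: str
    corrective_action: str
    reported_to_provider: bool = False
    occurred_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_record(record: IncidentRecord,
                  path: str = "incident_register.jsonl") -> None:
    # Append-only JSON Lines file: each line is one immutable record,
    # which keeps the register easy to audit and hard to edit silently.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

append_record(IncidentRecord(
    system_name="cv-screening-tool",
    description="Accuracy drop detected for one applicant age group",
    corrective_action="Model rolled back; provider notified",
    reported_to_provider=True,
))
```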

Distributors and importers

US companies that distribute or import high-risk AI systems must verify that those systems comply with the AI Act’s requirements.

Distributors and importers, as well as deployers or other third parties, will be considered to be providers of a high-risk AI system in certain circumstances. This includes where they:

  • place their name or trade mark on a pre-existing high-risk AI system;
  • make substantial modifications to a high-risk AI system;
  • change the intended purpose of an AI system in a way that makes it high-risk.

Other AI systems

A light-touch regulatory regime applies to providers and deployers of ‘lower risk’ AI systems that do not constitute ‘high-risk’ AI systems. It takes the form of transparency requirements, including duties to inform individuals that they are interacting with an AI system and to disclose where content has been generated by an AI system.

What US providers of general purpose AI models need to do

US businesses that provide ‘general-purpose AI models’ also face separate regulatory responsibilities under the EU AI Act.

A general-purpose AI (GPAI) model is defined as “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market”.

The new rules for GPAI models were added to the EU AI Act draft at a relatively late stage of the legislative process, following a push by law makers to address advances in the development of large language models (LLMs) that form the basis of some of the world’s most popular generative AI systems.

GPAI models are subject to a classification procedure, to be operated by the European Commission, under the EU AI Act – if the models are designated as being GPAI models with ‘systemic risk’ then they will face more stringent obligations. One of the ways in which a GPAI model could be designated as posing ‘systemic risk’ is where it is considered to have “high impact capabilities”. An annex to the EU AI Act sets out criteria relevant to informing such an assessment.

GPAI models with systemic risk will, for example, need to perform model evaluations, including adversarial testing; assess and mitigate systemic risks; report serious incidents; and ensure an adequate level of cybersecurity.

A range of further requirements apply to all GPAI models – including a duty to maintain technical documentation that includes details about the model’s training and testing process, and to maintain further information that allows downstream providers to integrate the model into their own AI systems.

Transparency obligations pertaining to copyright also apply – providers of GPAI models must put in place a policy to respect EU copyright law and draw up and make publicly available a sufficiently detailed summary about the content used for training their model.  
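
The Act does not set out the precise format of that training-content summary in the body of the text, and detailed guidance is expected to follow. Purely as an illustration of the kind of artefact a provider might publish, the sketch below assembles a simple machine-readable summary; all dataset names are hypothetical:

```python
import json

# Hypothetical inventory of training sources; in practice this would be
# generated from the provider's own data pipeline metadata.
training_sources = [
    {"name": "Public web crawl subset", "type": "web text",
     "records": 1_200_000_000, "licence": "various / publicly crawled"},
    {"name": "Licensed news archive", "type": "news text",
     "records": 45_000_000, "licence": "commercial licence"},
]

summary = {
    "model": "example-gpai-v1",
    "total_sources": len(training_sources),
    "sources": training_sources,
}

# Published alongside the model, e.g. on the provider's website.
with open("training_content_summary.json", "w", encoding="utf-8") as f:
    json.dump(summary, f, indent=2)
```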

Some open-source AI models are exempt from the rules applicable to GPAI models, though not in circumstances where they constitute GPAI models with systemic risk.

Severe consequences for non-compliance

As with the GDPR, US businesses that fail to comply with the rules under the EU AI Act will face potentially heavy financial penalties.

Businesses that engage in prohibited AI practices could face fines of up to €35 million, or 7% of their annual global turnover, whichever is higher. Lower maximum penalties apply to other types of infringement, but these could still run into the hundreds of millions – or even billions – of euros for the largest US businesses.

The maximum fine that could be imposed on providers of GPAI models is €15 million or 3% of annual worldwide turnover, whichever is higher.
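
Each penalty cap works the same way: the fine ceiling is the higher of a fixed amount and a percentage of annual worldwide turnover. The sketch below, using a hypothetical turnover figure, shows how the two tiers mentioned above compare:

```python
def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Return the higher of the fixed cap and the turnover percentage."""
    return max(fixed_cap_eur, turnover_eur * pct)

turnover = 50_000_000_000  # hypothetical €50bn annual worldwide turnover

# Prohibited AI practices: up to €35m or 7% of turnover, whichever is higher.
print(fine_ceiling(turnover, 35_000_000, 0.07))  # 3_500_000_000.0

# GPAI provider infringements: up to €15m or 3%, whichever is higher.
print(fine_ceiling(turnover, 15_000_000, 0.03))  # 1_500_000_000.0
```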

Co-written by Bella Phillips of Pinsent Masons. Pinsent Masons recently hosted a joint webinar with US firm Nelson Mullins on the far-reaching implications of the EU AI Act. Interested readers can access a recording of the session.
