
Providers and deployers of so-called ‘high-risk’ AI systems will be subject to significant regulatory obligations when the EU AI Act takes effect, facing enhanced thresholds of diligence, initial risk assessment, and transparency, for example, compared to those dealing with AI systems that fall outside this category.

The technology itself will need to comply with certain requirements – including around risk management, data quality, transparency, human oversight and accuracy – while the businesses providing or deploying that technology will face obligations around registration, quality management, monitoring, record-keeping, and incident reporting.

Additional duties will fall on importers and distributors of high-risk AI systems – and on other businesses that supply systems, tools, services, components, or processes that providers incorporate into their high-risk AI systems, such as to facilitate the training, testing and development of the AI model.

However, before exploring the detail of the extensive requirements they might face – and the changes they would need to apply to their policies, practices, and contracts – organisations need to understand which AI systems will, and which AI systems will not, be regulated as ‘high-risk’ AI systems under the EU AI Act. This has become easier for organisations now that the text negotiated by EU legislators has become public and been endorsed by representatives of the governments of EU member states.

What is an AI system and when is it ‘high-risk’ under the EU AI Act?

An ‘AI system’ is defined under the EU AI Act as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

Providers, deployers, importers, and distributors of systems that meet that description need to consider whether those systems further qualify as ‘high-risk’ AI systems, according to the way the EU AI Act is drafted. There are three ways in which the legislation provides for AI systems to be considered ‘high-risk’ – illustrated in the simplified sketch after this list:

  • when the AI system is itself a certain type of product;
  • when the AI system is a safety component of a certain type of product;
  • when the AI system meets the description of listed ‘high-risk’ AI systems.
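
For readers who prefer to see the decision logic laid out, the three routes can be combined into a minimal sketch. This is an illustrative simplification only, not a statement of the Act’s test: the class, field, and function names below (such as AISystem and could_be_high_risk) are hypothetical and invented for readability.

```python
from dataclasses import dataclass


@dataclass
class AISystem:
    """Hypothetical, simplified model of an AI system; for illustration only."""
    is_covered_product: bool            # route 1: the system is itself a product covered
                                        # by listed Union harmonisation legislation
    is_safety_component: bool           # route 2: the system is a safety component of
                                        # such a product
    needs_third_party_assessment: bool  # the relevant product must undergo a third-party
                                        # conformity assessment before EU market entry
    matches_listed_use_case: bool       # route 3: the system meets a description listed
                                        # in the relevant annex to the Act


def could_be_high_risk(system: AISystem) -> bool:
    """Sketch of the three classification routes.

    A True result means 'potentially high-risk' only: route 3 remains
    subject to the exceptions discussed later in this article.
    """
    route_1 = system.is_covered_product and system.needs_third_party_assessment
    route_2 = system.is_safety_component and system.needs_third_party_assessment
    route_3 = system.matches_listed_use_case
    return route_1 or route_2 or route_3
```

For instance, could_be_high_risk(AISystem(False, True, True, False)) returns True, mirroring the safety-component route described below.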

Where an AI system is a product itself

The EU AI Act recognises that an AI system may be another form of product at the same time. Where those products are already subject to certain EU regulation, the EU AI Act provides for AI systems constituting such products to be considered ‘high-risk’ AI systems.

More specifically, if an AI system is itself a product and that product is covered by “Union harmonisation legislation” listed in an annex to the EU AI Act, then the AI system will be deemed to be a ‘high-risk’ AI system if it is required to undergo a third-party conformity assessment before it is placed on the market or put into service in the EU under that Union harmonisation legislation.

In short, the provisions could apply to AI systems that constitute medical devices, industrial machinery, toys, aircraft or cars, among other examples.

Where an AI system is intended to be used as a safety component of a product

Similarly, the EU AI Act recognises that AI systems can also be used as a safety component of a product.

As above, where that product is covered by the “Union harmonisation legislation” and is required to undergo a third-party conformity assessment before it is placed on the market or put into service in the EU under that legislation, then the AI system safety component for that product will be automatically considered to be a ‘high-risk’ AI system.

As well as covering AI systems used as safety components for medical devices, industrial machinery, toys, aircraft, and cars as cited above, the provisions could catch AI systems used as safety components for rail infrastructure, lifts, or appliances burning gaseous fuels, among other examples.

Where the AI system meets the description of listed ‘high-risk’ AI systems

AI systems could also be considered ‘high-risk’ AI systems if they meet the description of any of the AI systems listed in a further annex to the EU AI Act, though this will further depend on the extent of harm the systems pose – see below for more information on exceptions.

Broadly, the list covers AI systems used in eight different contexts:

  • biometrics
  • critical infrastructure
  • education
  • employment
  • access to essential services – both public and private
  • law enforcement
  • immigration
  • administration of justice and democratic processes

Biometrics

Some AI systems that involve the processing of biometric data are entirely prohibited under the EU AI Act.

For example, AI-based biometric categorisation systems that categorise people based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation are banned – subject to some limited exceptions in the context of law enforcement.

The use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage is also prohibited.

Further tight restrictions apply to the use of ‘real-time’ AI-based remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, and to AI systems that infer people’s emotions in the workplace or in educational settings.

Where AI systems used for biometric identification, biometric categorisation, or emotion recognition are not prohibited, they could nevertheless be classed as ‘high-risk’ AI systems and regulated as such.

In respect of AI-based ‘remote biometric identification systems’, the EU AI Act cites the risk of “biased results” and “discriminatory effects” arising from their operation. As a result, the legislators have decided that unless such a system is intended to be used purely to verify that a person is who they claim to be, it will constitute a ‘high-risk’ AI system. This means the classification will turn, in part, on the intended purpose for which such systems are to be used.

Biometric systems which are intended to be used solely for the purpose of enabling cybersecurity and personal data protection measures are outside the scope of the rules applicable to ‘high-risk’ AI systems.

Critical infrastructure

AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic and the supply of water, gas, heating and electricity could also be considered to be ‘high-risk’ AI systems.

In recitals to the EU AI Act, the legislators justify this classification by stating that the failure or malfunction of such systems could “put at risk the life and health of persons at large scale and lead to appreciable disruptions in the ordinary conduct of social and economic activities”.

The legislators have also confirmed that AI systems intended to be used solely for cybersecurity purposes in the context of the safety of critical infrastructure, such as systems for monitoring water pressure or fire alarm controlling systems in cloud computing centres, will not be classed as ‘high-risk’ AI systems.

Education

AI systems intended to be used to determine access or admission or to assign people to education or training institutions are classed as potential high-risk AI systems under the EU AI Act. So are AI systems intended to be used to evaluate learning outcomes, or for the purpose of assessing the appropriate level of education that individuals will receive or will be able to access.

AI systems intended to be used for monitoring and detecting students who are cheating in tests could also constitute high-risk AI systems.

The legislators said it is appropriate to consider such systems as high-risk as “they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood”. They have warned that, when improperly designed and used, such systems “can be particularly intrusive and may violate the right to education and training as well as the right not to be discriminated against and perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation”.

Employment

AI systems intended to be used in the recruitment or selection process could also be considered to be high-risk AI systems. The legislation lists example uses in this context, including to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates.

AI systems intended to be used to make decisions affecting the terms of work-related relationships, the promotion or termination of work-related contractual relationships, the allocation of tasks based on individual behaviour or personal traits or characteristics, or the monitoring and evaluation of the performance and behaviour of persons in such relationships will also potentially be high-risk AI systems.

The legislators have said that such systems “may appreciably impact future career prospects, livelihoods of these persons and workers’ rights” and that there is a risk that their use could “perpetuate historical patterns of discrimination” or undermine “fundamental rights to data protection and privacy”.

Access to essential services – both public and private

AI systems that are intended to be used to evaluate eligibility for essential public services or benefits – or for the granting, reducing, revoking or reclaiming of such services or benefits – could also be considered high-risk AI systems under the EU AI Act. This covers, for example, AI systems used in determining access to healthcare or housing services or for access to maternity benefits. EU legislators have said such systems should be classed as high-risk as they “may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy”.

AI systems intended to be used to evaluate a person’s creditworthiness or to establish their credit score are also among those that could be regulated as high-risk AI systems, unless those systems are used for detecting financial fraud.

The EU AI Act also lists AI systems intended to be used for risk assessment and pricing of life and health insurance as potential high-risk AI systems, as well as AI systems used for prioritising how emergency services are deployed.

Law enforcement

The EU AI Act cites a long list of examples in which AI systems used in a law enforcement context could be considered ‘high-risk’ AI systems.

The list includes AI systems intended to be used to assess individuals’ risk of becoming a victim of crime – or of offending or re-offending – as well as those used as polygraphs or similar tools. It also covers AI systems used to evaluate the reliability of evidence in criminal investigations or prosecutions, or for profiling individuals in the course of detection, investigation or prosecution of criminal offences.

Immigration

The list of AI systems that could be considered ‘high-risk’ in the context of migration, asylum and border control management includes those intended to be used: to assess individuals’ health or security risk upon entering an EU country; to examine applications for asylum, visa and residence permits; or for the purpose of detecting, recognising or identifying individuals. An AI system used simply to verify an individual’s travel documents will not be considered a ‘high-risk’ AI system.

Administration of justice and democratic processes

AI systems intended to be used by the courts or another dispute resolution body to help them research and interpret facts and the law, or to apply the law to a concrete set of facts, might also be considered ‘high-risk’ AI systems.

Other AI systems intended to be used for influencing the outcome of an election or referendum, or people’s voting behaviour, could also be considered high-risk AI systems. However, this does not include AI systems whose outputs individuals are not directly exposed to. This, the legislators have decided, includes “tools used to organise, optimise and structure political campaigns from an administrative and logistic point of view”.

Exceptions and profiling

Many AI systems will not be subject to the regulatory framework the EU AI Act will otherwise establish.

For example, broad-brush exceptions apply to AI systems, and their output, used exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities; to AI systems and models, including their output, specifically developed and put into service for the sole purpose of scientific research and development; to any research, testing and development activity regarding AI systems or models before they are placed on the market or put into service; and to deployers who are individuals using AI systems in a purely personal, non-professional activity.

A further set of exceptions applies specifically to AI systems that otherwise meet the description of listed potential ‘high-risk’ AI systems under the Act.

In those cases, such AI systems “shall not be considered as high risk if they do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making”.

To elaborate on that derogation, the Act further provides that – subject to one final condition explained below – AI systems will be considered not to pose a significant risk of harm if they meet at least one of four criteria. Those criteria are that:

  • the AI system is intended to perform a narrow procedural task;
  • the AI system is intended to improve the result of a previously completed human activity;
  • the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or
  • the AI system is intended to perform a preparatory task to an assessment relevant for the purpose of the use cases [otherwise listed as potential high-risk AI uses in the relevant annex to the Act].

However, the EU AI Act is also clear that any AI system will automatically be considered to be a ‘high-risk’ AI system if the AI system performs profiling of individuals.
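
Read together with the profiling rule, the derogation amounts to a short piece of boolean logic: a system matching a listed description escapes the ‘high-risk’ label only if at least one of the four criteria applies and it does not perform profiling of individuals. The sketch below is a hypothetical illustration of that reading; the parameter names are invented and do not come from the Act.

```python
def listed_system_is_high_risk(
    performs_narrow_procedural_task: bool,
    improves_prior_human_activity: bool,
    detects_patterns_without_replacing_human_assessment: bool,
    performs_preparatory_task: bool,
    performs_profiling: bool,
) -> bool:
    """Illustrative sketch of the derogation for listed AI systems.

    Assumes the system already meets a description in the relevant
    annex to the Act; a simplified reading, not legal advice.
    """
    # Profiling of individuals overrides the derogation entirely.
    if performs_profiling:
        return True
    # Otherwise, meeting any one of the four criteria takes the system
    # outside the 'high-risk' classification.
    exempt = (
        performs_narrow_procedural_task
        or improves_prior_human_activity
        or detects_patterns_without_replacing_human_assessment
        or performs_preparatory_task
    )
    return not exempt
```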
