
Out-Law News

EU AI Act: prohibited AI practices ‘should be an initial focus for businesses’


Organisations have been urged to check whether their use or marketing of artificial intelligence (AI) systems will be banned in EU countries once that ban takes effect in early 2025.

Dr Nils Rauer of Pinsent Masons in Frankfurt said publication of the EU AI Act in the Official Journal of the EU (OJEU) last week should spur organisations to undertake the review.

The EU AI Act, dubbed the world’s first AI law, will introduce a new risk-based approach relating to AI systems applied in certain areas, under which some AI practices will be entirely prohibited, as well as a separate regulatory regime for general-purpose AI (GPAI) models. Both concepts are married together in the regulation.

Different parts of the AI Act will take effect at different times, with provisions on prohibited AI practices among the first to take effect – six months after the EU AI Act enters into force, on 2 February 2025.

Article 5 of the EU AI Act lists a whole range of prohibited AI practices, which Rauer said should be an initial focus of businesses seeking to ensure compliance with the new legislation.

One set of prohibited AI practices listed concerns the placing on the market, putting into service, or use of AI systems that can subliminally influence, or purposefully manipulate or deceive, people. The ban applies to such systems if they have the object or effect of materially distorting people’s behaviour and appreciably impairing their ability to make an informed decision, causing them to take a decision they would not otherwise have taken in a manner that causes, or is reasonably likely to cause, that person, another person or a group of persons significant harm.

Anna-Lena Kempf, an AI expert at Pinsent Masons, added: “Influencing people in an undue and inappropriate way without the person actually being aware of such influence is already unlawful in the context of advertising, while the principle of informed consent under data protection law also provides that consent is deemed only to be valid if given free from undue influence. These existing principles are now being extended to rules governing the permissibility of AI tools being deployed.”

Another prohibited practice is the placing on the market, the putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage. Rauer said he expects these provisions to take on “profound importance”.

Rauer said: “This prohibited practice is referred to in recital 43 of the EU AI Act in the context of the creation of a feeling of mass surveillance and gross violations of fundamental rights, including the right to privacy. The question, however, is how the concept of ‘untargeted’ scraping is to be interpreted. There is a risk of targets being set in a way to circumvent this ban.”

Other prohibited AI practices are the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm.

“A risk for businesses here is that they are caught by the ban even if they do not intend for their AI systems to be used in the way that is prohibited, since the ban applies to the ‘effect’ of the use and not just the purpose – the ‘object’,” Kempf explained.

The EU AI Act also imposes a ban on using AI to categorise human beings based on their social behaviour or known, inferred or predicted personal or personality characteristics, where the social score then leads to certain prescribed detrimental or unfavourable treatment of those individuals or groups.

Kempf said she expects the provisions in this area to be subject to litigation.

Rauer added: “There is already lots of scoring and classification that takes place – connecting with a targeted peer group first requires some selection of who to try to engage with. In the context of mass communication, AI tools can be highly efficient. Therefore, I expect to see quite some discussion and also lawsuits that will be needed to define the line between permissible and banned practices in this regard.”

The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces by law enforcement agencies will also generally be prohibited under the EU AI Act, although the legislation makes provision for the technology to be deployed for certain listed purposes – such as in the prevention of terrorist attacks – subject to certain safeguards.

For such uses, other than in urgent cases, prior authorisation must be granted by a judicial authority or an independent administrative authority, which must satisfy itself as to the necessity and proportionality of the proposed use in accordance with listed factors. Use must also be registered in an EU database and notified to the relevant market surveillance and data protection authorities.

Rauer said: “It will be important to make sure those who are asked to look into the matter and to decide whether or not to approve the use of AI are adequately familiarised with the technology and its power. What might be a little overkill is the obligation to notify both the relevant market surveillance authority and the national data protection authority. This doubling of administration will cause a lot of additional work.”

Other prohibited AI practices listed under the EU AI Act include the use of biometric categorisation systems to categorise people based on their biometric data or to infer sensitive characteristics, such as their political, religious or philosophical beliefs, sexual orientation, or race. Using AI systems to infer the emotions of an individual in the areas of workplace and education institutions is also prohibited, except where use is intended for medical or safety reasons.
