EU Artificial Intelligence Act

Does your company market or use AI systems?

In March 2024, the EU Parliament adopted a regulation that governs the public and commercial use of such systems. The regulations, also known as the AI Act or EU AI Act, categorise AI systems into four risk classes. The higher the risk, the stricter the requirements. The regulation is expected to come into force in the middle of the year, with individual regulations coming into force six months later.

The EU AI Act is intended to protect people, institutions and the environment from the risks of state or commercial AI use and ensure that AI systems operate in a safe, transparent, egalitarian and environmentally friendly manner.

The law categorises the risks posed by such systems into four classes:

Class 1: Unacceptable risk

The regulation prohibits unreasonably or unacceptably risky AI apps. This includes systems that implicitly violate fundamental rights, for example by exploiting human weaknesses to manipulate behaviour, selecting people according to biometric or social characteristics (social scoring), or monitoring or suspecting people without cause. This also extends to the unauthorised, indiscriminate reading and collection of faces from online images or surveillance cameras as well as the machine interpretation of human emotions in the workplace or in educational institutions. Only official law enforcement officers enjoy – strictly regulated – special powers here.

Class 2: High-risk systems

High-risk AI systems are those whose use does not automatically harm health, safety, fundamental rights, the environment, democracy or the rule of law, but which are susceptible to deliberate or negligent misuse. They are typically used to manage critical infrastructure, material or non-material resources or personnel.

Show more

Such systems will be subject to strict conditions in future. Take lending, for example: according to the AI Act, banks are not allowed to let the machine alone decide on the customer’s creditworthiness. A human must check the score calculated by the machine and be responsible for approving or rejecting the loan.

Manufacturers of such high-risk systems must test them thoroughly before launching them on the market, importers and downstream retailers must ensure that the systems comply with the law, and users must monitor their use. According to the law, the final decision-making and supervisory authority remains the human being. The regulation gives the addressees of the decisions of such systems rights of objection, information and appeal. High-risk systems include in particular

  • AI systems as components of products subject to the EU Product Safety Regulation
  • AI systems from one of the following eight categories:
    • Recognition and classification based on biometric features
    • Operation of critical infrastructure
    • Education and vocational training
    • Labour, personnel management, access to self-employment
    • Basic provision of public or private services
    • Law enforcement
    • Migration, asylum, border control
    • Legal advice

 

Class 3: Transparency risk

The EU legislator categorises AI as moderately risky, or at least non-transparent, if it does not conflict with fundamental rights but leaves users in the dark about the nature and sources of the service. This applies to chatbots, but above all to so-called generative AI, i.e. programmes that generate artificial texts, images or videos (e.g. deepfakes). According to the law, such apps must identify themselves as machines, label their products as artefacts, document training data and its sources, protect the copyrights of the sources and prevent the generation of illegal content.

Class 4: Low risk

No restrictions apply to simple AI systems such as spam filters or recommendation services.

Leuchtturm

Quality assurance for AI applications?

Lighthouz AI

Addressees, obligations and sanctions

The use of systems in risk class 1 must be terminated just six months after the ordinance comes into force, i.e. probably by the end of the year. Further provisions will apply after 12 and 24 months respectively, and after 36 months the AI Act will apply in full. In addition to users, the target group includes manufacturers, importers and downstream dealers. Violations of the regulation can result in fines of between one and seven per cent of annual global turnover, depending on their severity. Companies should therefore immediately check what obligations they have under the law and prepare for them in good time.

Betreiber KI

Operator

Natural or legal person (public authority or company) that develops an AI system or has one developed in order to sell or operate it in its own name or under its own brand, either for a fee or free of charge.

Einführer KI

Importer

A natural or legal person resident or established in the European Union who sells or operates in the Union an AI system bearing the name or trade mark of a manufacturer resident or established outside the EU. Obligations:

  • check that the manufacturer has assessed the EU conformity of its AI product and that it bears a CE mark
  • Check whether the manufacturer’s technical documentation is available
  • enter their own (trade) names in the AI system
  • submit proof of conformity to authorities on request
Händler KI

Distributor

Natural or legal person who distributes an AI system manufactured or imported by a third party within the EU without changing its characteristics. Obligations:

  • Check for CE mark and documentation
  • If it is suspected that an AI system is not (or no longer) EU-compliant: suspend distribution, report to authorities
  • Submit proof of conformity to authorities on request
Nutzer KI

User

Natural or legal person who uses an AI system on their own initiative for professional, commercial or official purposes. Obligations:

  • compliant use of the AI system
  • Monitoring of the system in accordance with the manufacturer’s instructions
  • In the event of suspected malfunction or risk: suspend use, report to dealer or manufacturer
  • Retention of the logs generated by the system

My chatbot is hallucinating!

With AI, the accent is on the C, not the I – the machine doesn’t really know what it’s doing. What has so far been sometimes curious, often annoying, embarrassing, sometimes harmful, will become one thing above all else according to the will of EU legislators: illegal. Systems that play with human dignity must be scrapped quickly. AI applications that are categorised as responsible but risky must be constantly monitored.

The US start-up Lighthouz has set itself the task of uncovering the malfunctions of such systems. Together with its American colleagues, Consileon supports you with the legally required tests. Lighthouz AI tests AI systems for:

  • Coherence: how does the AI perform in longer dialogues (multi-turn conversation) with humans?
  • Reliability: Does the AI adhere to programmed rules or does it tend to hallucinate?
  • Security: Is the AI susceptible to so-called prompt injection?
  • Privacy: Are the operators of the AI protected against data leaks?

Essentially, Lighthouz AI generative AI tests how closely the machine-generated answer to a complex question matches the user’s expectations. To do this, Lighthouz AI evaluates the result using syntactic and semantic metrics.

If you would like to know exactly how Lighthouz AI’s testing process works, what you need and how it can help you comply with the AI Regulation, our AI experts look forward to hearing from you.

Leuchtturm

Quality assurance for AI applications?

Lighthouz AI

Does the AI Regulation affect your company?

Consileon helps to develop, market or use AI applications in a legally compliant manner and to adapt legacy systems to the new legal situation.

"*" indicates required fields

This field is for validation purposes and should be left unchanged.

More on this topic

Further information on the AI Regulation can be found at the European Parliament. A video by MDR provides a brief introduction.

 

This migh interest you

Information security at the highest level

With TISAX, the ENX Association offers a standardised platform for the mutual acceptance of information security assessments in the automotive industry on behalf of the German Association of the Automotive Industry (VDA).