A New Law Governing Artificial Intelligence

Author: Joachim Geiger

Oct 16, 2024 Cyber Security / Artificial Intelligence

DEKRA recently became part of an association of companies and organizations that are working with the EU Commission to implement new AI regulations. But what is this set of rules all about? We spoke to DEKRA expert Elija Leib to find out.

DEKRA is now taking its commitment to responsible AI development one step further: in the company of around 110 stakeholders from industries ranging from automotive and aviation to IT and telecommunications, the expert organization signed the Artificial Intelligence (AI) Pact in Brussels on 25 September 2024. The AI Pact is an EU Commission initiative aimed at paving the way for the best possible implementation of the AI Act, which came into force in the EU at the beginning of August 2024. At the core of the initiative is a voluntary commitment by companies and organizations that, among other things, provides for the development and implementation of an AI governance strategy and for working consistently towards future compliance with the AI Act.
“This voluntary commitment reflects DEKRA's long-standing dedication to shaping a future in which AI contributes to social progress while minimizing risks,” explains Elija Leib, who has closely followed the development of the new AI law for DEKRA as Senior Policy Manager in Brussels. This naturally raises the question of why the new rules are causing such a stir in the first place. In other words, what is the story behind the AI Act - or, more precisely, the “Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence”?

Unacceptable risks - a red flag for social scoring by AI systems

“The AI Act is the world's first comprehensive regulation of AI and a milestone for Europe. In principle, it deals with the question of which AI systems have the potential to harm people or impair their fundamental rights,” says Elija Leib, summing up the basic idea behind the regulation. The law does not lump all AI systems together, however - only systems that pose a high risk to the general public are subject to strict regulations or banned outright. Most current AI systems fall into neither of these two categories. The regulation shows a red flag to AI systems that could cause physical or psychological harm to people, or that exploit the vulnerability of a group of people due to their age or a physical or mental disability. The list of prohibited practices also includes AI systems that assess the trustworthiness of people based on their social behavior, personal characteristics or personality traits (social scoring). The central focus of the AI Act, however, is on AI systems with a high level of risk.

Categories of AI - minimal and limited levels of risk

Beyond unacceptable and high-risk AI, the EU regulation recognizes further categories. AI in a spam filter, in a video game or in a system for the predictive maintenance of machines is comparatively unproblematic; a risk assessment would typically classify such applications as minimal risk. In contrast, AI systems that interact with humans generally pose a limited risk. This is the case, for example, with chatbots on the internet that handle service or help requests. As part of their transparency obligations, developers and operators must ensure that customers are aware they are communicating with an AI.
“High-risk AI systems are enormously complex and therefore come with levels of risk that are difficult to predict,” explains Elija Leib. Such systems are already being used in the administration and operation of critical infrastructure - including road traffic and the supply of water, gas, heat and electricity. HR service providers and banks are also likely to have to classify their AI-based systems by risk level in the future. A system for selecting job applicants, for example, would have to ensure that its training data does not encode social prejudices, say by systematically disadvantaging applicants with certain names or from certain regions (see the sketch below). Ultimately, developers and operators of high-risk AI are subject to extensive requirements, with the former bearing the heavier burden. These include comprehensive requirements for technical documentation, data and risk management, as well as maintenance and monitoring.
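What such a check for systematic disadvantage might look like in practice is sketched below in Python. This is a minimal illustration only: the records, group labels and the 80-percent threshold (a common rule of thumb in fairness testing, not a requirement of the AI Act) are assumptions made for the example.

```python
# Minimal sketch: flag groups whose selection rate in historical hiring
# data falls well below the best-performing group's rate.
# Records, group labels and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

# Hypothetical training records: (applicant_group, was_selected)
records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, was_selected in records:
    totals[group] += 1
    selected[group] += was_selected

rates = {g: selected[g] / totals[g] for g in totals}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best  # selection rate relative to the best group
    status = "OK" if ratio >= 0.8 else "check for potential bias"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {status}")
```

A real conformity assessment would of course go far beyond such a spot check, covering the technical documentation, data and risk management, and monitoring obligations mentioned above across the system's life cycle.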

High-risk AI - DEKRA is well equipped to assess conformity

Before an AI system can be placed on the market or put into operation, there is an important formality on the agenda: the AI Act requires a conformity assessment by a notified body to ensure a high level of trustworthiness. “DEKRA is perfectly equipped to carry out this task in terms of independence, expertise and Cyber Security,” says Elija Leib. However, it is likely to be some time before the inspection organization's AI experts can hit the ground running. The EU member states must first clarify, by August 2025, who is responsible for notifying the conformity assessment bodies in their countries. “The timetable is pretty tight, because the rules for high-risk systems will already apply in August 2026,” reports Elija Leib.
And what does the new law say about popular large language models (LLMs) such as ChatGPT (OpenAI) and Gemini (Google), which can process audio and text data and generate text and images? “The regulation classifies such systems as general-purpose AI (GPAI) models,” explains Elija Leib. This comparatively new technology plays a special role in the law because it emerged in the AI landscape at a time when work on the AI Act was already in full swing. The legislator apparently considers the risks of generative AI models to be manageable - even where these models are particularly powerful and pose a systemic risk. Developers can therefore meet the legal requirements through appropriate self-regulation alone.

AI language models - the legislator waives mandatory third-party checks

For most modern GPAI models, compliance would include, for example, documenting the use of copyrighted training data and labeling generated content. Models posing a systemic risk would additionally face obligations such as monitoring and reporting serious incidents and carrying out model evaluations. DEKRA expert Elija Leib is not entirely satisfied with this arrangement. He criticizes the fact that audit organizations are left out of the GPAI review entirely. This waiver of external control could create a new risk, at least in this category of AI - namely the temptation to turn a blind eye to relevant issues. “We would have preferred the testing organizations to be more closely involved here,” says Elija Leib.

Three questions for Elija Leib, DEKRA Senior Policy Manager in Brussels

What challenges will companies face with the AI law?

For most companies using AI, the adjustments are likely to be relatively minor. The most important thing now is to clarify in advance whether their own systems are covered by the AI Act at all. For companies working with high-risk AI, however, implementing the regulation will be no walk in the park: far-reaching requirements come into play here.

What role does AI play in terms of product safety?

The AI Act states that, in future, all AI systems that serve as safety components of products subject to mandatory testing must themselves undergo testing. The AI regulation in fact belongs to the sphere of product safety, and there are a number of links to relevant regulations here. Take, for example, the Machinery Regulation, which is intended to ensure safety in industrial applications: an AI that controls a high-speed elevator would be just as much a matter for inspectors as the elevator itself.

How does the AI regulation deal with liability issues? What advice do you have for companies?

The EU Commission has adapted its liability rules to the new technologies, with the focus primarily on the developers of AI systems. For small and medium-sized companies in particular, it can therefore make sense to go beyond the legal minimum when adapting their business to the new regulations - in order to stay clear of liability in difficult cases. In any case, it is a good idea to seek advice on liability issues from an independent testing organization at an early stage.