While some complain loudly about overregulation, others hail the European AI regulation as a milestone. Our overview clarifies whether you need to deal with the "bureaucratic monster" and where the particular hurdles lie.
The core of the AI regulation consists of the extensive provisions for high-risk AI—what falls under this?
Dr. Andreas Lober is a lawyer and partner at Advant Beiten in Frankfurt am Main and leads the IP/IT/Media practice group. Dr. Peggy Müller is a lawyer and partner at Advant Beiten in Frankfurt am Main.
In his 2024 Nobel Prize lecture in physics, Geoffrey Hinton enthused about how artificial intelligence (AI) could boost productivity in almost all industries; however, he also warned that its rapid development carries risks. AI could be used to create terrible new viruses and deadly weapons that decide on their own whom to kill, he cautioned.
The EU bodies have addressed the potential risks of AI with the necessary attention and, after three years of discussion, finally adopted the AI regulation, also known as the AI Act, last summer. It came into force on August 1, 2024, and its provisions apply in stages until August 2027.
Unlike directives, the regulation with its 113 articles applies directly in all EU member states. At its core, it is a special product safety law. The AI regulation is essentially based on four regulatory approaches:
the prohibition of certain AI systems;
the regulations for so-called high-risk AI;
the transparency obligations;
the promotion of innovation.
What is Meant By AI?
The AI regulation begins with a definition of an AI system, which will be of significant importance in the future. At its core, it covers software generated using artificial intelligence or machine learning methods. The legislator thereby attempts to distinguish AI systems from conventionally programmed software.
Provider Or Operator?
Every company intending to use AI will have to consider whether the AI regulation is relevant for them.
The AI regulation primarily obligates providers and operators of AI systems. The provider stands at the center of the regulation and bears most of the obligations. The provider is the one who develops an AI system or has it developed and markets it in their own name, much like a manufacturer of products. The operator, by contrast, is the one who uses an AI system under their own responsibility. This does not mean the individual user, but rather the company that provides AI to its employees.
Under certain conditions, an operator of AI can also become a provider, for example, if they label a third-party AI system with their name or trademark or if they make a significant change to it. Therefore, the precise classification into different roles is crucial in determining whether one might be subject to the much more extensive set of obligations for providers.
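The role logic described above can be sketched as a simple decision helper. This is a rough illustration in Python; the function and parameter names are our own simplification, not terms defined in the regulation, and each criterion is itself a legal question in practice:

```python
def classify_role(develops_or_commissions: bool,
                  markets_in_own_name: bool,
                  uses_under_own_authority: bool,
                  rebrands_third_party_system: bool = False,
                  substantially_modifies: bool = False) -> str:
    """Simplified role check under the AI regulation (illustrative only,
    not legal advice). A provider develops an AI system (or has it
    developed) and markets it in their own name; an operator uses it
    under their own responsibility. An operator is treated as a provider
    if they rebrand or substantially modify a third-party system."""
    if develops_or_commissions and markets_in_own_name:
        return "provider"
    if uses_under_own_authority and (rebrands_third_party_system
                                     or substantially_modifies):
        return "operator treated as provider"
    if uses_under_own_authority:
        return "operator"
    return "outside these roles"

# A company that white-labels a third-party chatbot under its own brand:
print(classify_role(False, False, True, rebrands_third_party_system=True))
# -> operator treated as provider
```

The sketch only shows how the criteria combine; it cannot replace the case-by-case classification the article describes.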
Where Does the AI Regulation Apply?
Familiar from the General Data Protection Regulation (GDPR) is the concept that the AI regulation applies to all companies that place their AI on the market or put it into operation within the EU, regardless of where they are established. The AI regulation thus claims a global reach.
What Penalties are Foreseen?
The AI regulation provides a framework for high fines: up to 35 million euros or 7 percent of the worldwide annual turnover are possible.
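The cap is the higher of the two amounts, so for companies with a worldwide annual turnover above 500 million euros the turnover-based limit dominates. A minimal sketch of the arithmetic (illustrative only; the actual fine depends on the type of infringement and the authority's discretion):

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound of the fine for the most serious infringements:
    35 million euros or 7 percent of worldwide annual turnover,
    whichever is higher (illustrative calculation only)."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

print(max_fine_eur(100_000_000))    # turnover 100 million -> 35000000.0
print(max_fine_eur(1_000_000_000))  # turnover 1 billion   -> 70000000.0
```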
What is Prohibited?
The AI regulation includes a catalog of eight prohibited practices. This includes, for example, the prohibition of social scoring, the prohibition of AI systems for manipulating or deceiving people, and the prohibition of real-time biometric identification in public spaces. The prescribed prohibitions have been in effect since February 2, 2025.
What is Considered High-Risk AI?
The heart of the AI regulation is the provisions on high-risk AI, which are also exceptionally extensive. They establish the obligations of providers and operators as well as the elements of risk management. Importantly, the risk management regulations apply only to high-risk AI that poses significant risks. Two fundamentally different concepts must be distinguished here:
On one hand, an AI system is considered high-risk if it is a product, or a safety component of a product, covered by the EU legislation listed in Annex I, such as machinery, toys, vehicles of all kinds, and medical devices, and if a conformity assessment is required. Mechanical engineering is thus inherently affected. For embedded AI systems, the decisive question is whether the AI system is a safety component. This will be the case, for example, for cameras in highly automated vehicles. In contrast, AI systems used to optimize the replacement of non-safety-related wear parts for cost reasons are unlikely to fall under this category. A precise delineation of the AI's application area is therefore required on a case-by-case basis.
Date: 08.12.2025
In addition, the European legislator has defined certain areas as high-risk AI due to their particular risks to personal rights. In other words, the intended use is decisive here, and an AI is classified as high-risk if it poses risks to the health, safety, or fundamental rights of natural persons. This includes a total of eight areas, such as general and vocational education, personnel management and HR, as well as law enforcement.
The classification of AI as high-risk also depends on a specific risk assessment. For example, AI that merely improves the results of a previously completed human activity or performs preparatory tasks for an assessment is deemed not to pose a significant risk. While AI used in predictive maintenance to analyze production facilities and provide insights into potential wear in non-safety-critical areas may not be classified as high-risk, the use of AI to manage the recruitment process is likely to be considered high-risk because of its impact on employment decisions. In general: caution is warranted in product safety and personnel matters.
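The two classification tracks and the significant-risk exception described above can be combined into a rough decision sketch. All inputs are our own simplification of the legal tests and would each require legal analysis in a real case:

```python
def is_high_risk(annex_i_safety_component: bool,
                 conformity_assessment_required: bool,
                 annex_iii_area: bool,
                 only_preparatory_or_narrow_task: bool) -> bool:
    """Rough two-track high-risk test (illustrative, not legal advice).
    Track 1: product or safety component under Annex I legislation
             that requires a conformity assessment.
    Track 2: use in an Annex III area (e.g. education, HR, law
             enforcement), unless the system only performs a narrow
             or preparatory task and thus poses no significant risk."""
    track_1 = annex_i_safety_component and conformity_assessment_required
    track_2 = annex_iii_area and not only_preparatory_or_narrow_task
    return track_1 or track_2

# Predictive maintenance in a non-safety-critical area:
print(is_high_risk(False, False, False, True))   # False
# AI managing the recruitment process (HR, Annex III):
print(is_high_risk(False, False, True, False))   # True
```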
If AI is classified as high-risk, it is subject to extensive requirements: for instance, the provider must establish a risk management system with obligations for data governance and technical documentation. Another cornerstone is registration in the EU database for high-risk AI. In contrast, the obligations imposed on operators of high-risk AI are limited: they must keep records and take technical and organizational measures to ensure the AI is used according to the operating instructions.
Transparency Obligations
Another central element of the AI regulation is the transparency obligations for providers and operators of certain AI systems.
First, AI that communicates with natural persons must inform the persons concerned that they are interacting with an AI system, unless this is already obvious. This rule obligates only providers. However, operators of AI also face transparency duties, albeit more limited ones: they must, for example, label so-called deepfakes as well as texts that inform the public about matters of public interest. Given the vague terms, there are currently significant uncertainties here. We therefore believe it is advisable to label all AI-generated content.
Innovation Promotion Through the Establishment of Real-World Laboratories
Another focus of the AI regulation is to promote and protect innovation and to consider the interests of SMEs that offer or use AI. For this purpose, so-called AI real-world laboratories (regulatory sandboxes) are to be established, in which AI systems can be tested and further developed under real but simplified regulatory conditions. The EU Commission is also currently reviewing whether some reporting obligations pose too high a hurdle for startups and may adjust the regulations accordingly.
AI Regulation Manageable for Operators
The technical development of AI is progressing inexorably and does not stop at national borders. In this respect, it is commendable that the EU has prevented a patchwork of national regulations with the AI regulation. Such a patchwork would have posed far greater hurdles for internationally operating companies, as they would have had to comply with 27 national regulations within the EU alone.
The AI regulation devotes a wide array of provisions to high-risk AI, a weighting that may overstate the practical significance of such systems: most AI systems will not fall into this category. Currently, AI applications in mechanical engineering are mostly used in less risky areas such as manufacturing and process automation, supply chain management, or marketing. Special attention should, however, be paid to AI in the personnel sector, which is classified as high-risk. Those who use AI in this area must therefore exercise caution.
To ensure the correct categorization of the AI being used, a so-called AI regulation compliance matrix should first be created, systematically analyzing all systems and establishing the resulting obligations.
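Such a matrix can start as a simple structured inventory. A minimal sketch in Python; the column names and example entries are our own suggestions, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemEntry:
    """One row of a simple AI compliance matrix (illustrative)."""
    name: str
    purpose: str
    role: str          # e.g. "provider" or "operator"
    risk_class: str    # e.g. "high-risk", "transparency", "minimal"
    obligations: list = field(default_factory=list)

matrix = [
    AISystemEntry("CV screening tool", "pre-selection of applicants",
                  "operator", "high-risk",
                  ["use per operating instructions", "keep records",
                   "technical and organizational measures"]),
    AISystemEntry("Website chatbot", "customer support",
                  "operator", "transparency",
                  ["disclose that users interact with AI"]),
]

for entry in matrix:
    print(f"{entry.name}: {entry.risk_class} -> {entry.obligations}")
```

In practice such an inventory would live in a spreadsheet or GRC tool; the point is that every system, its role, its risk class, and the resulting obligations are recorded in one place.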
Since the majority of the regulations obligate AI system providers rather than operators, the overall impact of the AI regulation on most companies is likely to be manageable. Still, a few points belong on the to-do list of every mechanical engineering company: all AI processes should be documented in detail and recorded in the previously mentioned matrix. In this context, contracts with suppliers and service providers should be reviewed for the use of AI and renegotiated if necessary. Drafting an AI policy that governs the use of AI within the company is also highly recommended. In addition, the AI regulation requires regular training of employees. Finally, processes should be established for the ongoing monitoring and adjustment of AI systems to ensure they continue to meet the established standards. Those who heed this should be able to deploy AI in a legally compliant manner.