The AI Act is a reality. And that's a good thing. However, its implementation remains a tightrope walk. Where does the curtailing of innovation begin? And what if regulation ends up making effective regulation impossible?
How far should artificial intelligence be allowed to spread? AI research needs boundaries – but where do you draw them?
Sead Ahmetovic is the CEO and co-founder of We Are Developers.
With the AI Act, the European Parliament created a legal framework for the development and use of artificial intelligence in June of last year. AI companies can therefore expect stricter guidelines for their field of business. Such guidelines are important, but they should not hinder software development. It will likely take at least two to three more years before the AI Act actually comes into effect and is applied. That leaves enough time to shape the framework into concrete regulations that advance the economy, society, and politics rather than holding them back.
Many cite the cucumber as an example of the European Union's regulatory zeal: its length and curvature must fall within a legally defined range for it to be transported and sold. The fact that this regulation is actually about standardized, efficient transport, and thereby cost and emission savings, often gets lost in the excitement. Such rigid laws work well for food, raw materials, or consistently identical products. For software, and especially for AI-based products, they are more of a hindrance: application areas and development are far more dynamic than with cucumbers.
For the AI Act, this means that it should not nail down the status quo of software development. Given the speed at which AI-based solutions continue to evolve and application areas expand, this will hardly be possible, let alone sensible.
Framework conditions for framework conditions
With the AI Act, nothing is set in stone yet—initially, technologies were merely categorized into various classes that will be subject to different regulations moving forward. However, this also means that there are many opportunities for shaping these regulations. It is important to take advantage of these opportunities before the AI Act becomes a bureaucratic monster whose implementation poses a much greater obstacle than the content itself.
Not only for tech giants but also for small and medium-sized enterprises, the AI Act must be transparent and practical. Software, increasingly equipped with AI-based functions, is now used in virtually every business, and its use is growing. Companies rely heavily on it, and regulation must not hinder a functioning economy, least of all in essential areas. The regulations should therefore be clearly defined and made understandable. They also need to be simple to implement and not tied to excessive conditions and bureaucracy. For this process, it is best to involve all stakeholders: security and AI experts, developers, ethicists, politicians, and businesses. If the needs of all participants are united from the beginning, the collaboration leads to the desired outcome and produces safer software rather than hindering its development.
Accelerate instead of brake
And it is in companies that the majority of these developments, and thus innovation, takes place. They drive digital transformation and provide solutions for today's and tomorrow's problems. Naturally, they operate in largely or entirely unexplored territory. What will come next is hard to say, and therefore also hard to regulate.
It hinders companies when laws are created solely out of fear that something might go wrong. This also risks missing the opportunity to develop positive advancements more quickly and thereby strengthen Europe as a business location. In addition, overly rigid and comprehensive regulations can prevent the right regulations from ever being found. If it can no longer be determined which developments are heading in a problematic direction, it becomes difficult to identify and regulate them over the long term.
Innovation must be given space, with adjustments then made as necessary. There should also be targeted measures to strengthen Europe as a location for innovation. Overly strict regulations, unfortunately, tend to drive away precisely those companies that no longer want to deal with such bureaucracy. Regulatory straitjackets are a particular obstacle to the growth of small and medium-sized enterprises, which cannot and do not want to rely on large legal departments the way corporations do. Yet AI, as an important economic factor of the future, must be retained in Europe. With the right legislation, Europe could even develop into the go-to market for ethical AI. Collaboratively developed open-source frameworks or a Hippocratic oath for people involved in AI development could lay the first foundations for this. For the details and implementation, however, all stakeholders must again be on board.
Date: 08.12.2025
Sensibly dose regulation
The original motivation for introducing a certain degree of regulation of software, and of AI solutions in particular, is by no means unfounded. Specifically, it is intended to prevent the deliberate or unintentional development of potentially harmful solutions at various levels. This is not only about the apocalyptic dystopias often cited by AI critics.
Rather, the aim is to protect critical areas. This applies to medical applications as well as AI solutions that operate based on sensitive data or software. Thus, it is about security, data protection, and ethics—all issues that are, or should be, particularly important to all of us.
It is still important that the planned laws do not hinder the further development and use of ethical and sensible AI. Even software that is pointless for everyone except the company developing it poses no problem, as long as it is harmless and ethically unobjectionable.
The AI Act needs room for further development and the ability to reincorporate developments that turn out well. It must give companies the freedom to invent new things and to integrate them without undue effort. Regulation should not hinder innovation; it must guide it into safe waters.
This article originally appeared on our partner portal Industry of Things.