AI Act Regulation must not hinder innovation: How the EU can effectively implement the AI Act

A guest commentary by Sead Ahmetovic* | Translated by AI


The AI Act is a reality, and that is a good thing. Its implementation, however, remains a tightrope walk. Where does the curtailing of innovation begin? And what if over-regulation makes sensible regulation impossible?

How far should artificial intelligence be allowed to spread? AI research needs boundaries – but where do you draw them?(Image: freely licensed / Unsplash)

Sead Ahmetovic is the CEO and co-founder of We Are Developers.

With the AI Act, the European Parliament created a legal framework for the development and use of artificial intelligence in June of last year. AI companies can therefore expect stricter guidelines for their field of business. Such guidelines are important, but they should not hinder software development. It will likely take at least two to three more years before the AI Act actually comes into force and is applied. That leaves enough time to shape the framework into concrete regulations that advance the economy, society, and politics rather than holding them back.

A favorite example of the European Union's regulatory zeal is the cucumber, whose length and curvature must fall within a legally defined range before it may be transported and sold. That this regulation actually serves standardized, efficient transport, and thereby saves costs and emissions, tends to get lost in the excitement. Still, such rigid rules work well enough for food, raw materials, or consistently identical products. For software, and especially for AI-based products, they are more of a hindrance: application areas and development are far more dynamic than cucumbers.

For the AI Act, this means that it should not nail down the status quo of software development. Given the speed at which AI-based solutions continue to evolve and application areas expand, this will hardly be possible, let alone sensible.

Framework conditions for framework conditions

With the AI Act, nothing is set in stone yet—initially, technologies were merely categorized into various classes that will be subject to different regulations moving forward. However, this also means that there are many opportunities for shaping these regulations. It is important to take advantage of these opportunities before the AI Act becomes a bureaucratic monster whose implementation poses a much greater obstacle than the content itself.

The AI Act must be transparent and practical not only for tech giants but also for small and medium-sized enterprises. Software, increasingly equipped with AI-based functions, is now used in virtually every business, and its share keeps growing. Companies rely heavily on it, and regulation must not hinder a functioning economy, least of all in essential areas. The regulations should therefore be clearly defined and made understandable. They also need to be simple to implement and not tied to excessive conditions and bureaucracy. For this process, it is best to involve all stakeholders: security and AI experts, developers, ethicists, politicians, and businesses. If the needs of all participants are united from the beginning, the collaboration leads to the desired outcome and produces safer software rather than hindering its development.

Accelerate instead of brake

And it is in companies that the majority of these developments, and of the innovation, take place. They drive digital transformation and provide solutions to today's and tomorrow's problems. Naturally, they operate in largely or entirely unexplored territory. What will come next is hard to say, and therefore also hard to regulate.

It hinders companies when laws are created solely out of fear that something might go wrong. It can also mean missing the chance to develop positive advances more quickly and thereby strengthen Europe as a business location. In addition, overly rigid and sweeping regulations can prevent the right regulations from ever being found: if it can no longer be determined which developments are heading in a problematic direction, identifying and regulating them in the long term becomes difficult.

Innovation must be given space, so that adjustments can then be made where necessary. There should also be targeted measures for Europe as a location for innovation. Overly strict regulation unfortunately tends to drive away precisely those companies that no longer want to deal with the bureaucracy. Regulatory straitjackets are a particular obstacle to the growth of small and medium-sized enterprises, which cannot and do not want to rely on large legal departments the way corporations do. Yet AI, as an important economic factor of the future, must be kept in Europe. With the right legislation, Europe could even develop into the go-to market for ethical AI. Collaboratively developed open-source frameworks or a Hippocratic Oath for people involved in AI development could lay the first foundations for this. For the details and implementation, however, all stakeholders must again be on board.


Dose regulation sensibly

The original motivation for introducing a certain degree of regulation for software, and for AI solutions in particular, is fundamentally sound. It is intended to prevent the deliberate or unintentional development of solutions that could cause harm on various levels. This is not only about the apocalyptic dystopias so often cited by AI critics.

Rather, the aim is to protect critical areas. This applies to medical applications as well as AI solutions that operate based on sensitive data or software. Thus, it is about security, data protection, and ethics—all issues that are, or should be, particularly important to all of us.

Still, it is important that the planned laws do not hinder the further development and use of ethical and sensible AI. Even software that is pointless to everyone but the company developing it poses no problem, as long as it is harmless and ethically unobjectionable.

The AI Act needs room to evolve and the ability to incorporate developments that prove successful. It must give companies the freedom to invent new things and to integrate them without much effort. Regulation should not hinder innovation; it must guide it into safe waters.

This article originally appeared on our partner portal Industry of Things.