AI Act: How can companies handle AI responsibly?

A guest post by Erik Dörnenburg* | Translated by AI | Reading time: 3 minutes

AI is becoming increasingly important in many industries. While consumers appreciate the potential, they distrust companies. Therefore, companies should maintain a transparent, responsible approach to AI. In this way, they convince consumers and prepare for future regulations.

At the moment, customers have to rely on the technology providers and their careful handling of AI. (Image: freely licensed / Pixabay)

Erik Dörnenburg is CTO of Thoughtworks Europe.

After long and tough negotiations, the AI Act has now been passed. It is intended to regulate the use of Artificial Intelligence throughout the EU, because AI in general, and generative AI in particular, creates new risks. These include, for example, false statements produced by language models, targeted disinformation campaigns using AI-generated images and texts, and unintended bias resulting from unbalanced training data.

Consumers and enterprise customers alike must therefore rely on the technology providers and their careful handling of AI. However, trust in these providers has suffered in recent years, as companies have deliberately used technology to the detriment of consumers. For example, Cambridge Analytica influenced voters via Facebook during the US election campaign on the basis of highly detailed personality profiles; online tracking and the creation of personality profiles have been highly controversial since at least that point. Elsewhere, VW and other car manufacturers caused the diesel scandal: thanks to an illegal defeat device, the vehicles showed significantly lower emissions on the test bench than in real-world operation.

With Artificial Intelligence, and generative AI especially, a technology is now widely available that even the companies developing it cannot fully understand. That is a real challenge, but also a great opportunity to position oneself as a responsible partner.

The current AI regulation needs revision.

In view of these and other risks associated with the use of AI, consumers in Germany want thoughtful regulation, as an international representative survey commissioned by Thoughtworks shows: at 84 percent, the vast majority of respondents consider AI regulation sensible, because in their experience companies do not regulate themselves adequately. Consumers are correspondingly pessimistic: in Germany, more than half assume that companies would circumvent even well-crafted legislation. Among other things, consumers fear that their data will not be adequately protected against cyber attacks and breaches (60 percent) and that their data will end up in the hands of third parties (58 percent).

To ensure safe usage and strengthen responsible AI deployment, regulation would need to set uniform standards. However, the laws currently being discussed and already adopted do not yet take this sufficiently into account. Legislation lags behind the technology: the rules reflect the state of the art as of late 2022, a significant gap given the developments of the past year.

In addition, there is still no clear definition, even in the AI Act, of what falls under the regulated Artificial Intelligence. In medical technology, by contrast, a series of standards specifies under what conditions a technical device counts as a medical device. A comparable definition would be an important step toward effective AI regulation.

Companies need to build trust.

As long as such questions remain open, companies should earn consumers' trust in other ways. Doing so pays off: in the study already mentioned, 83 percent of respondents in Germany agree that companies can become more innovative and offer a better customer experience with the help of GenAI. To exploit this potential, companies must act promptly and

1. Document and communicate transparently about the use of AI at an early stage.

2. Create and implement internal standards for responsible use of technology.

This has a double advantage. On the one hand, such an approach is more likely to win over consumers: 85 percent of respondents in Germany say they prefer companies that stand for transparent and fair use of GenAI. On the other hand, AI solution providers and companies that use these solutions are better prepared for future regulation if they are already aligning their practices with internal standards.

Trust is a competitive advantage.

With the AI Act, politics has now taken a first regulatory step. However, the current rules still require improvement to be truly effective; otherwise they will be outdated by the time they come into force and miss their main purpose of providing clarity and safe usage. Until a revised or new regulation arrives, companies can gain a competitive advantage through transparency and self-regulation. Complying with future regulation will then also be easier for them, because the foundations have already been laid.
