AI and ethics: Why there is ethically good and bad AI

A guest article by Joe Novak*

In the rapidly growing AI landscape, the distinction between good and bad AI is becoming increasingly important - not only in technological terms, but also in ethical and legal terms. Responsible AI development and human oversight are crucial.

Generative AI poses specific risks such as toxicity, polarization, discrimination, over-reliance on AI, disinformation, privacy, model security and copyright infringement. (Image: freely licensed / Pixabay)

* Joe Novak is Chief Innovation Officer at Spitch.

The more government bodies such as the EU Parliament adopt comprehensive legal frameworks for the use of AI, the more important it becomes for companies to prioritize data protection, security, regulatory compliance and responsible development practices when introducing AI.

The latest EU legislation on AI, the EU AI Act, is well placed to set a new global standard for the responsible adoption of AI. It challenges companies and organizations around the world to distinguish between good AI, which prioritizes privacy and security, and bad AI, which is built on the exploitation of data. The key is to strike a balance between innovation and ethically responsible progress.

Good and bad AI as defined by the EU's AI Act

The adoption of the EU AI Act by the European Parliament is an important milestone in global efforts to ensure the safe and responsible development of AI technologies. The aim of the law is to protect citizens' rights, democracy and environmental sustainability from the dangers posed by high-risk AI applications. The legislation sets out obligations tailored to the level of risk and impact of each AI system, with the aim of positioning Europe as a global leader in responsible AI innovation.

The law applies to providers and developers of AI systems that are marketed or used in the EU, regardless of whether those providers or developers are based in the EU or in another country - such as Switzerland, as in the case of Spitch. It follows a risk-based approach, classifying AI systems into four levels - unacceptable, high, limited and minimal risk - according to the sensitivity of the data concerned and the respective AI use case or application.
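
Conceptually, this tiering amounts to a lookup from use case to obligation level. The following Python sketch is purely illustrative: the tier names follow the Act, but the example use cases, the classify function and the conservative default are assumptions made for illustration, not an official mapping.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"            # e.g. social scoring
    HIGH = "strict obligations before deployment"   # e.g. biometric identification
    LIMITED = "transparency obligations"            # e.g. chatbots must disclose they are AI
    MINIMAL = "no specific obligations"             # e.g. spam filters

# Hypothetical example mapping; a real classification rests on the
# Act's annexes and legal review, not on a hard-coded table.
EXAMPLE_USE_CASES = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "predictive_policing_by_profiling": RiskTier.UNACCEPTABLE,
    "biometric_identification": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the illustrative risk tier; default conservatively to HIGH."""
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.HIGH)

if __name__ == "__main__":
    for case, tier in EXAMPLE_USE_CASES.items():
        print(f"{case}: {tier.name} ({tier.value})")

The point of the sketch is only that obligations scale with the tier; which tier a concrete system falls into is ultimately a legal question.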

The legislation introduces strict bans on AI applications deemed harmful, for example biometric categorization systems, untargeted scraping of facial images, emotion recognition in the workplace and in schools, social scoring and predictive policing based solely on profiling. It also sets out specific conditions for the use of biometric identification in law enforcement and demands transparency and accuracy from high-risk AI systems.

Frameworks for AI are gaining in importance

As companies grapple with these new challenges, frameworks such as Reliable, Accountable, Fair and Transparent (RAFT for short), developed by the AI platform provider Dataiku, aim to give businesses and research and development teams a comprehensive roadmap for building AI systems responsibly, addressing potential risks and anticipating future regulatory developments.

The RAFT framework highlights the critical need for organizations to consider the role of accountability and governance in the use of AI systems, particularly given the rapid development and adoption of generative AI and large language models (LLMs for short). It emphasizes that the deployment and governance of AI must take into account socio-technical dynamics, legal considerations and emerging issues such as data protection and copyright infringement. This proactive approach aims to codify the emerging consensus around the technology and to give companies and research institutions a forward-looking way to prepare, even while the impact of future legislation is still uncertain.

Generative AI poses specific risks such as toxicity, polarization, discrimination, over-reliance on AI, disinformation, privacy, model security and copyright infringement. These risks can manifest themselves in different types of AI technology and vary depending on the use case.

People must continue to call the shots

With this in mind, Spitch is keen to weigh both the need for and the impact of generative AI tools before integrating them into existing services or using them in-house. When integrating these tools into its own contact center solutions, for example for quality management, the focus is on responsibly improving the customer experience, reducing stress and streamlining customer interactions - not on AI for AI's sake. Humans must continue to call the shots here. When introducing AI responsibly, companies must keep in mind the target group for the results of their AI models - whether business customers, consumers or private individuals. In addition, strict attention must be paid to key criteria such as reliability and transparency, as these are what characterize good AI.

Assessment of the potential impact

The potential risks of AI systems should be assessed on the basis of their direct and indirect impact on individuals and groups, regardless of whether these impacts occur immediately or unfold over time. For developers of AI systems, this means that their solutions must not systematically spy on data, promote bias, needlessly polarize or generate misinformation, whether intentionally or as a side effect. This applies both to interactions with individuals and in the public sphere. It is up to us to develop good AI with this in mind!
