Open letter against unrestrained AI
Headwind for ChatGPT & Co: Moratorium on AI development demanded

By Michael Eckstein

Has the self-created monster spiraled out of control? Are we currently witnessing the birth of Skynet from Schwarzenegger's Terminator movies? According to the open letter "Pause Giant AI Experiments", in which experts demand a halt to AI development, we risk losing control over our civilization. Indeed, Pandora's box was opened long ago.

A danger to human civilization: in an open letter, tech luminaries and AI experts warn of the dangers of unrestrained AI development. (Image: freely licensed / Pixabay)

What do Elon Musk (CEO of Tesla, SpaceX, and Twitter), Steve Wozniak (Apple co-founder), Yoshua Bengio (founder and scientific director of Mila, Turing Award winner, and professor at the University of Montreal), Stuart Russell (professor of computer science at Berkeley, director of the Center for Intelligent Systems, and co-author of the standard textbook "Artificial Intelligence: A Modern Approach"), Sean O'Heigeartaigh (Executive Director of the Cambridge Centre for the Study of Existential Risk), and numerous other high-ranking representatives of academia and the tech industry have in common?

Correct: they are warning of the potentially incalculable consequences of general artificial intelligence – and, in an open letter, are calling for an immediate moratorium on further AI development, initially lasting at least six months. Well over a thousand people have since signed the letter digitally.

AI black boxes that no one can control anymore

The signatories fear that AI will quickly become so proficient at self-optimization that "no one – not even its creators – can understand, predict, or reliably control it." The global race in AI development has already spiraled out of control. This matches the ABC News interview broadcast just over a week ago, in which the OpenAI founder warned about his own creation, ChatGPT.

Undoubtedly, within a clearly defined framework AI can perform many tasks far better than humans ever could. Just consider the evaluation of huge amounts of data, for example in medical diagnostics or in predictive maintenance applications for machine management. However, the technology also carries enormous risks.

"Especially democracies are at risk"

The authors of the letter fear that "AI systems with human-level intelligence can pose profound risks to society and humanity" – a concern supported by extensive research and acknowledged by leading AI labs. Advanced AI could bring about a profound change in the history of life on Earth and should therefore be planned and managed with commensurate care and resources. "Unfortunately, such planning and management are not taking place."

The critics explicitly point out that the new technology could spread propaganda and hate speech on an unprecedented scale. They fear negative impacts on the world of work and are concerned that even high-quality jobs could disappear: "Should we automate all jobs, even the fulfilling ones? Should we develop non-human intelligences that could eventually outnumber us, outsmart us, make us redundant, and replace us? Should we risk losing control over our civilization?" Such decisions should not be delegated to unelected technology leaders.

First establish clear rules for AI development

The initiators of the proposed halt therefore demand that an ethical framework with clear boundaries be established first – lines that AI development must not cross. Powerful AI systems should be developed only if their "impact is positive and the risks manageable." The moratorium of at least six months refers explicitly to the training of AI systems more powerful than GPT-4. During this half-year, ongoing developments are to be reviewed by external experts, and developers are to jointly design and implement safety protocols.

These protocols should ensure that systems adhering to them are unequivocally safe. This does not mean a general pause in AI development, but rather a move away from the dangerous race towards ever larger, ultimately unpredictable black-box models with emergent capabilities. "AI research and development should focus on making today's powerful, state-of-the-art systems more accurate, safer, more interpretable, transparent, robust, better calibrated, more trustworthy, and more loyal."

In parallel, AI developers must work with political decision-makers "to drastically accelerate the development of robust AI governance systems." These should include at least: new and capable regulatory agencies dedicated to AI; the monitoring and tracking of high-performance AI systems and large pools of computing capacity; provenance and watermarking systems that help distinguish real from synthetic data and track model leaks; a robust auditing and certification system; liability for damages caused by AI; substantial public funding for technical AI safety research; and well-resourced institutions to deal with the dramatic economic and political upheavals that AI will cause – "especially for democracy."

The problem: globally harmonized rules that are actually enforced

The problem with this demand for more control is evident: Firstly, such regulations would need to apply worldwide to all actors, and secondly, compliance would need to be monitored and documented. How is this supposed to work? And who is going to do it? China's Xi is certainly not going to allow U.S. AI controllers into the country – and vice versa. It is becoming increasingly clear that technological development is now light-years ahead of regulatory authorities.

Slowing down the hitherto unrestrained AI hype now seems all but hopeless. The situation is like shaking a pillow's feathers into the wind and then trying to gather them all up again. New AI applications are popping up around the world almost weekly, with companies vying to outdo each other with their announcements.

Pause now – it has worked with other high-risk technologies too

Nevertheless, one should not give up: society has paused before on other technologies with potentially catastrophic effects – human cloning, modifications of the human germline, gain-of-function research, and eugenics, for example. The authors are convinced: "We can do the same here!"

Humanity could experience a flourishing future with AI – "having succeeded in creating powerful AI systems, we can now enjoy an 'AI summer' in which we reap the rewards, develop these systems for the clear benefit of all, and give society the chance to adapt."