Open letter against unbridled AI
Headwinds for ChatGPT & Co: Moratorium on AI development called for

By Michael Eckstein | 4 min reading time

Is the self-created monster getting out of control? Are we currently witnessing the birth of Skynet from Schwarzenegger's Terminator movies? According to "Pause Giant AI Experiments: An Open Letter," in which experts demand a halt to AI development, we currently risk losing control over our civilization. Either way, Pandora's box has long since been opened.

Danger to human civilization: In an open letter, tech giants and AI experts warn of the dangers of unbridled AI development.
(Image: freely licensed / Pixabay)

Elon Musk, Steve Wozniak, Yoshua Bengio, Stuart Russell, Sean O'Heigeartaigh, and several other high-ranking representatives of academia and the tech industry have expressed concern about the development and use of artificial intelligence and have called for responsible innovation and regulation in this field.

Specifically, they warn of the potentially incalculable consequences of artificial general intelligence and demand, in an open letter, an immediate moratorium on the further development of AI, initially lasting at least six months. By now, well over a thousand people have digitally signed the letter.

AI black boxes that no one can control anymore

The signatories fear that AI will quickly become so good at optimizing itself that "no one – not even its creators – can understand, predict, or reliably control it." The global race in AI development, they argue, has already spiraled out of control. This echoes the ABC News interview broadcast just over a week ago, in which OpenAI CEO Sam Altman warned about his own creation, ChatGPT.

Undoubtedly, within a clearly defined framework, AI can perform many tasks far better than humans ever could. Just consider the evaluation of massive amounts of data, such as in medical diagnostics or "predictive maintenance" applications for machine fleets. However, the technology also carries enormous risks.

"Especially democracies are at risk"

The authors of the letter fear that "AI systems with human-competitive intelligence can pose profound risks to society and humanity," as shown by extensive research and acknowledged by leading AI labs. Advanced AI could bring about a profound change in the history of life on Earth and should be planned and managed with due care and resources. "Unfortunately, such planning and management are not happening."

The critics explicitly point out that the new technology could spread propaganda and hate speech on an unprecedented scale. They fear negative impacts on the workforce and worry that even high-quality jobs may be eliminated: "Should we automate all jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk losing control of our civilization?" Such decisions, they argue, must not be delegated to unelected tech leaders.

First establish clear rules for AI development

Therefore, the initiators of the AI development halt are calling for the creation of an ethical framework with clear boundaries that must not be crossed in AI development. Powerful AI systems should only be allowed to be developed if their "impact is positive and the risks are manageable." The moratorium of at least six months explicitly refers to the training of AI systems that are more powerful than GPT-4. During this half-year, ongoing developments should be reviewed by external experts, and developers should jointly design and implement safety protocols.

These protocols should ensure that systems that comply with them are unequivocally safe. This doesn't mean a general pause in AI development, but rather a move away from the dangerous race towards ever-larger, ultimately unpredictable black-box models with emergent capabilities. "AI research and development should focus on making today's powerful, state-of-the-art systems more precise, safer, interpretable, transparent, robust, aligned, trustworthy, and loyal."

In parallel, AI developers must work with political decision-makers "to drastically accelerate the development of robust AI governance systems." These should at least include: new and capable regulatory agencies that specialize in AI; monitoring and tracking high-performance AI systems and large pools of computing capacities; origin and watermark systems that help distinguish real from synthetic data and track model leaks; a robust audit and certification system; liability for damages caused by AI; solid public funding for technical AI safety research; and well-equipped institutions to cope with the dramatic economic and political upheaval caused by AI – "especially for democracy."

The problem: Globally harmonized rules that must also be enforced

The problem with this demand for more control is obvious: Firstly, such a regulation would have to apply worldwide for all players, and secondly, compliance would also need to be monitored and documented. How is that supposed to work? And who is going to do it? China's Xi will certainly not allow U.S. AI controllers into the country – and vice versa. It is becoming increasingly clear that technological development has outpaced regulatory bodies by light years.


Slowing down the so-far unbridled AI hype now is practically hopeless. The situation is like scattering a pillow's feathers to the wind and then trying to gather them all back up. Practically every week, new AI applications emerge worldwide, and companies outdo each other with their announcements.

Pausing now: it has worked with other risky technologies before

Nevertheless, one should not give up: society has hit pause before on technologies with potentially catastrophic effects, such as human cloning, modification of the human germline, gain-of-function research, and eugenics. The authors argue: "We can do the same here!"

Humanity could experience a flourishing future with AI – "after we have succeeded in creating capable AI systems, we can now enjoy an 'AI summer' where we reap the fruits, develop these systems for the clear benefit of all, and give society the chance to adapt." (me)