Is the self-created monster getting out of control? Are we witnessing the birth of Skynet from Schwarzenegger's Terminator movies? According to "Pause Giant AI Experiments: An Open Letter," in which experts demand a halt to AI development, we currently risk losing control of our civilization. Indeed, Pandora's box has long since been opened.
Danger to human civilization: In an open letter, tech giants and AI experts warn of the dangers of unbridled AI developments.
Elon Musk, Steve Wozniak, Yoshua Bengio, Stuart Russell, Seán Ó hÉigeartaigh, and numerous other prominent figures from academia and the tech industry have expressed concern about the development and use of artificial intelligence and have called for responsible innovation and regulation in this field.
More precisely: they warn of the potentially incalculable consequences of artificial general intelligence and, in an open letter, demand an immediate moratorium on further AI development, initially for at least six months. By now, well over a thousand people have signed the letter online.
AI black boxes that no one can control anymore
The signatories fear that AI will quickly become so good at optimizing itself that "no one – not even its creators – can understand, predict, or reliably control it." The global race in AI development has already spiraled out of control. This echoes the ABC News interview broadcast just over a week ago, in which OpenAI CEO Sam Altman warned about his own creation, ChatGPT.
Undoubtedly, within a clearly defined framework, AI can perform many tasks far better than humans ever could. Just consider the evaluation of massive amounts of data, for example in medical diagnostics or in predictive-maintenance applications for machinery. However, the technology also harbors enormous risks.
"Especially democracies are at risk"
The authors of the letter fear that "AI systems with human-competitive intelligence can pose profound risks to society and humanity" – a concern they say is supported by extensive research and acknowledged by leading AI labs. Advanced AI could bring about a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources. "Unfortunately, such planning and management are not happening."
The critics explicitly point out that the new technology could spread propaganda and hate speech to an unprecedented extent. They fear negative impacts on the workforce and worry that even high-quality jobs may be eliminated: "Should we automate all jobs, including the fulfilling ones? Should we develop non-human intelligences that might eventually outnumber, outsmart, make redundant, and replace us? Should we risk losing control over our civilization?" Such decisions should not be delegated to unelected technology leaders.
First establish clear rules for AI development
The initiators of the proposed development halt therefore call for an ethical framework with clear boundaries that must not be crossed in AI development. Powerful AI systems should be developed only if their "impact is positive and the risks are manageable." The moratorium of at least six months refers explicitly to the training of AI systems more powerful than GPT-4. During this half-year, ongoing developments should be reviewed by external experts, and developers should jointly design and implement safety protocols.
These protocols should ensure that systems that comply with them are unequivocally safe. This doesn't mean a general pause in AI development, but rather a move away from the dangerous race towards ever-larger, ultimately unpredictable black-box models with emergent capabilities. "AI research and development should focus on making today's powerful, state-of-the-art systems more precise, safer, interpretable, transparent, robust, aligned, trustworthy, and loyal."
In parallel, AI developers must work with political decision-makers "to drastically accelerate the development of robust AI governance systems." These should at a minimum include: new and capable regulatory agencies specializing in AI; oversight and tracking of high-performance AI systems and large pools of computing capacity; provenance and watermarking systems that help distinguish real from synthetic content and track model leaks; a robust auditing and certification system; liability for damages caused by AI; solid public funding for technical AI safety research; and well-resourced institutions to cope with the dramatic economic and political upheaval caused by AI – "especially for democracy."
Problem: Worldwide harmonized rules that are also monitored
The problem with this demand for more control is obvious: first, such regulation would have to apply worldwide to all players, and second, compliance would have to be monitored and documented. How is that supposed to work? And who would do it? China's Xi will certainly not allow U.S. AI inspectors into the country, and vice versa. It is becoming ever clearer that technological development has left regulatory bodies light years behind.
Slowing down the so-far unbridled AI hype is now practically hopeless. The situation is like trying to gather up the feathers of a pillow that has already been scattered to the winds. New AI applications emerge worldwide practically every week, and companies outdo each other with their announcements.
Pause now – it has worked with other risky technologies before
Nevertheless, one should not give up: for other technologies with potentially catastrophic effects on society, pauses have been implemented before – for human cloning, modification of the human germline, gain-of-function research, and eugenics. The authors argue: "We can do the same here."
Humanity could experience a flourishing future with AI – "after we have succeeded in creating capable AI systems, we can now enjoy an 'AI summer' in which we reap the rewards, develop these systems for the clear benefit of all, and give society the chance to adapt."