Has the self-created monster spiraled out of control? Are we witnessing the birth of Skynet from Schwarzenegger's Terminator movies? According to "Pause Giant AI Experiments: An Open Letter", in which experts demand a halt to AI development, we are at risk of losing control of our civilization. Indeed, Pandora's box has long since been opened.
A threat to human civilization: in an open letter, tech luminaries and AI experts warn of the dangers of unchecked AI development.
What do Elon Musk (CEO of Tesla, Twitter, and SpaceX), Steve Wozniak (Apple co-founder), Yoshua Bengio (founder and scientific director of Mila, Turing Award winner, and professor at the University of Montreal), Stuart Russell (professor of computer science at Berkeley, director of the Center for Intelligent Systems, and co-author of the standard textbook "Artificial Intelligence: A Modern Approach"), Seán Ó hÉigeartaigh (executive director of the Cambridge Centre for the Study of Existential Risk), and numerous other high-ranking representatives of academia and the tech industry have in common?
Correct: they are warning of the potentially incalculable consequences of general artificial intelligence, and in an open letter are calling for an immediate moratorium on further AI development, initially for at least six months. In the meantime, well over a thousand people have signed the letter digitally.
AI black boxes that no one can control anymore
The signatories fear that AI will quickly become so proficient at self-optimization that "no one – not even its creators – can understand, predict, or reliably control it." In their view, the global race in AI development has already spiraled out of control. This echoes the ABC News interview broadcast just over a week ago, in which the OpenAI founder warned about his own creation, ChatGPT.
Undoubtedly, within a clearly defined framework, AI can perform many tasks far better than humans ever could: consider the evaluation of huge volumes of data in medical diagnostics, for example, or in predictive-maintenance applications for industrial machinery. At the same time, the technology carries enormous risks.
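To make the predictive-maintenance example concrete, here is a minimal sketch of such a narrowly scoped system. It is purely illustrative, not taken from the letter or the article; it assumes scikit-learn and NumPy are available, and the sensor channels and fault values are invented:

```python
# Minimal sketch: anomaly detection on machine sensor data, the kind of
# narrowly scoped task where AI reliably beats manual review.
# All data here is synthetic and chosen only for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated healthy operation: temperature (degrees C) and vibration (mm/s)
normal = rng.normal(loc=[70.0, 2.0], scale=[2.0, 0.3], size=(10_000, 2))

# A handful of simulated faults: overheating plus excessive vibration
faults = rng.normal(loc=[95.0, 6.0], scale=[3.0, 0.5], size=(20, 2))
readings = np.vstack([normal, faults])

# An Isolation Forest labels outliers as -1 and normal readings as 1
model = IsolationForest(contamination=0.005, random_state=0)
labels = model.fit_predict(readings)

print(f"Flagged {np.sum(labels == -1)} of {len(readings)} readings for inspection")
```

The narrow framing is the point: the model sees two well-understood sensor channels and makes a single, bounded decision, which is exactly the kind of task where today's AI is demonstrably strong.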
"Especially democracies are at risk"
The authors of the letter fear that "AI systems with human-competitive intelligence can pose profound risks to society and humanity," pointing to extensive research that is acknowledged by leading AI labs. Advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources. "Unfortunately, this level of planning and management is not happening."
The critics explicitly point out that the new technology could spread propaganda and hate speech on an unprecedented scale. They fear negative impacts on the world of work and are concerned that even high-quality jobs could disappear: "Should we automate all jobs, even the fulfilling ones? Should we develop non-human intelligences that could eventually outnumber us, outsmart us, make us redundant, and replace us? Should we risk losing control over our civilization?" Such decisions should not be delegated to unelected technology leaders.
First establish clear rules for AI development
The initiators of the AI development halt are therefore demanding that an ethical framework with clear boundaries be established first, boundaries that AI development must not cross. Powerful AI systems should be developed only if their "impact is positive and the risks manageable." The moratorium of at least six months refers explicitly to the training of AI systems more powerful than GPT-4. During these six months, ongoing developments are to be reviewed by external experts, and developers are to jointly design and implement shared safety protocols.
These protocols should ensure that systems adhering to them are unequivocally safe. This does not mean a general pause in AI development, but rather a move away from the dangerous race towards ever larger, ultimately unpredictable black-box models with emergent capabilities. "AI research and development should focus on making today's powerful, state-of-the-art systems more accurate, safer, more interpretable, transparent, robust, better calibrated, more trustworthy, and more loyal."
In parallel, AI developers must work with political decision-makers "to drastically accelerate the development of robust AI governance systems." These should at least include: new and capable regulatory agencies dedicated to AI; the monitoring and tracking of high-performance AI systems and of large pools of computing capacity; provenance and watermarking systems that help distinguish real from synthetic content and track model leaks; a robust audit and certification system; liability for damages caused by AI; substantial public funding for technical AI safety research; and well-resourced institutions to deal with the dramatic economic and political upheavals that AI will cause – "especially for democracy."
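The letter does not spell out how such watermarking would work. One widely discussed idea is a statistical "green list" watermark along the lines of Kirchenbauer et al. (2023). The toy sketch below illustrates only that general idea, not anything the signatories prescribe: a generator secretly biases its word choices using a shared key, and a detector holding the same key tests whether that bias is present. The key and all names here are hypothetical:

```python
# Toy sketch of a statistical text watermark, loosely inspired by the
# "green list" scheme of Kirchenbauer et al. (2023). Purely illustrative;
# the open letter only calls for watermarking in general.
import hashlib

SECRET_KEY = "demo-key"  # hypothetical key shared by generator and detector

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically assign roughly half the vocabulary to a 'green
    list' that depends on the previous word and the secret key."""
    digest = hashlib.sha256(f"{SECRET_KEY}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Detector side: the fraction of word transitions that land on the
    green list. Unwatermarked text hovers near 0.5; text from a generator
    that favors green words sits well above that baseline."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.5
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)

if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog"
    print(f"green fraction: {green_fraction(sample):.2f}")
```

A production detector would convert the green count into a z-score against the 0.5 baseline expected of unwatermarked text, and a robust scheme would also need to survive paraphrasing and translation; the sketch only shows the statistical core.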
Problem: Globally harmonized rules that are actually enforced
The problem with this demand for more control is evident: Firstly, such regulations would need to apply worldwide to all actors, and secondly, compliance would need to be monitored and documented. How is this supposed to work? And who is going to do it? China's Xi is certainly not going to allow U.S. AI controllers into the country – and vice versa. It is becoming increasingly clear that technological development is now light-years ahead of regulatory authorities.
Slowing down the hitherto unrestrained AI hype now seems all but hopeless. The situation resembles scattering a pillow's feathers to the winds and then trying to gather every one of them up again: new AI applications are popping up around the world almost weekly, with companies vying to outdo each other with their announcements.
Take a break now – it has worked with other high-risk technologies too
Nevertheless, one should not give up: society has hit pause before with other technologies that could have catastrophic effects – with human cloning, with modification of the human germline, with gain-of-function research, and with eugenics. The authors are confident: "We can do the same here!"
Humanity could experience a flourishing future with AI: "Having succeeded in creating powerful AI systems, we can now enjoy an 'AI summer' in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt."