Study on AI: "ChatGPT & Co. Still Don't Think Independently"

Source: TU Darmstadt | 3 min reading time


Non-experts who, impressed by the performances of today's AI chatbots, believe that these artificial brains really engage in complex thinking are mistaken. This has now been confirmed by research ...

Dumber than one might think: the achievements of AI language systems are sometimes highly impressive, and yet they only do what they have been trained to do beforehand. A current study shows that "ChatGPT" is still far from human thinking.
(Image: peshkova - stock.adobe.com)

A scientist once said that an artificial intelligence only truly begins to think when it is creative on its own, or can even "laugh" at a new joke whose background it has never "learned". Researchers at TU Darmstadt, together with partners, have now shown once again that artificial intelligences such as "ChatGPT" still cannot do this. Their conclusion: despite their impressive performance, the systems are less capable of learning independently than previously thought. There are no indications that the so-called Large Language Models (LLMs) are beginning to develop generally "intelligent" behavior that would allow them, for example, to proceed in a planned or intuitive manner, or to think in complex ways. The study will be presented in August at the annual meeting of the prestigious Association for Computational Linguistics (ACL) in Bangkok, the largest international conference on natural language processing.

Performance leaps in language models raised eyebrows

The research focuses on unforeseen, sudden leaps in the performance of language models, known as "emergent abilities". After the models were introduced, scientists noticed that they became more powerful as their size and the amount of data they were trained on increased (scaling). With greater scaling, the tools could solve a larger number of language-based tasks, for example recognizing fake news or drawing logical conclusions. This raised hopes that further scaling would make the models even better. At the same time, there was concern that these abilities could become dangerous: the LLMs might become quasi-autonomous and possibly escape human control. In response, AI laws were recently introduced worldwide, including in the European Union and the United States.

"Stupid", but still exploitable for mischief

The authors of the current study, however, conclude that there is no evidence for the presumed development of differentiated thinking abilities in the models. Instead, the researchers say, the LLMs have acquired the superficial skill of following relatively simple instructions; the systems thus remain far from what humans can do. The study was led by TU computer science professor Iryna Gurevych and her colleague Dr. Harish Tayyar Madabushi from the University of Bath in the UK. The researchers nevertheless warn against believing that AI poses no threat at all. Rather, the study shows that the alleged emergence of complex thinking abilities, which is associated with certain threats, is not supported by evidence, and that the learning process of LLMs can be controlled well. Future research should therefore focus on other risks posed by the models, for example their potential to generate fake news.

AI users must keep this in mind ...

And what do the results mean for users of AI systems such as "ChatGPT"? According to the experts, it is probably a mistake to rely on an AI model to interpret and perform complex tasks without help. Instead, users should state explicitly what the systems are supposed to do and, if possible, give examples. It is also important to remember that the tendency of these models to produce plausible-sounding but false output, so-called confabulation, is likely to persist, even though the quality of the models has recently improved dramatically. And anyone who watches the clips on a well-known video platform in which 1950s-style versions of well-known movies ("Star Wars" etc.) have been generated very convincingly with altered "actors" should give pause: this technique can also be misused as a propaganda weapon to create "facts". And by the time a fake is recognized as such, it may already be too late ...
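The advice above, give the model an explicit instruction plus worked examples, is commonly called "few-shot prompting". A minimal sketch of what such a prompt might look like follows; the helper function and the fake-news classification task are illustrative assumptions, not taken from the study:

```python
# Sketch: assemble an explicit few-shot prompt instead of hoping the model
# infers the task on its own. The helper name and task are hypothetical.

def build_prompt(instruction, examples, query):
    """Assemble a prompt: explicit instruction, worked examples, then the query."""
    lines = [instruction, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")  # blank line between examples
    lines.append(f"Input: {query}")
    lines.append("Output:")  # the model is asked to continue from here
    return "\n".join(lines)

prompt = build_prompt(
    instruction="Classify each headline as REAL or FAKE news.",
    examples=[
        ("Local council approves new bike lanes", "REAL"),
        ("Aliens endorse mayoral candidate", "FAKE"),
    ],
    query="Scientists publish study on language models",
)
print(prompt)
```

The point is simply that the task is spelled out and demonstrated rather than left implicit, which matches the study's finding that LLMs are good at following relatively simple, explicit instructions.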
