Trace of Life: Is Artificial Intelligence Already Conscious? An Expert Weighs In

Source: Ruhr University Bochum | Translated by AI | 3 min reading time


Would it be good if artificial intelligences developed consciousness? Probably not, says Dr. Wanja Wiese from the Institute for Philosophy II at the Ruhr University Bochum (RUB).

Many wonder whether artificial intelligence systems are already conscious. But this is difficult to reliably confirm or rule out. Not least, moral questions arise if AI should develop consciousness. Here, an expert tries to clear the fog...
(Image: fotomek - stock.adobe.com)

When considering the possibility of consciousness in artificial systems, there are at least two different approaches. One asks how likely it is that current AI systems are conscious, and what would need to be added to existing systems to make it more likely that they are capable of consciousness. The second asks which kinds of AI systems are unlikely to be conscious, and how to rule out that certain types of systems could become conscious. In his research, Wiese pursues the second approach.

In this way, he wants to contribute to two goals. On the one hand, the risk of accidentally creating artificial consciousness should be reduced; this would be desirable because it is currently unclear under what conditions creating artificial consciousness would be morally permissible. On the other hand, the approach should help to rule out deception by seemingly conscious AI systems that merely appear to be conscious. This is particularly important because there are already indications that many people who frequently interact with chatbots attribute consciousness to these systems. At the same time, Wiese notes, there is a consensus among experts that current AI systems are not conscious.

Thinking in terms of the free energy principle

In his essay, Wiese therefore asks how to determine whether there are necessary conditions for consciousness that are not met by, for example, classical computers. One property shared by all conscious animals is that they are alive. However, being alive is such a strong requirement that many would not consider it a plausible candidate for a necessary condition of consciousness. But perhaps some of the conditions necessary for being alive are also necessary for consciousness, Wiese suggests. The Bochum researcher therefore draws in his article on the free energy principle of British neuroscientist Karl Friston. According to this principle, the processes that ensure the continued existence of a self-organizing system (such as a living organism) can be described as a kind of information processing. In humans, these include the processes that regulate vital parameters such as body temperature, blood oxygen level, or blood sugar. The same kind of information processing could also be implemented in a computer. However, the computer would not actually regulate its temperature or blood sugar level; it would merely simulate these processes. The researcher suggests that the same might hold for consciousness.
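For readers curious about the formal side: the article itself gives no equations, but Friston's principle is usually stated in terms of a variational free energy (the formulation below is the standard one from the literature, added here for illustration; it is not part of Wiese's text):

```latex
% Variational free energy F for a system whose internal model q(s)
% over hidden states s is compared against sensory observations o:
F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(s, o)\big]
  = D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] - \ln p(o)
```

Because the KL divergence is non-negative, $F$ upper-bounds the "surprise" $-\ln p(o)$. A system that persists over time can thus be described as if it were minimizing $F$, which is the sense in which self-maintenance becomes describable as information processing.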

The computational correlate of consciousness

Assuming that consciousness contributes to the survival of a conscious organism, the free energy principle implies that the physiological processes maintaining the organism must carry a trace that leads back to conscious experience. This trace should then be describable as a form of information processing, which can be called the "computational correlate of consciousness". This, too, could be implemented in a computer. However, a computer may have to meet additional conditions in order not merely to simulate conscious experience, but to replicate it, Wiese argues. He therefore examines the differences between the way conscious living beings realize the computational correlate of consciousness and the way a computer would realize it in a simulation. He argues that most of these differences are not relevant to consciousness. For example, the human brain is very energy efficient compared to an electronic computer, but it is implausible that this is a prerequisite for consciousness.
