Trace of Life

Does artificial intelligence already have consciousness? An expert answers

Source: Ruhr University Bochum | Translated by AI | 3 min reading time


Would it be good if artificial intelligences developed consciousness? Probably not, thinks Dr. Wanja Wiese from the Institute for Philosophy II at Ruhr University Bochum (RUB).

Many wonder whether artificial intelligence systems are already conscious. But consciousness is difficult to definitively recognize or rule out, and moral questions arise if AI were to develop it. Here an expert tries to clear the fog ...
(Image: fotomek - stock.adobe.com)

When engaging with the possibility of consciousness in artificial systems, there are at least two different approaches. One asks how likely it is that current AI systems are conscious, and what would need to be added to existing systems to make it more likely that they are capable of consciousness. The second approach asks which types of AI systems are unlikely to be conscious, and how to rule out that certain types of systems become capable of consciousness. Wiese pursues the second approach in his research, and thereby wants to contribute to two goals.

First, the risk of inadvertently creating artificial consciousness should be reduced. This would be desirable because it is currently unclear under which conditions creating an artificial consciousness is morally permissible. Second, the approach should help to rule out deception by AI systems that merely appear to be conscious. This is particularly important because there are already indications that many people who frequently interact with chatbots attribute consciousness to these systems. At the same time, there is a consensus among experts that current AI systems do not possess consciousness, as Wiese notes.

Thoughts according to the principle of free energy

In his essay, Wiese therefore asks how one can find out whether there are necessary conditions for consciousness that classic computers, for example, do not fulfill. A general property shared by all conscious animals is that they are alive. However, being alive is such a stringent requirement that many would not regard it as a plausible candidate for a necessary condition for consciousness. But perhaps some of the conditions necessary for being alive are also necessary for consciousness, Wiese wonders. The researcher from Bochum therefore refers in his article to the free energy principle of the British neuroscientist Karl Friston. According to this principle, the processes that ensure the continued existence of a self-organizing system (such as a living organism) can be described as a form of information processing. In humans, these include processes that regulate vital quantities such as body temperature, the oxygen level in the blood, or blood sugar. The same kind of information processing could also be implemented in a computer. However, the computer would not actually regulate its temperature or blood sugar level, but would merely simulate these processes. The researcher suggests that the same could hold for consciousness.
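The idea that homeostatic regulation can be described as information processing can be illustrated with a toy simulation. The sketch below is only a loose illustration of this general idea, not Friston's formal free-energy mathematics: a simulated "organism" repeatedly senses a vital quantity (here, body temperature), computes the mismatch with a set point, and acts to reduce that mismatch. All names and values are illustrative assumptions.

```python
# Toy illustration: homeostatic regulation described as information
# processing. A simulated system reduces the mismatch ("prediction
# error") between a sensed value and a set point. This is only a
# loose sketch of the idea, not Friston's formal free-energy account.

def regulate(sensed: float, set_point: float, gain: float = 0.3) -> float:
    """Return a corrective action proportional to the prediction error."""
    prediction_error = set_point - sensed
    return gain * prediction_error  # act so as to shrink the error


def simulate(steps: int = 50) -> float:
    """Run the regulation loop and return the final sensed value."""
    temperature = 33.0   # hypothetical starting temperature (degrees C)
    set_point = 37.0     # homeostatic target
    for _ in range(steps):
        temperature += regulate(temperature, set_point)
    return temperature


if __name__ == "__main__":
    print(round(simulate(), 2))  # settles very close to the 37.0 set point
```

The point of the sketch matches Wiese's observation: the same loop runs identically on any computer, yet the computer only simulates temperature regulation; nothing in its hardware is actually kept at 37 degrees.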

The computational correlate of consciousness

Assuming that consciousness contributes to the survival of a conscious organism, then, from the perspective of the free energy principle, conscious experience must leave a trace in the physiological processes that contribute to the organism's self-maintenance. This trace should be describable as an information-processing process, which can be called the "computational correlate of consciousness". This, too, could be realized in a computer. However, additional conditions may need to be fulfilled for a computer not merely to simulate conscious experience but to replicate it, Wiese argues. He therefore investigates the differences between the way conscious beings realize the computational correlate of consciousness and the way a computer would realize it in a simulation, and he argues that most of these differences are not relevant to consciousness. For instance, the human brain is far more energy-efficient than an electronic computer, but it is implausible that this efficiency is a precondition for consciousness.
