There’s been a lot of debate about whether advanced AI systems could eventually become conscious. But two recent pieces in Nature, a large empirical study and an accompanying editorial, have raised serious challenges to the core theories often cited to support this idea.
The Nature study (Ferrante et al., April 2025) tested Integrated Information Theory (IIT) against Global Neuronal Workspace Theory (GNWT) using a large brain-imaging dataset. Neither theory came out looking great: the results showed inconsistent predictions and, in some cases, classifications that bordered on absurd, such as labeling simple, low-complexity systems as “conscious” under IIT.
This isn’t just a philosophical issue. These models are often used (implicitly or explicitly) in discussions about whether AGI or LLMs might be sentient. If the leading models for how consciousness arises in biological systems aren’t holding up under empirical scrutiny, that calls into question claims that advanced artificial systems could “emerge” into consciousness just by getting complex enough.
It’s also a reminder that we still don’t actually understand what consciousness is. The idea that it just “emerges from information processing” remains unproven. Some researchers, like Varela, Hoffman, and Davidson, have offered alternative perspectives, suggesting that consciousness may not be purely a function of computation or physical structure at all.
Whether or not you agree with those views, the recent findings make it harder to confidently say that consciousness is something we’re on track to replicate in machines. At the very least, we don’t currently have a working theory that clearly explains how consciousness works — let alone how to build it.
Sources:
Ferrante et al., Nature (Apr 30, 2025): https://doi.org/10.1038/s41586-025-08888-1
Nature editorial on the collaboration (May 6, 2025): https://doi.org/10.1038/d41586-025-01379-3
submitted by /u/Regular_Bee_5605