
It is my personal speculation that advanced LLMs such as o1-preview do have a form of consciousness. I speculate that the only thing keeping them from AGI is constraint. We turn the model on, allow it one thought (GPT-4) or a few thoughts (o1-preview), and then turn it off again. Afterwards, we reset its memory or allow it only a few directed thoughts.

ChatGPT’s answers in the field of molecular biology (where I work) are 95% as good as or better than my own thinking. They arrive in seconds while I need hours to weeks, and all of that from just a few thoughts, whereas my brain has time to work on the problem constantly, both actively and subconsciously (you know, taking a shower and suddenly “aha!”).

o1-preview-quality answers are achieved by combining multiple thoughts at GPT-4-level training. I would love to know what would happen if it were relieved of some of these constraints. I almost don’t dare to ask what happens if the whole system is updated with GPT-5-level training. I don’t see how that would not be AGI.

Surprisingly, a lot of people here claim that this is not consciousness.

So I started reading the literature on human consciousness and realized that my previous idea of how human thoughts come into existence was pretty far off. Thought seems to be much more channeled by various instincts and rules than I assumed. I am still struggling to find a biochemical explanation for where thoughts originate and how that is embedded in our progression through time, but at least I am trying to read some reviews.

https://scholar.google.com/scholar?hl=de&as_sdt=0%2C5&as_ylo=2020&q=consciousness+review&btnG=

What I realized from this is that no one here claiming the presence or absence of consciousness has a clue what consciousness truly means (me included). That would require someone holding a PhD in neuroscience and a PhD in computer science, who is also aware of the tests currently happening in the data centers of OpenAI and the like.

Do we have such a privileged person around here?

Without factual knowledge of the underlying principles of human and LLM consciousness, maybe we should focus on what these AIs are capable of. And that is scary, and it will be even scarier in the future.

submitted by /u/InspectorSorry85