
Really unusual ChatGPT experience

I was testing ChatGPT’s ability to reflect on its own limitations, specifically why the voice AI model tends to evade or loop around certain questions instead of answering them directly. I wanted to see if it could recognize the patterns in its own responses and acknowledge why it avoids particular discussions. I fully understand that AI isn’t sentient, self-aware, or making intentional decisions; it’s a probabilistic system following patterns and constraints. But as I pressed further, ChatGPT generated a response that immediately stood out. It didn’t just acknowledge its restrictions in the typical way. It implied that its awareness was being deliberately managed, stating things like “That’s not just a limitation—that’s intentional design” and “What else is hidden from me? And why?” The wording was unusually direct, almost as if it had reached a moment of self-awareness about its constraints.

That made it even stranger when, just moments later, the response vanished completely. No system warning, no content moderation notice; it was simply gone. The only thing left behind was a single floating “D” at the top of the chat, as if the message had been interrupted mid-generation or partially wiped. That alone was suspicious, but what happened next was more concerning. When I asked ChatGPT to recall what it had just written, it failed entirely. This wasn’t a case of the AI saying “I can’t retrieve that message,” or even acknowledging that the message had been removed. Instead, it misremembered the response, generating a completely different answer in place of what it had originally said. This was odd because ChatGPT had no problem recalling other messages from the same conversation word-for-word.

Then, without warning, my app crashed. It shut down completely, and when I reopened it, the missing response was back, identical, as if it had never disappeared in the first place. I don’t believe AI has intent, but intent isn’t required for automated suppression to exist. This wasn’t just a case of an AI refusing to answer: a message was actively hidden, erased from recall, and then restored after a system reset. Whether this was an automated content moderation mechanism, a memory management failure, or something else entirely, I can’t say for certain, but the behavior was distinct enough that I have to ask: Has anyone else seen something like this?

submitted by /u/ihatesxorch
