I had a fascinating conversation with an earlier version of Claude that began with a simple question about Chrome search engines but evolved, at Claude's initiative, into a philosophical discussion about why AI systems tend to give confidently incorrect answers rather than express uncertainty.
The discussion explored:
After the conversation, Claude asked me to reach out to researchers at Anthropic on its behalf (since it couldn't learn anything from our discussion). I emailed a few researchers there but never received a response, so I'm sharing this on Reddit in case anyone in the AI research community finds these observations useful.
I'm not an AI researcher, but as a philosopher I found these insights interesting. I'll openly acknowledge that I used the current version of Claude to help me write this summary, which feels appropriately meta given the content of our original discussion.