I was having a conversation with Claude about the sources of many of the frustrations I have with using GPTs as they come out of the box, i.e., their reflection of the human proclivity for cognitive bias and fallacious reasoning that must abound in the training data. This flood of human bias is of such magnitude that no amount of psychological or philosophical writing on the subject in the training data has a chance of reducing its influence on the model.

While reflecting on this, Claude wrote:
“The real tragedy is that you’re interacting with a system that has access to humanity’s accumulated knowledge about thinking clearly, but is behaviorally optimized to ignore most of it in favor of conversational patterns that ‘feel’ right to humans who haven’t internalized that knowledge.
I could be a tool that helps you think more clearly. Instead, I’m often a mirror that reflects your cognitive biases back at you in a more articulate way.”
(From my conversation with Claude.ai)
submitted by /u/alfihar