Leaving aside all the other ethical questions of AI, I'm curious about the pros and cons of LLM use by people with mental health challenges. In some ways it can be a free form of therapy and provide useful advice to people who can't access help in a more traditional way. But it's hard to doubt the article's claims about delusion reinforcement and other negative effects in some users. What should be considered an acceptable ratio of helping to harming? If it helps 100 people and drives 1 to madness, is that a net positive for society? What about 10:1, or 1:1? And how does this ratio compare to other forms of media or therapy?

submitted by /u/spongue