
I was thinking about the concept of AI safety, and it occurred to me that, to cover edge cases, it would make sense for AI researchers to develop models with intentionally unaligned training and fine-tuning. It makes me wonder: if such models do exist, how would they fare compared to models that have been aligned to be more friendly and conversational?

submitted by /u/Confident_Hand5837
