With recent improvements in generative AI producing ever more realistic images and videos, and AI-written text becoming ever harder to distinguish from human writing, calls for labeling AI content have increased.
However, I think such labels will never limit the harm they are intended to mitigate, namely the spread of misinformation. Bad actors will simply not abide by the law. And then what?
AI scanners are trash. In my humble, half-layman opinion as an electrical engineer who took some courses on deep learning but does not work in this field, the GAN architecture used to produce such content ensures the output is always on the edge of not being reliably detectable as AI-generated, because the training process itself pits the generator against a detector and explicitly teaches it to trick exactly that kind of AI scanner.
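To make that adversarial dynamic concrete, here is a minimal GAN training sketch in PyTorch. This is my own toy illustration, not anything from the post itself: the network sizes and data are made up, and real generators are far larger, but the loss structure is the point.

```python
import torch
import torch.nn as nn

# Toy generator (noise -> sample) and discriminator (sample -> realness logit).
G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(64, 2)  # stand-in for a batch of real data

for step in range(1000):
    # Discriminator step: learn to score real as 1 and fake as 0.
    # This is, in effect, an "AI scanner" trained inside the loop.
    fake = G(torch.randn(64, 16)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: optimized so the detector scores its fakes as real.
    # The generator's training objective is literally "defeat the detector".
    loss_g = bce(D(G(torch.randn(64, 16))), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```

At convergence the discriminator can do no better than guessing on the generator's output, which is why any external scanner built on similar cues starts from a losing position.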
The only thing that will save us from a populace being dangerously influenced by subtle misinformation is finding ways to verify content as *not* AI-made, however that may look. That seems more realistic to me than forever trying to catch up with AI models on the detection side.
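One hypothetical shape such verification could take, purely as my own illustration since the post deliberately leaves this open: cryptographic provenance, where a trusted device or publisher signs content at creation time and anyone can later check that it has not been altered. A minimal sketch using Python's `cryptography` package with Ed25519 keys:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical setup: a camera or publisher holds the private key;
# the matching public key is distributed so consumers can verify.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"raw bytes of a photo or article"
signature = private_key.sign(content)

# A consumer checks the content against the signature.
try:
    public_key.verify(signature, content)
    print("Signature valid: content unchanged since it was signed.")
except InvalidSignature:
    print("Signature invalid: content altered or not from this signer.")
```

Note what this does and does not prove: a valid signature only attests that the signer vouched for these exact bytes, not that the content is true, so the hard part is deciding who gets to sign and why we trust them.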
submitted by /u/Der_Besserwisser