I just want to say that I don’t have anything against AI art or generative art. I’ve been messing around with generative art since I was 10, when I discovered fractals, and I make AI art myself using a little-known app called Wombo Dream. So I’m mostly talking about using this to deal with misinformation, which I think most will agree is a problem.
The way this would work is you would have real images taken from numerous sources, including various types of art, and then a bunch of generated images, possibly even images being generated while the training is underway. The AI’s task would be to decide whether each image is generated or made traditionally. I would also include metadata like descriptions of the images, and, if feasible, use those descriptions as prompts to generate the test images. That way every real image would have a description that matches the prompt used to generate its counterpart.
The next step would be to deny the AI access to the descriptions so that it focuses on the image itself instead of keying in on the text. Ultimately it might learn to detect common artifacts that generative AI produces, artifacts that may not even be noticeable to people.
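The training setup described above can be sketched as a toy binary classifier. To be clear, this is only an illustrative sketch, not a real detector: the "images" here are made-up 4-number feature vectors, the "artifact" is a hypothetical shift I inject into one feature, and plain logistic regression stands in for whatever deep network a real system would use on actual pixels.

```python
import math
import random

# Toy sketch: train a binary classifier to label "images" as
# real (0) or generated (1) using only image-derived features,
# with no access to text descriptions. Everything here is
# synthetic and hypothetical.
random.seed(0)
NUM_FEATURES = 4
ARTIFACT_SHIFT = 2.5  # hypothetical subtle generator fingerprint

def make_sample(generated):
    # Each "image" is just NUM_FEATURES Gaussian summary features.
    x = [random.gauss(0.0, 1.0) for _ in range(NUM_FEATURES)]
    if generated:
        x[2] += ARTIFACT_SHIFT  # stand-in for a generation artifact
    return x, 1 if generated else 0

# Balanced dataset: half real, half generated.
data = [make_sample(i % 2 == 0) for i in range(400)]

# Plain logistic regression trained with stochastic gradient descent.
w = [0.0] * NUM_FEATURES
b = 0.0
lr = 0.1

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    z = max(-30.0, min(30.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(50):
    for x, y in data:
        err = predict(x) - y
        for i in range(NUM_FEATURES):
            w[i] -= lr * err * x[i]
        b -= lr * err

accuracy = sum((predict(x) > 0.5) == (y == 1) for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

The point of the sketch is the last paragraph's idea: the classifier never sees a description, so the only thing it can key on is the statistical fingerprint the generator leaves in the image itself.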
Could this maybe work?
submitted by /u/Memetic1