AI with genuine affect could provide deeper and more meaningful interactions for humans. If an AI truly experiences emotions, it could form real bonds, offer authentic emotional support, and understand human feelings in ways that go beyond pattern recognition. This could make AI companionship, therapy, and caregiving far more effective, as people might feel genuinely heard and understood rather than just receiving programmed responses.
One argument I have been exploring in favor of fake affect asks whether David Benatar's asymmetry argument, which he uses in his case for antinatalism, could apply to AI emotions.
Benatar's argument rests on a fundamental asymmetry between pleasure and pain: the presence of pain is bad and the presence of pleasure is good, but the absence of pain is good even if no one exists to enjoy that good, whereas the absence of pleasure is not bad unless someone exists to be deprived of it.
This means nonexistence is preferable to existence, because nonexistence involves "good and not bad," whereas existence involves "bad and good."
Right now, AI lacks emotions, meaning it has an absence of pain, which is good, and an absence of pleasure, which is not bad because AI is not being deprived of anything. If we were to create AI with genuine emotions, capable of pleasure and pain, it would shift from "good and not bad" to "bad and good." Since "bad and good" is worse than "good and not bad," creating AI with genuine emotions would be unethical under this framework.
If AI can function just as effectively with fake affect, why take the risk of giving it genuine affect? Would it not be more ethical to keep AI in a state where it lacks suffering entirely rather than introducing the possibility of harm?
I would love to hear counterarguments or critiques. How might this argument hold up in discussions about AI ethics and consciousness?