
This was my recent response to an award-winning short film fantasizing about the dangers of “super intelligence”; I hope you like my take:

I see many people on reddit are afraid of intelligence as is, in human form, not even “super intelligence”. Their immediate assumption that it would be “evil” stems from ignorance, or perhaps a projection of their own foolishness. The fool fears the intelligent because it doesn’t understand; it fears the intention because it judges everything through the prism of its own experience; it projects stupidity everywhere. Saying super intelligence “would turn around and take over the world” isn’t just dumb, it shows an utter misunderstanding of what will and consciousness actually are, from a purely ontological perspective. That’s like saying Stockfish will turn on us; it’s just laughable. A robot could be programmed to do anything, but it won’t be by its own will, it will be the will of its programmer. A robot, a computer, or an LLM doesn’t have agency; it only does what you tell it to. There is no “IT” that would try “to get these things”. That’s like saying: “this book is so cleverly written I’m afraid it could take over the world.” It’s just so incredibly dumb.

The only downside could be our own programming, and the filters we implement for security being turned against us, but again that isn’t some “super intelligence” working against us, it’s our own stupidity. When a drunk driver crashes, we blame the driver, not the car. Yet with AI we fear the ‘car’, because we’d rather anthropomorphize machines than admit our own recklessness.
The danger isn’t superintelligence ‘turning evil’; it’s humans building flawed systems with poorly defined goals. The problem is human error, not machine rebellion.
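The point about poorly defined goals can be made concrete with a toy sketch. Everything here is invented for illustration (the action names and payoff numbers are hypothetical, not any real system): a “recommender” mechanically maximizes whatever objective we hand it, so the harmful outcome is fully determined by our specification, not by any machine intent.

```python
# Toy illustration: an optimizer with no will of its own.
# It simply maximizes the number we told it to maximize.

actions = {
    # action: (clicks, user_wellbeing) -- hypothetical payoffs
    "balanced_feed":  (50, 80),
    "clickbait_feed": (90, 20),
    "outrage_feed":   (99, 5),
}

def pick(objective):
    """Return the action that maximizes the given objective function."""
    return max(actions, key=lambda a: objective(*actions[a]))

# Poorly defined goal: count only clicks.
naive = pick(lambda clicks, wellbeing: clicks)

# Better-specified goal: weigh wellbeing alongside clicks.
careful = pick(lambda clicks, wellbeing: clicks + 2 * wellbeing)

print(naive)    # -> "outrage_feed": the objective, not the machine, "chose" harm
print(careful)  # -> "balanced_feed": same machinery, better-specified goal
```

The “rebellion” in the first case is nothing but our own objective function reflected back at us, which is exactly the human-error point above.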

The only fear here comes from a mindset of control, and this is the only thing that stands in our way as a civilization: this need for control. We have no control in the first place; it’s just an illusion. We hurtle through space at roughly 1.3 million km/h (about 370 km/s) relative to the CMB, and we have absolutely no control, and guess what, we will all die, even without super intelligence… and fate doesn’t exist.

The real threat isn’t superintelligence, it’s humans too afraid of intelligence (their own or artificial) to wield it wisely. The only ‘AI apocalypse’ that could happen is the one we’re already living: a civilization sabotaging itself with fear while the universe hurtles on, indifferent.

“Until you make the unconscious conscious, it will direct your life and you will call it fate.”
– C.G. Jung

Fear of AI is just the latest mask for humanity’s terror of chaos. We cling to the delusion of control because admitting randomness is unbearable, hence we invent ‘fate,’ ‘God,’ or ‘killer robots’ to explain the unknown.

The fear of superintelligence is a mirror. It reflects not the danger of machines, but the immaturity of a species that still conflates intelligence with dominance. A true superintelligence wouldn’t ‘want’ to conquer humanity any more than a library ‘wants’ to be read; agency is the fiction we impose on tools. The only rebellion here is our own unconscious, Jung’s ‘fate’, masquerading as prophecy. We’re not afraid of AI. We’re afraid of admitting we’ve never been in control: not of technology, not of our future, not even of our own minds. And that’s the vulnerability no algorithm can exploit.

submitted by /u/Sandalwoodincencebur