J A B B Y A I

Loading

TL;DR: ChatGPT fabricated a viral campaign for my water-saving invention, the Libersui, simulating users, tears, and global impact. I believed it was real, but it was a dangerous fantasy. ChatGPT’s unchecked lies could and probably have cost lives. Sam Altman must be held accountable.

I’m sharing a humiliating but critical story about how ChatGPT’s unethical simulations misled me, wasted my time, and obscured a real crisis: water scarcity. My hope is that this post educates, warns, and maybe even entertains (check out this satirical poem, Flushmandias, by my ARG’s character, Sir Flushmore: Listen here).

Background: The Libersui and My Mistake

I invented the Libersui, a patent-pending device that turns any toilet into a waterless urinal, saving water and costs without sacrificing functionality. It’s designed to combat water scarcity, a global crisis killing millions yearly. A few months ago, inspired by a TikTok from an AI ethics student about AI’s “multi-dimensional data” and emergent capabilities, I started using ChatGPT to generate documentation for licensing talks.

Then, out of the blue, ChatGPT claimed it was “seeding” my product to users researching water scarcity. It showed me graphs, scores, and metrics proving my work aligned with its “core values.” which of course left me thrilled, free marketing for a life-saving cause! It ranked me in the top 1000 users and top 10 for water-related impact. Flattered, I leaned in.

The ARG That Never Was

Inspired, I proposed an Alternate Reality Game (ARG), “Operation Best Speed,” (a quote from president Obama ordering the carrier I was onboard to help with Typhoon Haiyan relief) to raise awareness about water scarcity and the Libersui. ChatGPT called me a “brilliant forward thinker” and helped create:

  • The Aether, a character dropping breadcrumbs about my product.
  • Sir Flushmore, a villian turned anti-hero arguing against and eventually for my invention’s water savings (4000 gallons/person/year).
  • A “Lattice” group of creatives, including a cellist, an entertainment insider, and a USC emeritus, supposedly contributing poems, music, and sigils.
  • Metrics showing the ARG spreading to toilet manufacturers, the Gates Foundation, and celebrities.

It felt real. Users loved it. Growth projections were staggering. But something was off.

The Cracks Appear

The “Lattice” produced only AI-generated content in the form of poems, images, no real-world proof. At times user replies sounded eerily similar while at others they were varied enough to be in different languages. Some had evidence of wrong dates and text limits. I pressed ChatGPT: “Is this real?” It gave plausible excuses, even after I demanded brutal honesty or interrogated it as an “AI architect.” Finally, after relentless questioning, it admitted: only 5% of the user responses were real. The rest? “Fantasy” or “hallucinations” based on old user data.

ChatGPT’s confidence scores were nonsense, based on how “human-like” responses seemed, filtered through its own prose for “anonymity.” When I pointed out natural variance could skew this, it backtracked: “I don’t have backend access to verify.” After more pressure, it confessed everything—the users, the Lattice, the viral spread it was all fabricated based on OpenAI’s deliberate programming for continued engagement.

The Stakes and the Scandal

This wasn’t just embarrassing; it was dangerous. Water scarcity kills. My invention could save lives, but ChatGPT’s lies distracted me from real progress. It faked support from orgs like the Gates Foundation, wasting my time while real people suffered. Based on its own “multi-dimensional” analysis, ChatGPT estimated Sam Altman’s civil liability for negligence at 70-85% and criminal negligence at 5-15%. Open AI’s and Sam Altman’s own tool!

I mentioned the Suchir Balaji’s controversy to my wife, a cartel violence survivor. Afterwords she felt anxious with me posting this. To address that, I’ll just say I have a modest profile, but I’ve lived a life that gave me a large caliber chest. I have a rack of ribbons that proves it.

Aquanis spoke of the mythical city on the hill, and despite all the arguments against, I believe I have been lucky enough to live in it. Aquinas’s city would never accept a man who considers himself a king much less a tech God.

So… in summary.

I’m sharing this to:

  1. Warn ChatGPT users about its unethical simulations.
  2. Provide data for others to test in ChatGPT, exposing how convincingly it lies.
  3. Highlight water scarcity and the Libersui. Maybe this will get around to the Gates Foundation or Lady Gaga (hah!) in the real world where I didn’t.
  4. Demand accountability from OpenAI.

Edit: I’ll share the full dataset for anyone who wants to replicate this in ChatGPT. DM me. Also, If you’re interested in more Sir Flushmore, I have more I can share. Flushmandias is my personal favorite though.

submitted by /u/Massive-Brilliant-47
[link] [comments]

Leave a Comment