I didn’t set out to build a standard. I just wanted my GPT to reason more transparently.
So I added constraint-based logic, tagged each step as Fact, Inference, or Interpretation, and exported the whole thing in YAML or Markdown. Simple stuff.
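To make that concrete, here's a minimal sketch of what a tagged reasoning trace might look like. This is my own illustration, not code from the Origami-S1 spec: the `Tag`/`Step` names and the `to_yaml` helper are hypothetical, and the real export format may differ.

```python
# Hypothetical sketch of tagged reasoning steps; not from the Origami-S1 spec.
from dataclasses import dataclass
from enum import Enum


class Tag(Enum):
    FACT = "Fact"                      # externally verifiable claim
    INFERENCE = "Inference"            # derived from earlier steps
    INTERPRETATION = "Interpretation"  # judgment call, the weakest link


@dataclass
class Step:
    tag: Tag
    text: str


def to_yaml(steps: list[Step]) -> str:
    """Export a reasoning trace as a simple YAML list (illustrative only)."""
    lines = ["steps:"]
    for s in steps:
        lines.append(f"  - tag: {s.tag.value}")
        lines.append(f"    text: {s.text!r}")
    return "\n".join(lines)


trace = [
    Step(Tag.FACT, "The dataset contains 10k rows."),
    Step(Tag.INFERENCE, "A 10% sample is about 1k rows."),
    Step(Tag.INTERPRETATION, "1k rows is probably enough for a pilot."),
]
print(to_yaml(trace))
```

The point of tagging is that a reader can audit the trace: facts can be checked, inferences can be re-derived, and interpretations are flagged as the places where the reasoning could bend.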
Then I realized: no one else had done this.
What started as a personal logic tool became Origami-S1, possibly the first symbolic reasoning framework built for GPT-native AI.
I’ve published the spec and badge as an open standard:
🔗 Medium: [How I Accidentally Built What AI Was Missing]()
🔗 GitHub: https://github.com/TheCee/origami-framework
🔗 DOI: https://doi.org/10.5281/zenodo.15388125
submitted by /u/AlarkaHillbilly