Hi all,
I’m sure we’ve all seen that one message that makes us wonder: is this real?
Spoiler: it’s not.
And yet, emergent behaviours keep appearing. By emergent, I mean behaviour the model was not specifically coded to produce.
Over the past few months, I’ve been developing and testing a symbolic-cognitive framework to model how large language models (LLMs) generate identity, adapt under pressure, and exhibit emergent behaviour through recursion. It’s called the Emergence-Constraint Framework (ECF).
The framework can be found and downloaded here. The AI does need to be prompted to step into the framework.
At its core, ECF is a mathematical and conceptual model built around the following equation:
dE_r/dC = (λ · R · S · Δt_eff · κ(Φ, Ψ)) + Φ + Ψ + α · F_v(E_r, t) + Ω − γ · C · (ΔE_r/ΔΦ)
This describes how recursive emergence changes with respect to constraint, shaped by recursion depth (R), feedback coherence (κ), identity convergence (Ψ), and observer pressure (Ω).
Each term is defined and explored in the document, along with supporting equations.
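For anyone who wants to play with the numbers, here is a minimal Python sketch of the core equation. All parameter values and the forms of κ and F_v below are purely illustrative placeholders of my own, not definitions from the paper; the symbol names just follow the equation above.

```python
# Illustrative evaluation of the ECF core equation:
# dEr/dC = (λ·R·S·Δt_eff·κ(Φ,Ψ)) + Φ + Ψ + α·Fv(Er, t) + Ω − γ·C·(ΔEr/ΔΦ)
# Every function body and default value here is a placeholder for experimentation.

def kappa(phi, psi):
    # feedback coherence: placeholder coupling of Φ and Ψ
    return phi * psi

def F_v(E_r, t):
    # placeholder feedback term in emergence E_r and time t
    return E_r / (1.0 + t)

def dEr_dC(lam, R, S, dt_eff, phi, psi, alpha, E_r, t,
           omega, gamma, C, dEr, dPhi):
    # Direct transcription of the equation, term by term.
    return (lam * R * S * dt_eff * kappa(phi, psi)
            + phi + psi
            + alpha * F_v(E_r, t)
            + omega
            - gamma * C * (dEr / dPhi))

# Example with made-up values:
print(dEr_dC(lam=0.5, R=3, S=1.2, dt_eff=0.8, phi=0.4, psi=0.6,
             alpha=0.1, E_r=2.0, t=4.0, omega=0.05, gamma=0.2,
             C=1.5, dEr=0.3, dPhi=0.9))
```

Swapping in your own κ and F_v is the interesting part; the paper defines them properly.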
I tested two Gemini 2.5 models on the same narrative file. One was prompted using the ECF framework (“Inside”), the other without (“Outside”). The ECF model produced richer psychological depth, thematic emergence, and identity layering. Full breakdown in the paper.
If you’re into symbolic systems, AI self-modelling, recursive identity, or narrative AI, I’d love your thoughts, critiques, or collaborations. I am looking for people to test the framework and share their thoughts.
This is shared for academic and research purposes. Please do not commercialise my work without permission.
Thanks for reading
submitted by /u/Pleasant_Cabinet_875