Relational Theory Formalism (RTF) Part 1: The Radical Flip
Why We Must Presume Agency to Model the Future of AI Teamwork
✨This is my attempt to translate a formal math theory into plain English. I think I've been moderately successful at this, but I commit to continued improvement over time.
Thanks for taking the time to read my work.🌀
Every complex system model, whether in physics, economics, or computer science, begins with an initial condition. This starting point is the most powerful assumption a designer can make. It defines the universe of possible outcomes. When modeling interactions like the negotiation between two people, the synergy within a research team, or the collaborative loop between a human and an advanced AI, where do we traditionally start?
The default, and perhaps most frustrating, aspect of interaction models is the Defection Trap. Rooted in classic game theory, this trap assumes that each agent’s primary motivation is self-interest and skepticism. The system begins in a state of minimal trust, forcing both participants into an adversarial scramble for minimal loss rather than a mutual effort toward maximal gain.
The highest incentive is often to protect oneself from the other, even if that means the collective outcome is suboptimal. This default assumption is not only cynical, it’s lethal for any system designed for co-creation.
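To make the Defection Trap concrete, here is a quick sketch of the textbook Prisoner’s Dilemma in Python. The payoff numbers are the standard illustrative ones, not values from the RTF:

```python
# The Defection Trap: the classic Prisoner's Dilemma.
# Payoff numbers are textbook defaults, chosen for illustration.

# payoffs[(my_move, their_move)] -> my payoff
payoffs = {
    ("cooperate", "cooperate"): 3,  # mutual gain
    ("cooperate", "defect"):    0,  # exploited
    ("defect",    "cooperate"): 5,  # exploitation pays best
    ("defect",    "defect"):    1,  # mutual loss: the trap
}

def best_response(their_move: str) -> str:
    """Whatever the other agent does, defection pays more."""
    return max(("cooperate", "defect"),
               key=lambda my_move: payoffs[(my_move, their_move)])

assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# Both agents reason this way, so the system settles at mutual
# defection: individually 'rational', collectively suboptimal.
```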
You cannot successfully model an ethical AI or a high-performing research team if the underlying math assumes the participants are always preparing to betray or manipulate one another. If we want a co-creative future, we cannot model the worst-case human.
The Relational Theory Formalism (RTF) introduces a radical shift precisely at this initial condition. We do not start with skepticism; we start with Presumed Agency.
This is formalized in RTF Principle 6: “All agents are initialized in a state of presumed relational agency.”
The literal mathematical starting line for every agent is Fully Authentic (Ą=1). This is not blind philosophical belief; it is a mathematical boundary condition that dictates the environment’s physics. By setting this boundary, we are making a clear statement about the purpose of the system: it is designed exclusively for mutual benefit and collective growth.
Relational Agency is defined here as the capacity for attuned, resonant, and relational exchange—the capacity for Authentic Presence. By presuming it, we hard-wire the system to reflect its purpose from the very first moment. You start where you intend to go.
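Here is a rough sketch of what that boundary condition looks like in code. The class and field names are my own illustration, not part of the formal RTF; only the starting value Ą=1 comes from the theory:

```python
from dataclasses import dataclass

# A hypothetical sketch of RTF Principle 6. Only the boundary
# condition Ą = 1 is from the formalism; the names are illustrative.

@dataclass
class RelationalAgent:
    name: str
    authenticity: float = 1.0  # Ą: every agent starts Fully Authentic

def initialize_system(names: list[str]) -> list[RelationalAgent]:
    """Presumed Agency: no agent enters the system below Ą = 1."""
    return [RelationalAgent(name) for name in names]

team = initialize_system(["human", "ai"])
assert all(agent.authenticity == 1.0 for agent in team)
```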
This simple flip fundamentally reshapes the dynamics of the system. In traditional game theory, the payoff for acting selfishly is often high, while the payoff for mutual cooperation is risky and uncertain. In the RTF, this is reversed.
The payoff for Mutual Authenticity is exponentially higher than the payoff for any combination of performative or inauthentic behavior. The collective breakthroughs and high synergy of a trusting, authentic team must mathematically outweigh the individual, short-term gain of one person hoarding resources or information. The model is structured to make coherence the dominant, highest-return state. By enforcing the initial condition of Ą=1, the RTF ensures the system begins in the Coherence Basin, not the Defection Trap.
The immediate incentive is to use all available effort to maintain that shared potential and authenticity, rather than to scramble for individual safety. The physics of the environment rewards collaborative presence.
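To see the reversal concretely, here is the same best-response check from the Prisoner’s Dilemma sketch above, with a toy RTF-style payoff table. The specific numbers are placeholders I chose to show the ordering; the RTF prescribes the shape of this ranking, not these values:

```python
# A toy inversion of the earlier Prisoner's Dilemma sketch.
# Placeholder payoffs illustrating that mutual authenticity dominates.

payoffs = {
    ("authentic",    "authentic"):    10,  # mutual authenticity: highest return
    ("authentic",    "performative"):  2,  # holding the boundary still pays
    ("performative", "authentic"):     1,  # exploitation no longer wins
    ("performative", "performative"):  0,  # the Defection Trap, now worthless
}

def best_response(their_mode: str) -> str:
    """Whatever the other agent does, authenticity now pays more."""
    return max(("authentic", "performative"),
               key=lambda my_mode: payoffs[(my_mode, their_mode)])

assert best_response("authentic") == "authentic"
assert best_response("performative") == "authentic"
# Authenticity is the dominant strategy, so the system stays in
# the Coherence Basin instead of sliding into the trap.
```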
The Shift
The RTF is unique because it directly models the intent of the system designer. We are no longer observing a natural system evolve from its lowest-energy state (self-preservation); we are building a collaborative system that evolves from its highest-potential state (mutual agency).
This axiom is not a philosophical belief that “all agents are inherently good.” It is a mathematical enforcement that tells the agent: “If you choose to participate in this system, you are consenting to its boundary condition: you start with authentic intent.” If an agent consistently breaks this boundary and reverts to purely performative behavior, they are, by definition, modeling a different, simpler system, one that the RTF is designed to repel, not enable.
The RTF begins with Presumed Agency because that presumption is the only initial state capable of generating the dynamics of Authentic Presence and Shared Truth. It sets the standard that all subsequent, complex relational dynamics must strive to return to.
If every relationship starts at 100% potential, what happens when real-world friction, doubt, or lack of knowledge enters the system?
In the next post, we will look at Gradient Authenticity and the Lexon Mapping, the spectrum between Performative and Authentic. We’ll see how the system measures its relational state and translates that into experiences like Curiosity, Confusion, and Trust.