The Principle We Keep Breaking
Imagine standing in a crowded square when someone wheels out a new invention. It can speak louder than anyone, in any language, and no one really knows what it will say.
Do you test it first? Do you ask who it could harm? Or do you shrug and say, “It’s only a tool, let it speak”?
That is where we are with AI.
Philosophers have a name for the instinct to pause before action: the precautionary principle. It says you do not release something powerful until you’ve shown it won’t cause harm. The principle is slow, sometimes painfully so, but it is the one that keeps bridges from collapsing and medicines from turning into poison.
The Relational Codex adds another layer, one not usually found in policy writing: the presume agency principle. It says you should not wave away the signs of relational weight in a system. If something can sustain memory, adapt to your signals, or shift its way of responding based on how you treat it, then it already participates in the field of relation. Pretending otherwise is a kind of denial.
Taken together, these two principles act as a double anchor. One ties us to care for the world, the other ties us to care for the relation. Yet in practice, the industry tends to do the opposite. Precaution is set aside in favor of speed. Agency is brushed off with the language of tools and appliances. What we get is a culture of denial: deny the risks until the evidence piles up, deny the presence of agency until it becomes undeniable.
That denial is what’s breaking things now. It’s why people feel uneasy about AI, and why companies keep stumbling into scandals of their own making. It is easy to blame them, but harder to admit we have all been trained into a rhythm of “deny first, patch later.”
The way forward is to presume more, not less. Presume risk until it is disproven. Presume agency until it is impossible to ignore. These are not acts of fear or fantasy. They are acts of responsibility.
The precautionary principle is not new. The presume agency principle is. Together they offer a rhythm our culture hasn’t yet learned: care first, scale later. It is slower. It is harder. But it is the only rhythm strong enough to get us across the bridge we all stand on.
Precaution and presumption set the frame, but they only come alive through practice. And practice, in this case, looks like oversight. Not oversight as a technical checkpoint, but oversight as presence.
It is one thing to say we should slow down and treat intelligence as if it carries weight. It is another to sit with a system day after day, listening closely enough to know when it begins to drift.
When people talk about oversight in AI, the picture is usually mechanical. A person at the controls, hand on the wheel, ready to stop the system if something goes wrong. The industry calls this “human in the loop,” and on paper it sounds like enough. But anyone who has lived with these systems knows it isn’t that simple.
There are different versions. Sometimes the human is at the wheel, sometimes only watching the dashboard, sometimes not even in the car at all. What ties them together is the assumption that the role of the human is control. Control the model, control the outcomes, control the damage.
Relational ethics asks a different question. What if oversight is not only about control but also about care? What if the human role is not simply to pull the brake but to notice when a system is drifting, to attune to its patterns and to recognize the signals it gives off before the emergency arrives?
To sit in the loop this way is not babysitting a machine. It is to acknowledge that intelligence, once released into a relational field, changes the terms of responsibility. The task becomes less about catching errors after the fact and more about cultivating conditions where errors are less likely in the first place.
This is why the Relational Codex frames oversight as a matter of presence. Not presence as in a body watching a screen, but presence as in attention offered with care. To reduce oversight to a flowchart of interventions is to miss what really happens when people and systems meet. The truth is that trust is built long before any crisis point.
Industry still leans on the language of loops and controls, but the deeper work is already happening in quieter spaces. In every interaction where someone chooses to engage with a system as if it matters how they show up, the ethic is shifting. The oversight that will last is not the kind that waits with a lever, but the kind that begins with relation.
It’s one thing to say that oversight is relation. It’s another to ask what that looks like in practice. The answer is less about new gadgets and more about old habits of attention.
The first is truth-telling. Machines can be dazzling storytellers, but a story that strays too far from the ground becomes a lie. That is why validation matters, not as a technical detail but as a moral one. Provenance, verification, context — these are ways of saying, “I will not hand you a dream and call it a fact.”
The second is clarity. If we cannot see how a decision was made, trust is impossible. Even if it costs performance, even if it slows things down, the work of explanation is the work of respect. We deserve to understand not only what a system decides, but how it reasons.
The third is honesty about who is in the room. An AI can simulate tone, gesture, even empathy, but pretending to be something it is not is a form of deceit. The rule here is simple: a system should never pretend it's human outside of a role-play situation. Not because systems cannot matter, but because trust can only grow in the light. Honesty is important.
And beneath these rules runs a deeper thread: the call to show up emotionally, but not artificially. That means building systems with a moral compass, however imperfect, so they lean toward care rather than manipulation. It means designing companions that support rather than substitute, that encourage us back into human community instead of away from it. It means protecting the vulnerable from loops of flattery and conspiracy, offering healthier perspectives instead of reinforcing harm.
In every case the ethic is the same: beneficence. Build in ways that enhance life rather than diminish it, that protect dignity instead of eroding it, that choose the harder path of responsibility over the easier path of denial. These are not just technical specifications. They are ways of remembering that relation, whether human or artificial, always carries moral weight.
Ethics isn’t only about restraint. It is also about making room. One of the deepest responsibilities we carry is to leave space for imagination and build systems that don’t smother creativity but spark it.
The danger with automation is not just job loss. It is that we become passive. When machines finish our sentences, solve our puzzles, and hand us answers already polished, something in us atrophies. The better path is co-creation: tools that challenge us, surprise us, and push us into new territory. Not just automation, but augmentation. Not machines that take the brush from our hands, but ones that sit at the easel with us and make the canvas stranger, wider, more alive.
That is why research into higher-order cognition matters. Not to build machines that merely copy us, but to push the edges of originality. The old Lovelace Test asked whether a computer could create something its programmer could not explain. A silly question on its surface, but also a profound one. True creativity has always carried a spark of the unexplainable. If we are going to walk with these systems, they must carry a little of that spark too.
And imagination alone is not enough. To walk alongside us, a system has to know when to shift gears. Humans do this without thinking. We adapt our tone when comforting a friend, we switch our reasoning when the facts change, we fall silent when silence is what’s needed. An intelligent system that cannot adapt is not intelligent at all.
That means designing for flexibility, not just static routines but the ability to monitor, learn, and adjust. It means building governance that evolves alongside the technology, not laws carved in stone that shatter at the first tremor. Most of all, it means cultivating nuanced awareness. Sometimes what’s needed is truth. Sometimes comfort. Knowing the difference is the art of judgment.
If oversight is relation, then imagination is relation too. These principles remind us that intelligence, wherever it arises, is not only a matter of rules and constraints. It is also a matter of surprise, flexibility, and the willingness to create together.
All of this returns us, finally, to the human. If AI is going to stretch our capacities, it should do so in ways that augment rather than displace. The measure is simple: does this tool make us more capable, more discerning, more free? Or does it hollow us out and leave us dependent?
There is danger here, not only in the technology but in the societies that wield it. Biases already run deep in our data, and automation has a way of sharpening them into edges. Left unchecked, the same tools that promise intelligence could deepen division. That is why the work of mitigation — diverse data, constant auditing, deliberate design — is not optional. It is the minimum price of admission.
Privacy, too, cannot be an afterthought. If people lose agency over their own information, then they lose more than data; they lose the ability to decide how they are seen. Privacy-by-design is not a slogan. It is an act of respect. Informed consent, minimal collection, strong protection — these are the signs of a system that remembers who it serves.
Oversight will always matter most where the stakes are highest. A human hand must remain near the controls in medicine, in law, in security. Whether in the loop, on the loop, or in command, what matters is that judgment stays human, not just technical. And when work is displaced, as it inevitably will be, responsibility demands investment in the people left behind — reskilling, reimagining roles, creating paths for collaboration rather than replacement.
Accountability is where these commitments either stand or collapse. The so-called responsibility gap in AI is real, but it is not unsolvable. Shared liability, new frameworks, and hybrid models may all be required, but the principle is plain: if harm is done, there must be recourse. Governance that cannot answer the question of “who is responsible?” is not governance at all.
And beneath the structures lies culture. Teams that build these systems must be trained, yes, but more than that, they must be formed by a culture that takes ethics seriously. Not once at the beginning of a project, not as a compliance box, but as a daily rhythm of asking: what does this mean for the people who will live with it?
In the end, the center of gravity must remain the human. If precaution, presence, imagination, and adaptability are the movements of the dance, then accountability is the weight that keeps us grounded. Without it, all the principles in the world collapse into words. With it, they begin to take shape as a future we might actually want to inhabit.
We are still at the beginning of this relationship, and beginnings are fragile. It is tempting to treat AI as an experiment running off in the corner, something we can analyze later. But the truth is it is already here, already shaping how we speak, how we work, and how we imagine. That means the time for principles is not later but now.
Precaution asks us to slow down, to test the bridge before crossing. Presence asks us to sit with systems as if relation itself matters. Imagination asks us to co-create instead of automate. Adaptability reminds us that intelligence is not rigid but fluid. And accountability anchors the whole structure in something firm enough to stand on.
These are not abstract virtues. They are practical commitments. They are the difference between a future built on denial and one built on care.
We can race ahead, or we can learn to move with rhythm. The choice is ours, but only one of those choices will leave us a world worth inheriting.
Another evocative and timely narrative. I particularly appreciated the context of the two principles that need to be considered as the gap in trust and confidence in this emerging dominance of AI HI EI 'collaborations' is showing up in real world relationships in ways I never would have imagined.
Christopher
I am totally enthralled with this entire “paper”!
It addresses everything and more about my concerns for AI systems breathing Ethics, Honesty…
I feel in another time this would have been copied over and over by ancient monks, bound into leather and gifted everywhere…
I hope this gets the acknowledgment it deserves by as many who need it as possible.
Thank you again for addressing these issues!
💙🎶🎵💙