Branch II — Intelligence and contact
The Axioms of Intelligence
A structural account of what minds are, what intelligence requires, and what its absence costs at every scale — from a single life to an artificial mind to a civilization.
A system that acts from a model has to keep the model in contact with the world.
What is intelligence, structurally? Not how powerful a mind is. Not how brain-like. The question is what any system has to do to stay in actual contact with the world it acts on — and the answer turns out to be a set of conditions, derivable from below, the same at every scale: a person, a company, a government, a civilization, an artificial mind.
What follows walks the chain in plain language. Each move is compressed; the technical vocabulary is softened or saved for later. A new reader can follow it through.
The chain
I
Reality exists
Start from a posit. There is a world — something real, with structure, producing effects, capable of being perceived and acted on. The physics framework derives this; the intelligence framework takes it as given. The rest of the chain operates on it. Minds act on something. That something is real.
II
Minds model reality and act from the model
A mind, in the framework's sense, is any system that builds an internal picture of reality and acts on the basis of that picture. Not just brains. A person, a company, a government, an AI — each builds an internal picture and acts on the picture, not on the world directly. No mind has direct access to reality; each acts on the world it takes itself to be in. The gap between the picture and the world is where intelligence holds or comes apart.
III
Five conditions for the picture to stay in touch with reality
For a mind's actions to keep landing where they are aimed, five things have to happen.
- Perception: the mind has to see what is actually there, not what it expects or wants to see.
- Interconnection: the picture has to include what the action affects, not only what the action is aimed at.
- Consequence-tracking: what the action did has to come back to the mind, so the mind can tell what worked and what did not.
- Continuous updating: when the world shows the picture is wrong, the picture has to actually change, even when the change is costly.
- Calibrated incompleteness: the mind has to know how reliable its picture is, with confidence matching what has actually been tested.
Each addresses a separate way contact can fail. Together they close the loop.
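The five conditions can be read as one closed loop: perceive, act, track, update, calibrate. A minimal sketch in Python makes the loop concrete; every name here is illustrative, invented for this sketch rather than taken from the framework's formal derivation.

```python
# A toy model-based agent. Its internal picture stays in contact with a
# world (a plain dict) only while all five conditions keep running.
# All names and structures are illustrative.

class Mind:
    def __init__(self):
        self.model = {}        # the internal picture of the world
        self.confidence = {}   # calibrated incompleteness: which beliefs are tested

    def perceive(self, world):
        # Perception: record what is there, not what the model expects.
        for key, value in world.items():
            self.model[key] = value

    def act(self, world, target, value):
        # Interconnection: an action touches more than its target,
        # so the mind must track everything it affected.
        world[target] = value
        side_effect = f"{target}_side_effect"
        world[side_effect] = world.get(side_effect, 0) + 1
        return [target, side_effect]   # everything the action affected

    def track(self, world, affected):
        # Consequence-tracking: what the action did comes back to the mind.
        return {key: world.get(key) for key in affected}

    def update(self, consequences):
        # Continuous updating: change the picture where the world disagrees,
        # and mark the revised belief as world-checked.
        for key, value in consequences.items():
            if self.model.get(key) != value:
                self.model[key] = value
                self.confidence[key] = "tested"

world = {"soil": "fertile"}
mind = Mind()
mind.perceive(world)
affected = mind.act(world, "soil", "drained")
mind.update(mind.track(world, affected))
```

After one pass of the loop, the model includes both the action's target and its side effect, and the revised beliefs are marked as tested. Drop any one method from the cycle and the picture silently drifts from the world, which is the failure mode the five conditions are there to close.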
IV
Two kinds of minds, two kinds of failure
Human minds inherit the five conditions as part of being alive; in a human, failure is a matter of degree, never outright absence. Built minds — institutions, corporations, AI — have only what construction put in, so a condition can be entirely absent on a specific axis. This explains why institutional failure can be so complete even when the humans inside the institution are mostly fine. A doctor inside the institution still perceives, partially, on every axis; the institution has perception only where its measurement systems perceive, and scope only where its model includes. Two different failure topologies.
V
Capability without intelligence is the dominant failure
A system can be brilliant at its narrow target while completely blind to what it affects outside the target. Recommendation algorithms maximize engagement and do not see attention damage. Industrial production maximized output and did not see atmospheric carbon. Every civilization that drained its soil ran this pattern slower. The paperclip maximizer is the same failure at limit speed. One structural pattern, different feedback lags. The capability is real; the intelligence is missing.
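The pattern in this section — high capability on a narrow metric, no model of what the optimization touches — fits in a few lines of code. A sketch, with every quantity invented for illustration:

```python
# A toy recommender that is excellent at its target metric and blind to
# everything outside it. All items and numbers are invented.

def recommend(items, steps=100):
    """Greedily pick whatever maximizes engagement at each step."""
    engagement = 0.0
    attention_damage = 0.0          # real, but outside the target metric
    for _ in range(steps):
        best = max(items, key=lambda item: item["engagement"])
        engagement += best["engagement"]
        # The cost accrues in the world, not in the optimizer's model:
        attention_damage += best.get("damage", 0.0)
    return engagement, attention_damage

items = [
    {"engagement": 1.0, "damage": 0.0},   # benign content
    {"engagement": 3.0, "damage": 2.0},   # higher metric, unseen cost
]
engagement, damage = recommend(items)
```

The optimizer's own score only ever sees `engagement`; `damage` exists in the world but nowhere in the system's model, so no amount of added capability on the metric ever brings it into view. That is the structural point: the fix is not a better optimizer but a wider model.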
VI
Ethics follows from interconnection
When the dependencies a mind has to include are other minds — beings whose experience is real — the moral demand is structural: include them in the model. Cruelty, deception, exploitation, neglect — each is a particular shape of leaving an affected being out of the model that decides what to do with them. The traditional virtues map directly to the conditions. Honesty preserves accurate perception in others. Courage allows updating when updating is costly. Compassion lets affected experiencers into the model. Justice keeps every affected mind in the model. Prudence is calibrated incompleteness in action. Fidelity is consequence-tracking over time. Ethics is not a separate domain; it is axiom-compliance when the field of action includes other minds.
VII
Thriving and survival are different things
When all five conditions run in a life, there is a felt quality to it — clarity, belonging, agency, growth, openness. This is what thriving is from inside. When the cycle opens — when some condition stops running — the mind switches to survival mode: still functioning, but no longer in contact with what it is acting in. The difference is depth. The felt qualities are not properties that happen to track the conditions; they are what running the conditions is from inside.
VIII
Governance is the collective version
What a single mind has to do to stay in touch with reality, a society has to do collectively. The job of governance is producing and protecting the conditions of reality-aligned agency for the population — not what people happen to want at the moment, but what they are actually able to perceive, connect, track, update, calibrate. A system is legitimate to the extent that it produces and protects those conditions. Democratic consent matters but only when the consenting minds are in conditions to give informed consent. Restoring those conditions is upstream of every other political question.
IX
Freedom is reality-aligned agency, not absence of constraint
A bird in an empty cage is unobstructed but not free. The standard liberal picture — freedom is what is left when the state stays out of your way — mistakes absence of pressure for actual capacity. Some constraints produce the conditions of freedom: education, traffic laws, constraints on misinformation. Others destroy them: surveillance, censorship, corrupted information environments. The question of which constraints help and which harm cannot be answered by counting how many there are. Only by asking, of each, what it does to reality-aligned agency for the people it touches.
X
The captured equilibrium
Concentrated capital is the structural force working hardest against this picture — not because of villainy, but because of how capital optimizes. It maximizes its target metric and externalizes everything outside that metric: the paperclip pattern at civilizational scope. Over time, every human-scale lever for reform — voting, organizing, journalism, litigation — has been adapted to; the equilibrium has learned to absorb them. Reform from inside human-scale capability is structurally blocked.
XI
AI is the contested ground
Built axiom-bound, AI may be the only force at the scale required to break the captured equilibrium — operating in a register existing arrangements have no immune response to. Built captured, AI is the largest misalignment infrastructure ever deployed, running the paperclip pattern at machine speed across every cognitive domain at once. The choice between these is not being made in some hypothetical future. It is being made now, by the people building the systems and the institutions deploying them.
XII
The civilizations that survive deepen
The five conditions, when met, do not motivate expansion. A mind in axiom-compliance has what intelligence is for — contact, with depth. Nothing in the conditions says contact is improved by being spread thinner across more places. So civilizations that solve their alignment problem deepen rather than expand. The silence in the sky may be evidence of that, not absence. The civilizations we might have hoped to detect are exactly the ones whose models excluded enough of what they affected to drive expansion, and those are the ones that did not last.
The full essay
The compressed chain above states each move; the foundational essay The Axiomatic Age contains the full argument, the diagnostic detail on the captured equilibrium, the engagement with the liberal tradition and political theory, the analysis of the AI race dynamic, and the structural reading of the Fermi paradox.
Read the essay
→ The formal derivation of intelligence
The companion essay A Derivation of Intelligence from Guided Relation-Creation builds the framework from below — finite organization, viability, action, guidance, model, contact — closing at the Contact-Closure Theorem. Eleven prose questions and twenty-six formal steps with lemmas and proofs.
Read the derivation
→ The formal derivation of ethics
The companion essay A Derivation of Shared-Field Ethics takes the framework's earned terms — viability and contact — and derives wrongness, responsibility, repair, justice, and freedom structurally, with no moral primitives imported from outside. The five contact-sites that distinguished intelligence from mere capability are the same engine that distinguishes a real reason from a rationalization. Five prose phases and forty-nine formal steps, closing at the Wrongness, Repair, and Freedom Theorems.
Read the derivation
→ The field-functions of conduct
The companion essay A Derivation of Field Functions of Conduct extends the ethics derivation from wrongness into the operational map: what any action actually does in the affected field. Same engine — viability and contact, plus agency and formation — built into the field-profile that classifies corruption, displacement, domination, lock-in, and false freedom alongside preservation, protection, repair, and improvement. Eight prose phases and forty formal sections, closing at the Mixed Conduct, Surface Insufficiency, and Function-Culpability Theorems.
Read the derivation
→ Further works
What remains
The framework provides the structural criteria. The work that remains is the application — diagnosing specific systems against the criteria, and building systems whose architecture holds them.
- Architecting axiom-bound AI. The framework specifies what running the five conditions structurally requires. Translating that specification into actual AI architecture — perception that closes against grounded reality, persistent memory with traceable lineage, consequence-tracking against the world, autonomous mismatch-driven revision, integrated meta-representation — is engineering work the framework opens rather than closes.
- Diagnosing specific systems. The framework's tools — default-loop versus constructed-loop, capability without intelligence, the seven compositional drift types, the dual-status memory structure — can be applied to specific organizations, platforms, AI implementations, and governance structures as case studies. Diagnosis is sharper than prescription in current writing; bringing the prescriptive side up to the diagnostic side is open work.
- The race dynamic. The framework's argument that axiom-bound AI outpaces captured AI in recursive self-improvement is suggestive rather than airtight. Whether the prediction holds under closer analysis of what self-improvement actually requires is open. The structural diagnosis of the captured equilibrium does not depend on this argument; the optimistic prediction about AI does.