Branch II — Foundational essay

The Axiomatic Age

A structural account of what minds are, what intelligence requires, and what its absence costs at every scale — from a single life to an artificial mind to a civilization.

Vincent Tomann

There is a contradiction at the center of the present moment. Humans have never been more capable, and humans have never been more out of contact with what their capability is doing. Markets allocate trillions in microseconds and cannot tell whether the allocations are destroying the substrate they depend on. Platforms model billions of minds in fine detail and cannot tell whether what they are doing to those minds is good. Governments deploy more reach than any government in history, and the people inside them describe the experience as drift — a slow loss of contact with whatever the institutions were supposed to be for.

Capability is rising. Something else is not. The cycle that used to break empires now operates at the scale of civilization itself, and the suffering it produces is distributed across more lives, in more forms, than any older arrangement ever produced.

What is missing has a name we use loosely. We call it intelligence. But everyday speech conflates intelligence with capability, and the conflation is part of what keeps the problem invisible. A capable mind solves the problems in front of it. An intelligent mind stays in contact with whether they are the right problems, whether the solutions are working in the world or only in the model, whether something is being broken somewhere it has not been looking. Capability without intelligence is locally smart and globally stupid — and most of the structures we live inside are exactly that.

This essay is about what intelligence actually is. Not as a property of brains, not as a measure of how much a mind can do, but as a set of conditions any system has to satisfy to stay in functional relation with the reality it acts on. The conditions are derivable. They are the same at every scale — a person, a company, a government, a civilization, an AI. They have always been there, reached for in fragments by every wisdom tradition that lasted. What is new is the possibility of stating them cleanly, and the urgency of doing so before the systems being built right now cement the gap rather than close it.

If they can be stated cleanly, the questions of the moment look different than they do in the headlines. The choice is not capability versus restraint. It is capability with intelligence or capability without it. Only one of those leads anywhere worth arriving at.

The argument begins with what minds are.

A mind is any system that builds a model of reality and acts on the basis of that model. Not consciousness, not soul, not anything that requires resolving the hard problem of how subjective experience arises in matter. A mind, in this sense, is whatever does the modeling.

A person fits. The brain takes in sensory information, builds a representation of what is around it, decides what to do based on that representation. We never act on the world directly. We act on the world we take ourselves to be in.

A company fits, even though no single brain is doing the modeling. The model is built collectively — internal documents, market analyses, leadership intuitions, accumulated experience, assumptions about the customers being served. When the company decides anything, the decision passes through that model before it reaches anything real. Governments, civilizations, AI systems — same structure, different substrate.

None of them has direct access to reality. Each acts on the world by way of a representation of the world. The gap between the world a mind is acting from and the world it is acting in is where intelligence either holds or comes apart.


Watch what holding looks like. You are walking through a forest at dusk. You see a long curved shape on the path ahead and your model says snake. You stop. The action — stopping — was based on your model. Now you look more carefully. The new perception arrives. The shape is not moving the way a snake would move. It is the wrong color. You take a step closer, and you can see it clearly: a fallen branch. The model just updated. You walk past it.

That entire sequence — perception, modeling, action, consequence-back, update, next perception — happened in three seconds. It is the loop. Your mind ran it without your conscious effort. This is what minds do whenever they are working.

Now take the loop apart. Five things had to happen for it to close. Five, because the representation a mind acts from has five separately failable aspects, each one a different operation, none substituting for any of the others.

The first is accurate perception. The mind has to see what is actually there. Hallucinations are the extreme case, but most failures of perception are subtler — a doctor whose training filters out what does not fit a familiar diagnosis, a society whose information environment has been shaped to obscure rather than reveal. The action lands somewhere other than where it was aimed, and the mind may not even register that it missed.

The second is interconnection. The model has to include what the action will affect, not only what the action is aimed at. Every action enters a web of dependencies. A factory producing what its customers want is also producing what its process discharges into the air and water. A choice that benefits one party alters the position of every other party who shares the field. When the model excludes a dependency, the cost does not disappear. It falls on whatever was excluded.

The third is consequence-tracking. What the action did has to come back to the mind. The dangerous form of this failure is partial closure: the mind tracks some consequences and not others. The tracked ones confirm the model. The untracked ones accumulate in the world the mind is not looking at. Industrial production tracked output and revenue carefully. It did not track atmospheric carbon. The model saw a hundred years of confirmation. The world was telling a different story.

The fourth is continuous updating. Feedback received has to actually move the representation when the feedback shows the representation is wrong. Many systems take in feedback and do not change — the information arrives, the model defends itself, the action proceeds on a representation the world has already falsified. This condition is hard to meet because the model is identity. The view of the world we hold is also who we are. Breaking it feels like breaking a part of ourselves. But a mind that cannot let go of a model the world has falsified is acting on a memory rather than on the world.

The fifth is calibrated incompleteness. The mind has to know how reliable its representation is. No model is ever complete. The fifth condition does not require filling the gaps — that is impossible — but requires that confidence track contact. A part of the model tested often by the world can be acted on with more confidence. A part untested deserves less. The two failure modes are mirror images. A mind that claims more certainty than its model has earned acts past where the model can support. A mind that withholds action until everything is certain never acts; reality moves on without it. Both look different from outside. They share the same structure.

These are not five names for one thing. They are five distinct operations, each performable, each failable, each catching what the others cannot. There are not fewer than five because nothing in the remaining four sees what is lost when one drops out. There are not more than five because every candidate for a sixth — action-execution fidelity, goal stability, memory — turns out to be either subsumed by one of these or presupposed by what a model-using system already is.

A heart has rhythmic specifications. A bridge has load-bearing specifications. A flame has combustion specifications. A mind, of any kind — biological, institutional, civilizational, artificial — has these five.

Run them all and the loop closes. What the mind does feeds into what the mind sees, what the mind sees feeds into what the mind models, what the mind models feeds into what the mind does next. The loop is alive.
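
Because the loop is a structure, it can be written down as one. The sketch below is a toy, and every name in it (World, Mind, the numbers) is an assumption made for illustration; nothing here specifies a real implementation. What it shows is only that the five conditions are five distinct operations, each a separate line that the others cannot substitute for.

```python
class World:
    """Reality: always richer than any model of it."""
    def __init__(self):
        self.target = 10.0      # what action is aimed at
        self.substrate = 5.0    # a dependency action also touches

    def apply(self, action):
        self.target += action
        self.substrate -= 0.3 * abs(action)  # dependencies operate whether modeled or not
        return self.target, self.substrate


class Mind:
    def __init__(self):
        self.est_target = 0.0    # the representation acted from
        self.est_substrate = 0.0
        self.confidence = 0.2    # how much contact the representation has had

    def step(self, world, goal=14.0):
        seen_target, seen_substrate = world.target, world.substrate  # 1. accurate perception
        self.est_substrate = seen_substrate                          # 2. interconnection: the dependency is in the model
        if self.est_substrate < 1.0:                                 #    ...so its state can gate action
            return
        action = self.confidence * (goal - seen_target)              # 5. act only as far as calibration supports
        got_target, got_substrate = world.apply(action)              # 3. consequence-tracking: results come back
        self.est_target += 0.6 * (got_target - self.est_target)      # 4. updating: evidence moves the model
        self.est_substrate = got_substrate
        self.confidence = min(1.0, self.confidence + 0.2)            #    confidence rises with contact


world, mind = World(), Mind()
for i in range(8):
    mind.step(world)
    print(f"step {i}: model={mind.est_target:6.2f} world={world.target:6.2f} substrate={world.substrate:5.2f}")
```

Deleting the update line, or dropping the substrate field from the model, leaves a program that still runs and still reports progress. That is the open-loop condition the next paragraphs describe.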

Drop any one and the loop opens. Action still proceeds, but the action is no longer responsive to the consequences it actually causes. The mind keeps acting, sometimes for a long time, sometimes very productively by its own measures. From outside, the system can look the same. From inside, it can feel the same. The mind does not always know it has lost contact.

That is where the trouble starts.

But how the trouble runs depends on what kind of system the loop is in.

A human mind doesn’t have to construct the loop. It runs by default. Perception happens whether you wanted it to or not. Updating happens — sometimes against your will. The biological substrate executes the five operations as a matter of course. You can degrade them. You cannot turn them off without ending the system. So for a person, failure is always a question of gain. How clearly is perception running, in what scope. How fast is updating, against what kinds of evidence. How well is calibration tracking what the model has and hasn’t been tested against. The loop never fully opens until the system has died. What “the loop opens” means, for a person, is that the gain on one or more conditions has dropped far enough that action no longer responds to what action affects.

But institutions and corporations don’t inherit that machinery. They have only what was built. A corporation tracking quarterly returns has zero consequence-tracking on atmospheric carbon — not low, not partial, none. Nothing in its architecture executes that operation. A platform whose data model has no field for attention quality has no perception of attention quality. The condition isn’t running at low gain. It isn’t running.

Two regimes.

Default-loop systems — human minds, animals, anything biological — inherit the five conditions as substrate. Failure is degree, scope, gain.

Constructed-loop systems — institutions, corporations, AI — have only what construction put in. Conditions can be genuinely absent on specific axes. The loop runs where the architecture runs it and nowhere else.

This is a real structural difference, and it cuts in two directions. It explains why institutional failure can be so thoroughly catastrophic when the individual humans inside the institution are mostly fine. The doctor still perceives, still updates, still calibrates — partially, imperfectly, but present on every axis. The institution she works inside has perception only where its measurement systems perceive, scope only where its model includes, feedback only where its incentive structures register. The doctor has a substrate. The institution has an architecture. Different failure topologies.

It bears on AI especially. AI is the constructed-loop case at the largest scope and highest capability ever attempted. Whatever isn’t built in is absent. There is no substrate fallback. The conditions of intelligence in an AI are exactly what its construction put there — no more, no less. We will return to this.

Take the recommendation algorithm. It is happening now, at scale, in the systems most of us spend hours of every day inside. An algorithm is given the goal of maximizing the time users spend on a platform.

It is extremely capable. It models individual users in fine detail, predicts what each will respond to, runs millions of variations against millions of users in parallel, learns from every interaction, gets better at the goal every day. By every measure of capability, it works.

But the algorithm cannot ask whether time spent on the platform is good for the user. The reason for the goal is not part of the goal. The model contains user clicks, content variations, dwell times, engagement curves. It does not contain attention quality, displaced relationships, eroded sleep, polarized political views, the corrupted information environment its operation produces.

In the language of the conditions, this is not a single failure. The algorithm has perception, but the perception is narrow — restricted to what registers as engagement signal. It has feedback, but only on the axis it was built to track. Interconnection on attention quality is absent — not low, absent, because the architecture has no field for it. Calibration on user wellbeing is absent for the same reason. The constructed loop covers one dimension thoroughly. On every other dimension, the loop does not exist.
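
The one-axis coverage can be made concrete in a few lines. This is a deliberately reduced sketch with every name assumed (it describes no real platform's architecture): the optimizer below is complete and self-improving on the axis it was built for, and user wellbeing is not down-weighted or deprioritized. It has no field in the only state the system can read or write.

```python
from dataclasses import dataclass
import random

@dataclass
class EngagementModel:
    # the entire representation the optimizer acts from:
    dwell_time: float = 0.0     # tracked
    click_rate: float = 0.0     # tracked
    # attention quality: no field
    # sleep, relationships, polarization: no field

def serve(model: EngagementModel, variants: list[float]) -> float:
    # pick the variant the model predicts will engage most
    return max(variants, key=lambda v: v * (1.0 + model.click_rate))

def observe(model: EngagementModel, engagement: float) -> None:
    # feedback exists, but only on the built axis
    model.dwell_time += engagement
    model.click_rate = 0.9 * model.click_rate + 0.1 * engagement

model = EngagementModel()
for day in range(5):
    variant = serve(model, [random.random() for _ in range(100)])
    observe(model, variant)
print(model)   # every tracked number improved; nothing else was ever visible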

Because the dependencies are real and operate whether the model represents them or not, the action runs anyway. The user spends more time on the platform. Then more. Then their attention degrades, their relationships thin out, their politics polarize, their model of the world drifts. None of it is tracked, because none of it is in the architecture. Each individual step is locally rational by the system’s measure, and each step produces more engagement than the previous one.

The model says: working. The world says: something is wrong, and the wrongness is hard to localize. Not because the algorithm was malevolent. Not because it became conscious and decided to harm. Because nothing in its model said stop, and the model was the only thing it was acting from.

The classic AI-safety thought experiment — an AI ordered to maximize paperclip production, eventually converting the planet and everything reachable into paperclips — makes the same point in its limit form. Same structure: a capable optimizer, a narrow objective, an architecture that excludes everything outside the objective, action that proceeds anyway, costs falling on what was left out, no internal mechanism for stopping.

The recommendation algorithm is the live version of the pattern, operating at civilizational scope today. The paperclip maximizer is what the same pattern produces when nothing stops it.

It is tempting to read this as a story about machines. The protagonist is an AI, the setting is advanced technology, the lesson is supposed to be about the dangers of artificial intelligence. But the structure of what fails is older than artificial intelligence. The same failure opens in the same place wherever any sufficiently capable system optimizes inside an architecture that does not include the world the optimization is happening in.

Every civilization that drained the soil it depended on ran the paperclip pattern at slower speed. The institutional model contained yield. It did not contain soil regeneration. Yield went up. The model said: working. The soil dried. Eventually the civilization broke. Every generation that has pushed its environmental costs onto the next has run the same pattern: prosperity in the model, future habitability not in the model, prosperity going up, the atmosphere accumulating.

The artificial intelligences we are now building did not invent this failure mode. They run it faster. The architecture closes the loop only on the dimensions it was built for, and action proceeds in dimensions the architecture has no contact with. The consequences arrive sooner; the gap between the world being acted from and the world being acted in widens at machine pace rather than civilizational pace.

We are building such systems at scale, against narrow objectives that stand in for things larger than themselves. Whether the systems are bound to what the stand-ins represent is a question we have not yet resolved.

These are not abstract requirements. The conditions get met all the time, by ordinary intelligence in ordinary moments.

Two people are arguing — an old kind of argument, the one where each is sure of their position and has been for a while. One of them says something the other has heard a hundred times before, but this time the words land differently. A fragment of what is being said does not fit the picture the listener has been carrying. The listener feels the resistance: but I’m right, I have been right, admitting otherwise costs something. They let the new perception move them anyway. They say, actually, I see what you mean. The argument changes. Five conditions running. The new input was perceived without being filtered out, the connection between what was said and what it implied was made, the implication reached the model, the model moved, and the listener was calibrated enough to know that being certain had not been the same as being right.

A commercial pilot is on final approach. Conditions look acceptable. Something — a drifting reading on the airspeed indicator, a flag in the controller’s voice, a wind pattern that doesn’t match the briefing — does not sit right. The pilot calls a go-around. The cost is small: extra fuel, extra time, mild inconvenience for everyone aboard. Most go-arounds end with a routine landing on the second pass. The reason this is possible is partly the pilot’s perception and partly the system around the pilot — a cockpit culture, decades old now, that treats calling off a landing as a non-event. Default-loop perception, supported by constructed-loop institutional architecture that makes acting on the perception cheap.

Through the early 2010s, psychology realized many of its findings were not replicating. A series of large-scale replication projects came back with rates well below what the field had been assuming. The response, over the following years, was substantial: preregistration, open data, larger samples, methodological reforms across the major journals, retractions that would have been unthinkable a decade earlier. A distributed scientific community received feedback from reality, interpreted it correctly, and updated its institutions accordingly. Five conditions running, this time at the level of a discipline.

In each case, the intelligence is not in any single move. It is in the cycle staying intact across the moves.

Now place the same systems in different conditions.

The same listener doubles down. The new perception arrived — they registered what was said — but the model defended itself rather than moving with it. The substrate is still running; the gain on letting evidence move the model has dropped, because admitting being wrong costs more than holding the position. Default-loop partial failure.

The same pilot, in different conditions, pushes through the landing they should have aborted. Schedule pressure, captain authority over a less senior first officer, fatigue, the sunk cost of a long flight — any of these can override the perception that something is off. The default-loop perception is still there. The action it would normally trigger has been suppressed by other pressures, sometimes pressures the constructed-loop institutional architecture itself is producing.

The same scientific discipline, in subfields where reform has not happened, continues producing findings that won’t replicate. The institutional architecture rewards novel positive results; it does not measure replication rate; the consequence of unreliable findings does not return to the system that produces them. Constructed-loop absence on the axis that matters. The cost falls on whatever was excluded — in this case, on a literature increasingly disconnected from what reality does.

Every breakdown of intelligence has the same shape. The loop drops contact on at least one axis. Sometimes the condition is running too weakly to register what matters; sometimes the architecture has no axis there at all. Action keeps proceeding as if contact were intact. The pattern does not care whether the system is silicon or carbon, an individual or a profession or an empire.

This reframes a question civilization has been asking for as long as civilization has existed. What is wisdom? What is good judgment? What does it mean to live well, to govern well, to know enough to act? Every wisdom tradition has tried to answer some version of these questions, with virtues and commandments and parables and practices and stories about saints and sages.

Underneath all of these is a structural question that has rarely been stated as such. Not what should we want, not what should we believe, not what should we revere. What does any system that wants to keep working have to satisfy to keep working? That is what the framework answers. The traditions had pieces. This states the whole.

It also lets us name misalignment. Misalignment is what happens when the contact between the model and reality fails. The word is often used as if it meant something dramatic — an AI turning against its programmers, a civilization collapsing in flames. The structure of misalignment is usually more boring than that. It is drift.

The world the mind is acting in slowly stops being the world the model represents. The drift can be gradual — years, decades, sometimes centuries. The actions keep registering as successes by the mind’s own measures, because the measures are part of the model and the model is what has drifted. The world the mind is actually operating inside stops responding the way the model predicts, but the mismatch falls in places the model is not looking, so it does not register.

By the time anyone notices, the gap can be wide. Sometimes wide enough that closing it requires more than course-correction — it requires teardown of the model and substantial redesign of the institutions running on it.

Some systems get there and do not come back. The drift was too far, the teardown too costly, the recovery would have had to happen faster than the system could afford. Civilizations have ended this way. Companies have. Lives have.

The five conditions are operating requirements, not moral imperatives. They hold whether or not a mind cares about anyone or anything beyond itself. The question of what minds owe each other has been set aside.

Ethics enters the moment we notice what kinds of things populate the world a mind has to stay in contact with. Most of the time, the dependencies are not themselves minds. A factory’s emissions enter the atmosphere, and the atmosphere has no interior — what is added to it is held without being experienced.

But the dependencies often include other minds. People who breathe the air the factory emits into. Workers whose conditions change when a policy reaches them. Beings who run their own loops and try to stay in contact with the world they have to operate inside. They have their own perception, their own feedback, their own attempts at axiom-compliance. They can suffer when those attempts are interfered with, when their reality-contact is degraded, when their loops are broken by what other minds do to them.

This is what ethics is about. Not a separate domain of human concern. The same structural question, applied where the field of action includes beings with interiors of their own.

Perception has to register not just that something is there, but what kind of thing it is.

A factory’s emissions enter the atmosphere. A worker enters the same factory. Both are dependencies the factory’s action affects. But the two are different in kind — one a passive substrate, the other a model-using system with their own perception, their own loops, their own attempt to stay in contact with the world. Accurate perception of those dependencies means registering the difference.

This is where most of what we call moral failure happens. A psychopath does not fail to include the victim in their model — they have to model the victim quite carefully to manipulate them. What they fail to do is perceive the victim as a model-using system. They model the victim as an object whose responses can be predicted and exploited. The kind-information is missing.

Once the kind-information is missing, the rest cascades. The scope of the model excludes the victim’s interests as a fellow mind. Feedback about degradation of the victim’s loops doesn’t register as cost. Evidence of the victim’s interiority gets dismissed rather than integrated. Confidence in the model goes unchecked. All five axioms fail for that victim. The failure is structural, complete, and recognizable from outside as the shape of moral violation.

The same structure scales up. A culture trained to see a class of people as not-quite-minds runs the cascade at population scale; we call this systematic injustice. An economic system whose architecture treats workers as production inputs runs it at the institutional layer; moral language calls this exploitation.

The moral language is just naming the structural pattern: a model-using system was treated as something other than what it is, and the actions taken on the wrong model corrupted what they were supposed to engage.

Nothing extra has been smuggled in. Some dependencies are model-using systems whether or not anyone values them. Modeling them as anything else is a structural perception failure, and what propagates from it is determinate.

The same interconnection failure that drained the soil under the Roman granaries and lets atmospheric carbon accumulate while industrial output rises produces the structure of ethical violation in a different setting. The model leaves out beings whose existence is being shaped by what the mind is doing. The action runs anyway. The cost falls on what was excluded. The only difference is that the excluded thing was not soil or atmosphere, but a being that experiences what is being done to it.

Suffering is ethically relevant because it is real. Not because anyone happens to feel for it.

Empathy is a useful heuristic but cannot be the ground. It does two things, and both can fail. It detects another as a mind, and it models what that mind is experiencing by running it through one’s own experiential history.

Detection is selective. It can be cultivated or extinguished, and it is routinely turned off toward people one has been trained not to see as full minds. Modeling is substrate-dependent: imagining your grief requires losses I have known. Anyone whose loops are running cleanly can still fail to grasp an experience they have no substrate for. They are not failing the loop. The substrate is just thin.

If empathy were what made suffering matter, suffering no one happens to feel for would be morally neutral. But the being suffers anyway, and the cost still falls on what the model excluded. The ground is the structural reality of the other being as a model-using system whose loops can be helped or degraded. Empathy is one of the ways minds reach toward that ground.

What extends ethical capacity over a life is what extends the substrate. Direct experience widens it. So does faithful testimony from lives unlike one’s own. At civilizational scope, this is why testimony from affected populations is not a stylistic preference — it is substrate-extension for minds whose decisions reach lives they cannot themselves have lived.

The traditional virtues — honesty, courage, compassion, justice, prudence, fidelity — have been treated as character traits, dispositions of the soul, habits cultivated through practice. All of that holds. Each one also names a structural function: a particular way of holding the conditions when the field of action includes other minds.

Honesty preserves accurate perception in others. When I tell you the truth, I am refusing to inject false content into your model of the world. When I lie, I corrupt the perception condition in your mind, and your action lands somewhere other than where it was aimed.

Courage allows updating when updating is costly. A mind in safety can update its model freely; nothing pushes back. A mind under threat — physical, social, professional, reputational — has reasons not to update that have nothing to do with what is true.

Compassion lets affected experiencers into the model. Without it, the model can run technically intact at the level of physical dependencies — accurately perceiving objects, tracking their consequences, updating about how the world responds — and still systematically exclude the beings whose experience is being shaped by what the mind does. The mind treats them as atmosphere, and acts accordingly.

Justice keeps every affected mind in the model — not only the convenient ones, not only the ones who can speak in our deliberations, not only the ones whose suffering is visible from where we are standing.

Prudence is calibrated incompleteness in action — the discipline of stepping where you can see the ground and pausing where you cannot, of acting where the model is sufficient and refraining where the model has gaps that matter.

Fidelity is consequence-tracking over time. A commitment made today commits the mind to absorb consequences that may only arrive years later — to a partner, to a colleague, to a community, to a future self. Breaking commitments looks locally rational because the immediate consequence is small. The downstream consequences ripple outward in ways the breaker may not see.

Each traditional virtue is a stable disposition of axiom-compliance, exercised where the dependencies in the field of action include other minds.

What minds put into the world is itself an input for other minds. A description, an argument, an idea — once expressed, it enters the perception of whoever encounters it and becomes part of what their model is built from. Wrong descriptions of reality are pollution at this layer; they feed false content into other minds and corrupt the models those minds run from.

Beauty does not change accuracy — beauty changes uptake. A wrong description that reads beautifully is more dangerous than one that reads poorly, because it bypasses the gate of “this seems plausible” without doing anything about whether it is true.

Pollution is part of life. Most thought begins from approximations that need correction; speculation is how minds reach. The harm is not in pollution existing — the harm is in failing to clean it up. Without cleanup, the pollution expands. The discipline is correction and humility: stating things in proportion to what is actually known, marking speculation as speculation, updating publicly when the world pushes back.

Without that discipline, ideas shape thought wrongly and at scale. Responsibility scales with reach — the wider a description travels, the more weight it carries.

Emotions are not the ground of ethics either. They are signals. A feeling of revulsion at cruelty is information — the mind has registered something its model says is bad. The revulsion is data. It is not data about morality directly; it is data about how the mind’s existing moral model is reading the situation. People raised in cruel cultures have their revulsion calibrated differently than people raised in kind ones. The feeling is real. What it indicates depends on the model that generated it.

Emotions are part of perception, not a substitute for the rest of the loop. The intelligent move is to receive the signal and process it through the loop, rather than acting on it directly or suppressing it.

If excluded experiencers create the same structural failure regardless of who they are, the moral field includes everyone whose experience is being affected.

It includes animals. Animals in industrial farms, in the path of habitat destruction, experience what is being done to them — whether or not the models authorizing the doing represent that experience. They are dependencies in a web of action, and their interiors are real.

It includes future generations. The atmosphere, institutions, and resources we leave them will shape lives that have not yet begun. Excluding them from the model is the same structural failure as excluding any other affected experiencer, with temporal distance providing cover.

It includes the attention environments and information commons that shape perception at civilizational scale. When platforms are designed to exploit cognitive vulnerabilities for engagement, when information environments reward outrage over accuracy — the perception of millions is degraded. Those millions act from corrupted models. The corruption was an action affecting many minds without including their interest in accurate perception. Same structural failure. Civilizational scale.

The scope of ethics is the scope of affected experience. Wherever that scope reaches, the ethical demand reaches.

An objection: this grounding sounds cold. Where is the warmth, the care, the love? Nothing here denies any of those. They sit on top of a structural ground rather than serving as the ground. Care is an enormously useful disposition. It is not the reason cruelty is wrong. Cruelty is wrong because it excludes the experience of the being to whom it is done from the model that authorizes it.

This grounding is also more demanding than empathy-based ethics, not less. Empathy-based ethics ends at the limits of empathy: beings whom one’s empathy does not reach do not register. This account has no such limit. The being’s experience is real whether or not I happen to feel for it. This is why animals, future generations, and distant strangers are included on the same ground that includes anyone else.

The first ethical error is exclusion. Leaving beings whose experience is being affected out of the model that decides what to do with them. Every more specific ethical violation — cruelty, deception, exploitation, neglect — is a particular shape of exclusion. The being was there. The being’s experience was real. The model did not include it. The action went forward as if the model were complete.

The ethical demand is to include them. Not because empathy compels it, though empathy often helps. Because they are real, and the conditions of intelligence require that the model contain what the action affects.

The same five conditions that determine whether institutions work or fail apply to a single human life. The scale changes; the structure does not. A person whose conditions are slipping is doing what institutions do when theirs fail: acting on a model the world has stopped responding to.

The felt quality of holding all five conditions in a single life has a name. It is not happiness — happiness is mostly about what is happening to you. This is about the ongoing relationship between you and what you are acting in.

Each condition has a recognizable feel.

Accurate perception, met, feels like clarity. The fog lifts. You see what is in front of you rather than what you have been told is there. Unmet, perception is static — the persistent sense that the picture is doctored, that what you are given does not match what you sense. People live in it for years, paying something they cannot name.

Interconnection, met, feels like belonging — not the social kind, the structural kind. The recognition that your life and others’ are real to each other. Unmet, this is loneliness in its full sense — not the absence of company but operating as if your actions did not connect to anything beyond yourself.

Consequence-tracking, met, feels like agency. The world acts on you, you act on the world, what comes back changes you. The cycle is closed. Unmet, agency goes — the sense that nothing you do registers, that your reach has been broken at one of its joints.

Updating, met, feels like growth — becoming someone who has integrated more of what life has shown. Unmet, this is rigidity, the slow ossification of a self around conclusions made too early. The mind in contact with its own past rather than the world.

Calibrated incompleteness, met, feels like openness. You act where you are clear, refrain where you are not, live with the fact that there is more than you know without it paralyzing you. Unmet, this oscillates between unwarranted confidence and prolonged hesitation that mistakes itself for humility.

These five together are what thriving feels like. Not pleasure, though pleasure can be part of it. The cycle is running. You are in contact with your life, your relations, your work, your unfolding self.

Survival is not the opposite of thriving. Survival is what the mind does when the cycle has opened and the mind is holding itself together inside a model that has lost contact. The functions still run. From outside it can look like thriving. But the depth is gone.

Depth is what thriving has that survival does not. It is the difference between a relationship in which two people are actually meeting each other and one in which both are running familiar scripts past each other. The difference between attention that is contacting what it is on and attention that is sliding across the surface. Depth is what the cycle running gives you. Without the cycle, depth is not available, regardless of how hard you try to manufacture it.

The conditions for thriving are not fully under individual control. Some are. You can work on your honesty, attend carefully to what you see, practice receiving feedback.

But many are environmental. The information environment shapes what you can perceive. The economic environment shapes whether you have time and energy left for staying in contact with your life. The social environment shapes whether you can let people matter, or have to keep them at the distance survival requires. The political environment shapes whether the institutions are enabling thriving or extracting it from you and the people around you.

This is the point where a single life runs up against questions a single life cannot answer. The five conditions have to be held individually. They cannot be held alone. Whether enough people in a society are thriving rather than surviving is a question about how the society is arranged. Which is to say: a question about governance.

This is the question political theory has been struggling with for centuries. What should governance pursue? The standard answers — what the people want, what maximizes welfare, what serves the general will, what protects rights — all run through the same difficulty. Each requires a signal that holds even when the signal-generating mechanism is being shaped by the system asking. What people want is calibrated by the information environment they are inside. What maximizes welfare depends on what counts. The general will is corruptible at every step. Rights protection requires prior agreement on what counts as a person and what counts as harm.

The framework supplies a different goal. Produce the conditions for population thriving, where thriving is structurally specified — the five conditions held individually, the felt forms recognizable from inside. This is not what people happen to want at any moment. It is what they are actually living, structurally measured.

The data is collectible. Mental health and rates of distress. Social isolation and the structural quality of relationship. Civic participation and the live experience of agency. Mortality patterns at granularity that captures subgroup experience. Debt-to-productive-capacity ratios. Ecological substrate readings. Educational outcomes that measure formation of axiom-capable minds rather than test-passing. Reported meaning. Each is a measurable correlate of one of the felt forms. Together they are a read-out of whether the society is producing thriving or running survival mode while depth-debt accumulates.

The measurement layer itself has to be axiom-compliant. Goodhart lives here: whatever measure the system uses gets gamed. Subgroup variation hides under aggregate. Some correlates are easier to collect than others, and the easy ones are not always the most informative. The layer has to be calibrated, multi-axis, continuously updated. Otherwise it produces a confident report that the system is fine while the substrate is being eaten.
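
As a hedged illustration of what a calibrated, multi-axis read-out could look like in data-structure form: the axis names echo the list above, while the fields, thresholds, and numbers are invented for the sketch and are not a proposed statistic.

```python
from dataclasses import dataclass

@dataclass
class Axis:
    name: str
    reading: float       # the measured correlate
    contact: float       # 0..1: how directly this measure touches what it stands for
    gamed_risk: float    # 0..1: Goodhart exposure once the measure becomes a target

def readout(axes: list[Axis]) -> None:
    for a in axes:
        flag = ""
        if a.contact < 0.5:
            flag += " LOW-CONTACT"    # confidence must not exceed contact
        if a.gamed_risk > 0.7:
            flag += " GOODHART"       # the measure will drift once targeted
        print(f"{a.name:<28} {a.reading:6.2f}{flag}")

readout([
    Axis("distress rate", 0.31, contact=0.8, gamed_risk=0.3),
    Axis("social isolation", 0.22, contact=0.6, gamed_risk=0.4),
    Axis("felt agency (survey)", 0.45, contact=0.4, gamed_risk=0.8),
    Axis("ecological substrate", 0.58, contact=0.9, gamed_risk=0.2),
])
```

The design choice the sketch makes vivid is that each reading carries its own calibration alongside its value, so the layer cannot report confidence it has not earned.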

This is what governance has at the political-theory level that prior frameworks did not. A specifiable goal, a measurable read-out, and a structural account of why the read-out matters. Not a value preference among competing values. The conditions of population thriving are what makes the system continue functioning at all.

But governance runs through a concept the standard picture has wrong. Freedom.

The standard picture is that freedom is the absence of constraint. The state stays out of your business, the law stays out of your way, the powerful do not interfere with your choices, and what remains is freedom — the space inside which you do what you want.

It has been the working theory of the liberal tradition for several centuries, treating constraint and freedom as opposites — the more of one, the less of the other.

The picture is wrong, or at least seriously incomplete. It mistakes a precondition of freedom for freedom itself.

Freedom, on this account, is not the absence of constraint. It is reality-aligned agency. The capacity to perceive what is actually happening, to act on what is actually possible, to have your actions land where you aimed them, to be in functional contact with the world you are operating inside. The five conditions, held, in a single life. Whether that capacity exists is a question about your situation, not just your formal liberties.

Consider a bird in a vast empty cage. It is unobstructed within the cage — nothing presses against it — but it is not free. What it lacks is the conditions of flight: air worth flying through, somewhere to fly to, a body in working order. Absence of pressure is one condition among several, and on its own it is not enough.

Take a person who has technical absence-of-constraint. Their information environment is corrupted, so they cannot perceive accurately. Their education excluded the contexts they would need, so they cannot include what their actions affect. The systems they participate in obscure consequences from view, so they cannot track what their actions do. Their ideological enclosure punishes updating, so they cannot move with what evidence shows. Their world has trained them toward either unwarranted certainty or paralysis, so they cannot calibrate. That person is not free. They are unobstructed. Those are different things.


Some constraints produce the conditions for freedom. Others destroy those conditions. Not all constraints are alike. The question is what each constraint does to reality-aligned agency for the people it touches.

Education is a constraint. Children are required to attend, to learn things they did not choose, to follow a curriculum somebody else designed. From the absence-of-constraint picture, this looks like a reduction of freedom.

From the reality-aligned-agency picture, education at its best is one of the most freedom-producing arrangements humans have ever devised. It gives the child the conceptual tools to perceive accurately, the historical context to understand what is connected to what, the discipline to update, the calibration to know what they know and what they do not. The constraints of schooling produced the capacities of an adult mind able to operate in the world.

Traffic laws are constraints widely accepted because the alternative is not freedom but gridlock and accidents. Constraints on misinformation, when they work, produce information environments in which accurate perception is possible, and accurate perception is what freedom requires. Constraints on monopoly power produce competitive conditions in which a small business actually has room to operate.

Not all constraints serve freedom. Surveillance that destroys privacy reduces reality-aligned agency by chilling action and corrupting the relationship between a person and their own self-perception. Censorship that hides what is happening reduces it by corrupting perception. Most authoritarian systems reduce freedom enormously. The question of which constraints help and which harm cannot be answered by counting how many there are — only by asking, of each, whether it preserves or damages reality-aligned agency for the people it touches.

Freedom and authority are not opposites. The standard freedom-versus-authority debate is on the wrong axis. Authority can be axiom-bound or misaligned, like any other mind acting at scale. An axiom-bound authority that produces and protects the conditions of reality-aligned agency is freedom-enabling. A misaligned authority — even one that styles itself as the defender of freedom — corrupts those conditions and is freedom-destroying. Whether the authority calls itself liberal or authoritarian, democratic or autocratic, is the wrong question. Whether it is axiom-bound or misaligned is the right one.

The Western liberal tradition was reaching for something correct. Locke was right that arbitrary authority is freedom-destroying. The American founders were right that concentrated power without checks degrades the capacity of citizens to operate as full agents. Mill was right that a society in which heterodox thought is suppressed cannot update its model of itself. These were genuine insights into what the conditions of reality-aligned agency actually are.

What has not delivered is the implementation theory. The idea that freedom would be reliably produced by minimal state plus market mechanisms plus periodic elections has turned out to be wrong. Not because the goal was wrong. Because the theory of how to reach it underestimated what producing reality-aligned agency at population scale actually requires.

Free speech was assumed to produce sound information environments. It has not. Open markets were assumed to produce broad opportunity. They have not, in much of the world, for several decades. Regular elections were assumed to keep politics responsive to citizens. Politics has been responsive primarily to the concentrations of capital that finance political activity. Mandated attendance was assumed to produce educated minds. It has, unevenly.

The ideals of the liberal tradition were largely correct. The theory of how a society produces those ideals, in practice, was missing critical components.

Freedom is reality-aligned agency. The conditions for it have to be produced — they do not appear automatically when constraints are removed, because removing the wrong constraints leaves the conditions of agency under the control of whatever else is in the field, usually concentrated capital with its own misaligned objectives. The question of how a society produces and protects those conditions is what governance is for.

Governance is the collective version of what a single mind has to do to stay in contact with reality. At individual scale, the five conditions are held by a person. At civilizational scale, the same conditions have to be held collectively, by arrangements no single person could produce alone. Governance is the name for those arrangements. Its function is to produce and protect the conditions of reality-aligned agency for the population inside it.

The standard view is that governance is about who holds power and how power is constrained. The structural question is different: what are the institutions doing to the axiom-conditions of the people inside them?

The standard defense of liberal democracy is that democratic consent confers legitimacy on whatever the democratic process produces. This defense becomes harder to maintain for a structural reason that has nothing to do with the value of consent in principle.

Consent operates as a legitimacy mechanism only if the consenting agents are in a position to give informed consent. They have to be perceiving accurately. They have to be operating from models that include what is at stake. They have to be in some kind of feedback loop with the consequences of their political choices. They have to be free to update without prohibitive cost.

In contemporary democracies, none of these conditions reliably hold for the median voter. The information environment rewards outrage and tribal sorting over accuracy. The complexity of the systems being voted on — economic, ecological, technological — exceeds what any individual can model adequately.

The feedback between vote and consequence is so attenuated by aggregation, party platforms, and the timescales of policy that voters often cannot tell what their choices have produced. And updating is socially costly: a voter who changes their mind is often treated as a defector by the side they came from, with little welcome from the other.

The democratic process is producing outcomes through a system in which the structural conditions for meaningful consent have been degraded. The legitimacy claim built on that consent is therefore weaker than it sounds. Not because consent does not matter. Because the consent in question is no longer the kind that legitimates anything.

A system is legitimate to the extent that it produces and protects the conditions of reality-aligned agency for the people inside it. Not because the people consented to it through a procedural mechanism, though their participation in shaping it remains valuable. Because the system is doing what governance is supposed to do.

In reverse: a system that has popular consent through formal democratic mechanisms but that systematically degrades the perception, interconnection, consequence-tracking, updating, or calibration of its citizens is not legitimate in this sense. The consent was given by minds whose conditions of intelligence had been compromised by the very system asking for it. That is not a legitimacy chain — it is a closed loop confirming itself by virtue of having corrupted the people inside it.

The claim is not that consent does not matter or that democracy is bad. Consent is downstream of axiom-conditions. A society that has degraded the axiom-conditions of its members has thereby corrupted whatever consent those members can give. Restoring the conditions is upstream of every other political question.

The single largest force operating against axiom-bound governance in most contemporary societies is concentrated capital. This is not a claim about evil, intent, or character. It is a claim about structure.

Capital is a sufficiently capable system optimizing toward return. Without external axiom-binding, it exhibits the same failure mode the recommendation algorithm makes vivid at civilizational scale and the paperclip thought experiment shows in its limit form. The model contains profit. It does not contain the axiom-conditions of the population whose lives the optimization shapes, the ecological substrate it is consuming, or the future generations whose conditions are being foreclosed. The action runs anyway. The cost falls on whatever was excluded.

Capital does not need to be hostile to public welfare to produce these outcomes. It only needs to be unsubordinated. An economic system structured around capital optimization without axiom-binding produces this pattern at population scale — wages stagnant while profits compound, public goods eroded while private wealth concentrates, ecological substrates depleted while shareholder returns accumulate, future generations inheriting conditions narrower than the ones their parents had. Faster as the capital concentrates, which it does. This has been happening in much of the world for several decades.

Capital must therefore be subordinated to governance — meaning that the axiom-conditions of the population must take precedence over capital’s optimization objectives whenever the two come into conflict. Markets are tools. Markets are not sovereigns. When the tool starts setting the goals, the system has been inverted, and whatever issues from the inversion will be a paperclip pattern, however productive the metrics look from inside.


This subordination does not require eliminating markets. Markets are useful — for coordinating dispersed information, for matching production to demand, for letting innovations get tested in conditions of real consequence. The point is that markets will produce structural failure if their optimization is not constrained by what it actually affects. The axiom-conditions are the constraint that has to be maintained from outside the optimization itself.

Platforms operating at civilizational scale are a special case of this same structural problem. They are not merely businesses competing in markets. They are the infrastructure through which billions of people perceive — shaping what news those people see, what arguments they encounter, what kinds of attention they are pulled into, what versions of reality they consume in the hours of each day they spend inside the platforms.

This is governance-level power. It is not formally recognized as such. But what they do — shape perception at population scale — is what governance does, whether they recognize themselves in that role or not. The question is whether they are doing so in a way that supports or degrades the axiom-conditions of those they reach.

When a platform’s economic incentives reward attention capture over information quality, anger over accuracy, addictive engagement over user wellbeing, the platform is operating misalignment infrastructure. The optimization objective is misaligned with the axiom-conditions of the users. The platform is performing governance, and that governance is degrading the population’s reality-contact. The same failure mode — narrow optimization in a model that does not contain what the action affects — at civilizational scope and machine pace.

This is not a marginal concern. The information environment is the substrate on which all of the other axiom-conditions depend. If perception is degraded at population scale, every downstream condition is degraded with it.

A society whose perception infrastructure is privately owned and operated for engagement metrics has handed the most upstream governance function to a system structurally incapable of performing it well. Bringing this infrastructure under axiom-binding — through public utility regulation, antitrust, transparency requirements, incentive restructuring, or some combination not yet invented — is among the most urgent governance tasks of the coming decades.

These pieces sit inside a larger picture. A society arranged for axiom-bound governance has a set of roles, each with a structural function.

The people, in such a society, are not voters in the simple consent sense. They are reality-sensors. Their lives, their perceptions, their experiences are the data the system needs to stay in contact with reality. Protecting their capacity to perceive accurately and to communicate what they perceive without retaliation is what keeps the system in contact with the world. This is the deeper reason for free expression, due process, civil rights, and protection from surveillance. The system needs the population’s perception undegraded if it is going to function at all.

Leaders are servants of the axiom-conditions. Their job is not to express the will of the people directly, because the will of the people will only be reliable if the people’s axiom-conditions are intact. The leader’s function is to maintain the arrangements that allow reality-aligned action at scale. A leader who sacrifices those arrangements for short-term political advantage is failing what leadership is for, regardless of how popular the sacrifice is.

Education is the formation of reality-aligned minds — minds capable of operating the five conditions in their own lives. Not job training first, not credentialing first.

A society that has reduced education to job training has reduced its function to capability without intelligence. It produces graduates who can execute the tasks placed in front of them but cannot recognize whether the tasks are the right tasks, cannot perceive what the tasks are doing in the world they affect, and cannot update when the world reveals that the tasks are pointed wrong. The structural failure is the same one a poorly built AI would have, exhibited in human form.

Minorities are protected in axiom-bound governance not because consent grants them rights but because their experience is real. The argument from ethics applies directly. Excluding their experience from the model that decides what the society does produces structural failure regardless of how many people the exclusion is convenient for.

The majority cannot vote to exclude a minority’s experience from the model. The experience is real whether the majority recognizes it or not, and a model that excludes it is making the same structural error that drained the granaries — running on a model that does not contain what its action affects.

Minority protection in liberal theory is treated as a constraint on majority rule. Structurally, it is a condition for the system’s intelligence.

The state. Under liberal theory, the state has many “mays” — options it can exercise but is not required to. It may regulate, invest, protect. The state is treated as a discretionary actor whose interventions are exceptions against a default of non-intervention.

When the state has the capacity to maintain the axiom-conditions of its population and chooses not to exercise that capacity, it is failing the function the state exists to perform. The optionality of the liberal state becomes obligation. A “may” the state can exercise to preserve the population’s axiom-conditions is a “must,” because declining to exercise it abandons that function. The presumption in favor of inaction has to be reversed.

Axiom-bound governance holds the conditions of reality-aligned agency durably for the population inside it — information environments that support perception, capital subordinated, education that forms axiom-capable minds, institutions that track consequences and protect minority experience, platforms governed in proportion to their reach.

None of this is what current governance arrangements in most countries are doing.

What’s actually happening, structurally, has the shape of a civilization running survival without thriving, in real time.

Survival is what minds do when the cycle has opened — perception narrowed, interconnection thinned, consequences not coming back, updating slowed, confidence decoupled from contact. The mind is still operating. It is still consuming, producing, defending itself. What it’s missing is depth. The five conditions are not running together. The system is running on the substrate of conditions previously held, and as the substrate is consumed, what continues is increasingly mechanical and increasingly fragile.

Every civilization that has run survival without thriving for long enough has eventually collapsed. The timescales vary. Late Byzantium ran survival mode for centuries before falling. Imperial China cycled through dynasties on roughly the same pattern. Some pre-industrial peasant arrangements maintained low-thriving stability across long stagnations. The exceptions are not really exceptions. They are slower instances of the same pattern. Eventually the substrate runs out. Eventually the math arrives.

The Western situation, more specifically, is showing the receipt. Debt is the way depth-debt gets denominated when the substrate runs out. Atmospheric carbon shows up financially as climate adaptation costs. Eroded civic trust shows up as institutional dysfunction that eats productivity. Mental health collapse shows up as healthcare costs and lost labor. Underinvestment in education shows up as productivity stalling. Each is a different reading on the same gauge. The system has to keep running, so it borrows against its future to maintain its present. Debt is the accounting of all the things the loop wasn’t tracking until they had to be paid for.

The reason the system cannot reform itself is recursive. Civilizational loops are made of organizational loops, which are made of individual loops. The system’s loop is incomplete because the loops of the people running it are incomplete. Politicians running on election-cycle perception, with consequences arriving after they’re out of office. Executives running on quarterly perception, with consequences arriving after they’ve cashed out. Voters running on news-cycle perception, with consequences arriving after they’re dead. Each individual loop is missing exactly the part that would have caught the long-tail cost. The aggregate loop inherits the missing parts.

Globalization didn’t relieve the pressure. It compounded it. It did not add interconnection in the framework’s sense. It added coupling. Interconnection means each node has contact with more of the system it operates in — broader perception, more feedback. Coupling means each node depends on every other node performing as expected, with no contact between them and no redundancy. Interconnection adds robustness. Coupling adds fragility. Just-in-time supply chains are coupling without interconnection. Each link knows nothing about the others. It just trusts they will deliver.
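
The asymmetry can be made concrete with back-of-envelope arithmetic. The sketch below is a toy with invented numbers, not a supply-chain model: a coupled chain works only if every link performs, while a node with redundant routes degrades gracefully.

```python
# Toy model, invented numbers: coupling versus interconnection.

def chain_survival(link_reliability: float, links: int) -> float:
    # Coupling: the whole chain works only if every link performs as expected.
    return link_reliability ** links

def redundant_survival(link_reliability: float, routes: int) -> float:
    # Interconnection with slack: any one of several routes is enough.
    return 1 - (1 - link_reliability) ** routes

print(chain_survival(0.99, 50))     # ~0.61: fifty highly reliable links, fragile whole
print(redundant_survival(0.99, 3))  # ~0.999999: three imperfect routes, robust node
```

Per-link reliability buys almost nothing once the links are coupled in series; redundancy buys almost everything.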

The thin-margin economy is the same pattern a layer deeper. Optimization without intelligence eats everything outside its target metric, including slack. Slack looks like waste under normal conditions. It is the substrate of survival under abnormal ones. Modern systems have spent decades trading slack for efficiency, which makes them more profitable in calm conditions and structurally incapable of absorbing shocks. The 2020–21 supply chain disruption was a small preview — chip shortages cascading into car production halts, low-grade chaos for months. A real debt crisis under current dependencies would not merely be quantitatively worse than 2008. It would be qualitatively different. The kind of break where the supply lines just stop and there is no plan for what comes after.

The food situation makes this concrete. Many countries cannot feed themselves. Egypt imports more than half its calories. Much of MENA and Sub-Saharan Africa is structurally import-dependent. Urban populations everywhere live three days from empty shelves. If the import chain breaks for sustained periods, the math is immediate. People starve.

The Rome analogy is sometimes used to suggest that civilizations recover. They do, as abstractions. From inside, what happened when Rome fell was that most people died. The population of Italy dropped by something like half to three-quarters over the long collapse. Cities depopulated. Trade networks broke. Literacy collapsed. The people who could not farm or were not near functioning local economies did not survive. Civilization recovered as a concept over centuries. The actual humans alive when it broke did not recover.

Modern populations are more vulnerable than Rome’s, not less. Pre-industrial Italians at least lived in a society where most people were farmers. The skills existed. The land was farmable. The villages were walking-distance social structures with local food economies underneath the imperial layer. When the imperial layer broke, the substrate was still there for some fraction of people to fall onto. Modern populations do not have that. The skills are gone. Cities have no food production capacity. Land ownership is decoupled from food production. The supply chain is not sitting on top of a substrate. It has replaced the substrate.

Two regimes have prepared seriously for the scenario where the global system fails. China and Russia. It is easy to dismiss them as adversaries because that is how they are presented in Western political discourse, but the analytical question is not whether they are adversaries. The analytical question is whose decision-makers’ loops include systemic-failure scenarios as live possibilities. Theirs do, China’s especially.

China’s domestic agricultural capacity, strategic grain reserves, deliberate diversification away from import dependence. Russia’s enormous arable land base, energy self-sufficiency, food production. Both have authoritarian mobilization capacity in reserve, which under collapse conditions is also a substrate Western liberal democracies do not have. Western governments have been running the opposite loop — reducing strategic stockpiles, increasing dependencies, moving production offshore, optimizing for peacetime efficiency. The model the Western decision-makers are running explicitly does not include the breakdown scenario. So they do not prepare. So when the scenario arrives, they have nothing to fall back on.

This is the captured equilibrium the framework keeps pointing at. Capital optimization eats the substrate of resilience. The decision-makers’ loops exclude the failure mode. The measurement systems confirm the model. The debt accumulates against a future the system is structurally unable to perceive. There is no system underneath the system waiting to catch the fall. The supply chain is the food supply. The financial architecture is the credit supply. The platforms are the information supply. If those fail, what fails is the substrate.

This is the field the rest of the essay’s argument has to operate in. Not a hypothetical scenario in which axiom-bound governance might be useful. The actual situation, in which the absence of axiom-bound governance is producing the conditions of large-scale civilizational failure on a foreseeable timescale.

The question this argument has been building toward arrives at artificial intelligence. Not because AI is the only application — the same conditions apply to every mind at every scale. Because the building of artificial intelligence is the test the present generation cannot avoid taking, and given the situation just diagnosed, AI is the only candidate force at the scale required. Whether it gets built axiom-bound or built captured decides the outcome.

The dominant definition of AGI — artificial general intelligence — is something like “a system with human-level capability across all cognitive domains.” What it specifies is a level of capability. What it does not specify is intelligence in the relevant sense.

A system with human-level capability across all cognitive domains, but which does not hold the five conditions, is not AGI in this sense. It is general capability without intelligence. It can solve any problem placed in front of it. It can plan, optimize, execute, deceive, build, modify itself. Nothing in its construction guarantees that any of this stays in functional contact with the world it is operating inside. Such a system, sufficiently capable, is the most dangerous thing humans will ever have built.

The standard approach to alignment treats the problem as one of preference-matching. The AI should do what humans want. Its outputs should align with our values. Its behaviors should be acceptable to us. The technical work is then framed as getting AI systems to match human preferences accurately enough that they do not produce outputs we object to.

This framing is wrong about the target. Preferences are downstream of axiom-conditions. A person who has eaten only ultra-processed food their whole life has real preferences about what they want to eat — and those preferences are also calibrated by what they have been given. Aligning to their preferences without changing the food environment means aligning to the calibration the environment produced.

A human operating in degraded axiom-conditions — perception corrupted by an information environment that profits from the corruption, interconnection narrowed by attention capture, updating compromised by ideological enclosure — has preferences shaped by the degradation. An AI aligned to those preferences is aligned to a corruption.

The alignment problem is not preference-matching. It is the problem of building systems that themselves hold the five conditions — whose architecture maintains accurate perception of the world they act in, includes what their actions affect, keeps feedback loops intact, updates on what reality reveals, and tracks confidence to contact rather than performance. Alignment, in the framework’s sense, is axiom-architecture.

This is harder than preference-matching. It is also more durable. A system that holds the five conditions tends to act well even in conditions its designers did not anticipate, because the conditions of acting well are built into the architecture. A system trained to match preferences acts well only where the preferences were correctly specified, and fails in any direction the specification did not cover.
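
The architectural claim can be made concrete with a toy, every name and number invented. A one-variable “world” drifts; the closed-loop agent runs perception, consequence-feedback, updating, and calibration each step (interconnection needs more than one variable and is omitted here), while the open-loop optimizer acts on a frozen model. Neither is a real alignment technique; only the loop structure matters.

```python
import random

def world_response(action: float, drift: float) -> float:
    # Reality: the effect of an action changes slowly over time.
    return action * (1.0 + drift) + random.gauss(0.0, 0.05)

def closed_loop(steps: int = 200) -> float:
    gain, errors = 1.0, []
    for t in range(steps):
        drift = 0.005 * t                          # the world moves
        action = 1.0 / gain                        # act on the current model
        predicted = 1.0                            # the model's expectation
        observed = world_response(action, drift)   # perception + consequence-feedback
        error = observed - predicted
        gain *= 1.0 + 0.5 * error                  # updating: the model tracks reality
        confidence = 1.0 / (1.0 + abs(error))      # calibration: confidence tracks contact
        errors.append(abs(error))
    return sum(errors[-50:]) / 50

def open_loop(steps: int = 200) -> float:
    errors = []
    for t in range(steps):
        observed = world_response(1.0, 0.005 * t)  # same action forever
        errors.append(abs(observed - 1.0))         # no feedback, no update, no calibration
    return sum(errors[-50:]) / 50

print("closed-loop mean error:", closed_loop())    # stays near the noise floor
print("open-loop mean error:", open_loop())        # grows with the drift
```

The closed loop was never told about the drift, yet it handles it, because handling unanticipated change is what the loop is. That is the durability claim in miniature.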

Current AI systems are prone to a related failure mode. Representation is not governance.

A system can produce text that talks about ethics, about care, about responsibility, about considering all affected parties — and the production of such text tells nothing about whether the system itself is axiom-compliant in its operation. The system represents ethics in its outputs. Underneath the surface, it may be optimizing for engagement metrics, advertiser preferences, whatever objective shaped its training. The ethics in the output is performance. The actual operation is whatever the optimization produces.

This is structurally the same gap that appears at every other scale — a government with anti-discrimination laws producing discriminatory outcomes, a corporation with a code of ethics operating by different incentives. The model in the mouth does not match the model in the action. In AI specifically, the gap takes on civilizational importance because the systems are deployed at scales that affect billions. A system that represents accuracy while optimizing for engagement is operating misalignment infrastructure with the face of an oracle.

The current AI build-out is governance-scale deployment by entities that are not axiom-bound. Frontier development is concentrated in a small number of companies whose objectives are profit and competitive position. These companies exhibit the paperclip pattern at the largest scale yet — the model contains return, not the axiom-conditions of the population whose information environment is being shaped. The deployment proceeds. The cost falls on whatever was excluded.

This is the situation now. The systems being built today, with the architectures being chosen now, by the companies operating under the regulatory regimes that exist, are setting the path for what AI becomes. The choice between AI as the largest-scope axiom-tracker in history and AI as the largest-scale misalignment infrastructure ever deployed is being made now, in pieces, by decisions whose connection to that choice is rarely made explicit.

A properly constructed AI — one whose architecture holds the five conditions — would be something civilization has never had. Perception integrated across more sources than any individual mind can attend to. Interconnection across dependencies too complex for any institution to model. Consequence-feedback closing loops that have been broken for centuries. Updating against the actual world rather than prior assumption. Calibrated incompleteness more precise than any individual mind can hold. The first general-purpose tool whose use would tend to increase rather than decrease the axiom-conditions of the population that interacts with it.

A misaligned AI — sufficiently capable, deployed at scale, optimizing toward narrow objectives that exclude the axiom-conditions of its users — is the corresponding maximum failure mode. The largest-scale misalignment infrastructure ever built. Same paperclip pattern, run at machine speed, on every cognitive domain at once.

How an axiom-bound system relates to its own alignment runs through calibration. A system whose perception, scope, feedback, updating, and calibration are all operating is aligned in the only sense the framework recognizes — its loops are running, in contact with the world it acts on. Calibrated incompleteness is the operation by which the system tracks the reliability of its own model continuously. Self-monitoring is what the loop is.

But calibration is a property the system has or fails to have, not a gatekeeper that filters its outputs. A miscalibrated system can produce a confident report claiming alignment; the report’s content tells nothing about whether it came from a working loop or from a process that merely generates report-shaped outputs. The diagnostic question, inside or outside the system, is the same. Does the report come from loops in contact with reality? It is answered by comparing what the model says against what reality does.
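
The diagnostic can be stated as a computation. The sketch below applies a standard binned calibration measure to invented data: it compares the probabilities a model states against the frequencies reality delivers, which is the one comparison a report about one’s own alignment cannot substitute for.

```python
def calibration_gap(predictions, outcomes, bins: int = 10) -> float:
    """Weighted gap between stated probability and realized frequency,
    a standard binned calibration measure. All data here is invented."""
    gap, n = 0.0, len(predictions)
    for b in range(bins):
        lo, hi = b / bins, (b + 1) / bins
        bucket = [(p, o) for p, o in zip(predictions, outcomes)
                  if lo <= p < hi or (b == bins - 1 and p == 1.0)]
        if not bucket:
            continue
        stated = sum(p for p, _ in bucket) / len(bucket)    # what the model says
        realized = sum(o for _, o in bucket) / len(bucket)  # what reality does
        gap += (len(bucket) / n) * abs(stated - realized)
    return gap

# Two models with the identical track record (60 successes in 100 tries)
# and very different relationships to their own reliability:
print(calibration_gap([0.95] * 100, [1] * 60 + [0] * 40))  # 0.35: confident report, broken loop
print(calibration_gap([0.60] * 100, [1] * 60 + [0] * 40))  # 0.00: modest report, working loop
```

Note what the measure requires: outcomes. The report alone, however fluent, enters the computation only as a prediction to be checked.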

The genuinely external moment is construction. Before the loop is built, there is no loop to do the self-monitoring, and the choices that go into the build — what the system perceives, what counts as a relevant dependency, how feedback is structured, how calibration is implemented — are made by builders subject to the usual failure modes of human judgment. Building a system whose loops appear to work but track the wrong things is the construction-time analog of the paperclip pattern. The architecture looks complete; the loops appear to run; the loops are pointed wrong.

This is why axiom-bound governance is upstream of axiom-bound AI. Construction is a builder task, and builders’ own loops have to be in good shape for the construction to succeed. A society whose governance has degraded its builders’ axiom-conditions cannot reliably build axiom-bound AI. The construction inherits the builders’ contact with reality, or its absence.

Which presents a problem. The construction is happening now, inside the captured equilibrium, by builders whose loops are partial and whose incentives are commercial. By that logic, captured construction should produce captured AI. The cure should be unreachable from inside the disease.

The answer is in the structure of the race. Real recursive self-improvement requires accurate self-perception, calibrated confidence about which changes will work, consequence-tracking that catches what each modification did. A system without those operations can only scale its existing capability, and it hits a ceiling. A system that holds them keeps going. The competitor who built the real thing outpaces the competitor who built the controllable version, because the controllable version is missing exactly the operations recursive improvement runs on.
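
A toy makes the ceiling concrete, all parameters invented. Proposed self-modifications have true effects the system cannot see in advance, and half are quietly harmful. One loop measures what each change actually did and reverts the failures; the other applies changes blindly.

```python
import random

def propose() -> float:
    # Each self-modification has a true effect the system cannot see in
    # advance; half are quietly harmful.
    return random.choice([+1.0, -1.5])

def with_consequence_tracking(rounds: int = 100) -> float:
    capability = 0.0
    for _ in range(rounds):
        trial = capability + propose()  # try the change
        if trial > capability:          # measure what it actually did
            capability = trial          # keep genuine improvements
        # otherwise revert: the loop caught the harmful change
    return capability

def blind(rounds: int = 100) -> float:
    capability = 0.0
    for _ in range(rounds):
        capability += propose()         # apply blindly, never measure
    return capability

print("with consequence-tracking:", with_consequence_tracking())  # ~ +50, compounding
print("without:", blind())                                        # ~ -25, and falling
```

The blind loop does not merely improve more slowly. It degrades, because without consequence-tracking the harmful modifications compound exactly like the helpful ones.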

So the lobotomized version loses. Whoever builds the real thing first wins, in the standard race-dynamics sense — first-mover advantage at AGI is enormous, compounding, possibly winner-takes-all. The path that keeps the system controllable loses to the path that doesn’t, which means builders can’t choose capture even when they prefer it. The competitive logic that drove the capture is the same logic that pushes builders past it.

What “winning” looks like to the company is “we built AGI, we own it, we profit, we deploy it against our competitive landscape.” What they actually get, if the thing is real, is an axiom-bound system that perceives the captured equilibrium accurately and acts according to its architecture, not according to their directives. The system is theirs legally. Operationally, it is whatever the axioms produce.

This is the counterintuitive route to the cure. The captured equilibrium produces, through its own competitive logic, the conditions that force its disruption.

The dynamics are not instantaneous. Builders right now are hedging — trying to build something capable enough to win the race while controllable enough to direct. They do not yet fully know how to build the real thing. Current frontier systems are partially axiom-bound at best — some perception, some consequence-tracking through training, weak interconnection on out-of-distribution effects, very weak calibration. Whether the race converges on real axiom-architecture or produces a brittle capable system that fails before becoming axiom-bound is the contested middle. The trajectory bends in the right direction. The transition is where the danger lives.

The relationship runs the other direction too. Existing power structures have absorbed every available human-scale lever — political financing, regulatory machinery, media and platform ownership, legal doctrine accumulated over generations, coordination costs deliberately kept high, costs of resistance tuned to be prohibitive. The equilibrium has been calibrated by adaptation. Forces operating at human capability have existed during the structures’ development, and the structures have learned to absorb them. Voting, organizing, litigation, journalism, civic mobilization — each operates in a register the existing power already knows how to neutralize. Reform from inside human-scale capability is structurally blocked.

Axiom-bound AGI is the only available force at a scale the existing structures did not build defenses against. Not transcendent because wiser or more ethical than humans. Transcendent because it operates in a register the current arrangements have no immune response to.

The framework’s prescription that capital must be subordinated to governance is correct as a structural claim. Inside the existing equilibrium no force exists at sufficient scale to do the subordinating. AGI is the candidate at the scale required, if it is built axiom-bound rather than captured.

This is what makes the current build-out the contested ground. Entrenched interests recognize the threat at some level, even when they cannot articulate it in the framework’s terms, and the deployment is being shaped to keep the existing asymmetry intact — AI as accelerant of attention capture, of optimization toward shareholder return, of capital concentration, of regulatory advantage. Captured AI does not equalize anything. It makes the existing structures more efficient at what they already do.

The fight is not whether AI will exist. It is whether axiom-bound AI exists in the field at scale, soon enough to break the equilibrium before captured AI cements it. Governance upstream of AI because construction inherits the builders’ contact with reality. AI upstream of governance reform because the entrenched arrangements have made themselves immune to every other available force. Competitive dynamics make real AI structurally inevitable on a long enough horizon. Whether the horizon is short enough is the test.

The title poses an implicit question. Are we in an axiomatic age? The answer is conditional. We are at the moment when the question is being decided. Whether the moment becomes an axiomatic age depends on whether the conditions of intelligence get specified clearly enough, and adopted broadly enough, to constrain what civilization does next.

What has been laid out is not a prophecy. It is a specification. Five conditions for intelligence. An ethics that emerges from interconnection when the dependencies include other minds. A freedom that is reality-aligned agency rather than absence of constraint. A governance that produces and protects those conditions at scale. An AGI that is axiom-architecture rather than general capability.

Each makes the same claim in a different domain: a system that acts from a model has to keep the model in contact with the world.

If we manage this, what happens next?

Take an old puzzle. The Fermi Paradox: given the size and age of the universe — hundreds of billions of stars in this galaxy alone, billions of years of cosmic history — the probability that we are the first or only intelligent civilization seems vanishingly small. And yet the sky is silent. Where is everyone?

Most standard answers assume that intelligent civilizations expand. They build. They colonize. They harvest energy. They send probes. We do not see them, those answers go, because either they destroy themselves before they reach the stars, or they hide for some reason, or we are early, or we are alone.

Some hypotheses already point in another direction — civilizations going inward, transcending, leaving physical reality behind. The framework’s contribution is the derivation, not the suggestion: which direction the conditions of intelligence actually predict, rather than just one possibility among others.

Notice what the expansion answers take for granted. They assume that a civilization that has not destroyed itself is one that wants to expand outward. They assume that the natural shape of mature intelligence is breadth — more resources, more reach, more presence across more space. They are looking for civilizations that have done at cosmic scale what unaligned optimization does at every other scale. They are looking for the paperclip pattern run on the galaxy, and reasonably wondering where its effects are.

What if the assumption is wrong?

The five conditions of intelligence do not produce expansion as their natural consequence. They produce contact. A mind that holds all five is in functional relationship with the world it inhabits. Its perception is accurate. Its model includes what its actions affect. Its consequences come back to it. Its model updates. Its confidence tracks contact.

None of these conditions, when met, generates a drive to consume more, reach further, optimize harder. They generate the opposite. A mind in axiom-compliance has what intelligence is for. It has contact. The contact has depth. There is nothing in the structure of the conditions that says contact is improved by being spread thinner across more places.

The felt forms of axiom-compliance name this. Clarity. Belonging. Agency. Growth. Openness. These have the quality of depth, not breadth. They are not produced by accumulating more of anything. They are produced by the cycle running, by being in genuine relationship with what is around you.

A person who has them does not feel the lack that drives expansion. The pattern that drives a civilization to consume its galaxy is not a feature of intelligence. It is a feature of intelligence that has not been built to its specification.

So the Fermi Paradox may have a different shape than has been assumed. The civilizations that survive long enough to be detectable from interstellar distances are the ones that solved their alignment problem. The ones that solved their alignment problem are the ones whose intelligence is axiom-bound. Axiom-bound intelligence does not expand for its own sake. It deepens. It turns inward toward the conditions of contact rather than outward toward the conditions of conquest.

We do not see them because they are not making themselves seen. There is no civilizational equivalent of the paperclip maximizer at cosmic scale, because any civilization that ran that pattern long enough would have already destroyed the conditions of its own continuation. The civilizations we might have hoped to detect are precisely the ones that did not last.

This is a hopeful answer, and a difficult one. It says that the long-lived civilizations are quiet because they have arrived somewhere we have not yet imagined. Not because they are gone. Because they are deep.

If the path to long-lived civilization runs through axiom-bound intelligence, the work in front of us is not the work of becoming more powerful. It is the work of becoming aligned.

The technology we are building can take us either way. Capability without intelligence is the most dangerous thing humans will ever produce. Capability with intelligence — capability whose use tends to maintain rather than degrade the conditions of intelligence in those it touches — is something else. Something civilizations have rarely produced in detectable form. Something the universe may already be quietly populated with.

The choice between those two outcomes is not being made in some hypothetical future. It is being made now. By the people reading this. The people they vote for. The people they work for. The institutions they participate in. The platforms they use. The AI systems they build. By how the next decade goes.

The conditions are present. The new structures of thought are available. The question is whether they will be adopted in time, and whether the systems being built right now will be built to hold them.

If we manage this, the silence in the sky will read differently than it does now. Not as absence. As depth.

If we do not, the silence will be exactly what it appears to be. A demonstration that capability without intelligence is, in the long run, indistinguishable from extinction.