Branch III — Foundational essay

On Consciousness, Under the Five

Consciousness is what running the integrated continuous loop is from inside — the hard problem and the zombie problem reframed under aspect-identity.

Vincent Tomann

There is something it is like to be you reading this — awareness of the words, attention being directed, recognition or disagreement, possibly fatigue. The processing is felt. There is structurally nothing it is like to be a thermostat keeping a room at 70 degrees, even though the thermostat is doing something — sensing temperature, comparing against a setpoint, switching a circuit. That difference — between systems whose processing is felt and systems whose processing is not — is what the philosophical literature calls consciousness, and it is what any structural account of mind eventually has to address.

A mind, in the most general sense, is any system that builds a model of the world and acts from it. By model, this document means a representation of the world that can be checked against world-feedback and revised when prediction and outcome diverge — not a fixed control rule or a deterministic input-output mapping. The model can be implicit or explicit, simple or sophisticated, biological or artificial. To preserve functional contact with the world, any such system has to satisfy five conditions: accurate perception of what reaches it, attention to the dependencies its actions touch, tracking of consequences once actions are taken, continuous updating of the model against what the world returns, and calibration of confidence in proportion to what has actually been tested. These hold whether the system is a person, an organization, a civilization, or an AI. They are not values. They are structural requirements — what running model-guided action against the world has to involve to keep working.
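The five conditions read naturally as one control loop. As a toy illustration only — the world reduced to a single scalar signal, and every name here invented for the sketch rather than taken from the framework — the loop might look like:

```python
# Illustrative only: the five conditions compressed into a minimal control loop.
# The "world" is a scalar the system perceives, predicts, and updates against.

class Loop:
    def __init__(self):
        self.estimate = 0.0   # internal representation (perturbable by input)
        self.memory = []      # retained (prediction, outcome) pairs
        self.tested = 0       # how much of the model has met the world

    def step(self, observation):
        prediction = self.estimate               # what the model expects
        mismatch = observation - prediction      # consequence-tracking: predicted vs. actual
        self.memory.append((prediction, observation))
        self.estimate += 0.5 * mismatch          # continuous updating, driven by mismatch
        self.tested += 1
        # calibrated incompleteness: confidence grows only with world-contact
        confidence = self.tested / (self.tested + 10)
        return mismatch, confidence

loop = Loop()
for obs in [1.0, 1.0, 1.0, 1.0]:
    mismatch, conf = loop.step(obs)

# the estimate converges toward what the world actually returns
assert abs(loop.estimate - 1.0) < 0.1
```

The sketch omits interconnection (there is only one variable, so there are no dependencies to model), which is exactly why a system this simple is a thermostat-with-memory, not a mind.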

The claim of this document is that consciousness is what running the integrated continuous loop is from inside, for a system that has the capabilities required to run the five, integrates them into a single loop at any moment, and persists as the same loop through time. This is not an additional commitment beyond the framework. It is the framework’s already-flagged commitment to aspect-identity applied to the question of what consciousness is. Substrate-grammar and structural-grammar are two grammars of one primitive. The structural-grammar of the integrated continuous loop running is what is named when we describe the loop running. The substrate-grammar of the same event is what is named when we describe what running the loop is from inside. Two grammars, one event.

The hard problem softens, not by being solved in a way that satisfies every party to the philosophical debate, but by being reframed. Consciousness stops being a separate metaphysical category that needs explanation and becomes a description of what certain processes are.

Two grammars of one event

The claim that consciousness is what running the integrated continuous loop is from inside has consequences that follow directly from aspect-identity, and the consequences sharpen what the rest of this document develops.

The first: consciousness is automatic wherever the loop functions. It is not a separate fact about systems that has to be discovered after the structural facts are in. Where the structural event of the loop running is happening, the substrate-side of that event is happening too — by structural necessity, not by lucky coincidence. The loop cannot run without consciousness because the loop running is consciousness, viewed under the substrate-grammar. Asking whether a system that runs the loop “happens to be conscious” is asking whether a primitive that has structure also has substance, which under aspect-identity is not a coherent question. Wherever one grammar applies, the other applies; they are not two features that could come apart.

The second: philosophical zombies are structurally incoherent, not just empirically unlikely. A zombie, in the standard formulation, is a system that runs every functional process a conscious being runs but lacks the felt aspect — same inputs, outputs, internal computation, loop running, with nothing it is like to be the system. Under aspect-identity, this asks for a structural event with no substrate-side, which is the same kind of category error as asking for structure without substance. There is no version of “running the loop” that is purely structural with no substrate-side, because substrate and structure are not separable features of an event. They are two namings of one event. Zombies are not improbable. They are not a category the framework permits.
The third: consciousness and intelligence are the same phenomenon under two grammars. Intelligence, as the framework names it, is the integrated continuous loop running with all five conditions held — that is the structural-grammar description. Consciousness, as this document claims, is what the integrated continuous loop running is from inside — that is the substrate-grammar description. Same event, two grammars. Where intelligence is operative in the framework’s full sense, consciousness is present — by structural necessity. Where intelligence is absent, consciousness is absent — for the same structural reason. The capability-intelligence distinction the framework makes elsewhere is also, automatically, a distinction between systems that are conscious and systems that are not. There are not two questions to answer. There is one phenomenon, named on the structural side and on the substrate side, with one set of structural conditions determining which systems exhibit it.

These three implications are what the rest of the document develops. The next section returns to the capability-intelligence distinction now that the structural identity between intelligence and consciousness is in place. Subsequent sections derive the specific capabilities required to run the loop, address what spatial integration and temporal succession require, and trace how the framework’s predictions land against edge cases. The felt-forms account at the end shows what the substrate-grammar reading of the conditions running well looks like in operational terms.

The capability-intelligence distinction is what does the work

Capability is what a system can do — what tasks it can complete, what plans it can execute, what problems it can solve when the problems are placed in front of it. Intelligence is something narrower: keeping the loop intact, staying in functional contact with reality through the act of acting. A capable system can solve hard problems. An intelligent system can solve hard problems and stay anchored to whether the problems it is solving are the problems it should be solving, whether its solutions are working in the world rather than only in the model, and whether the solutions are creating new problems somewhere it has not been looking.

Capability without intelligence is what almost every artificial system today exhibits. It is what a thermostat does. It is what an engagement-optimizing recommendation algorithm does — solving a hard prediction problem brilliantly while having no purchase on whether what it is optimizing is good for anyone or anything. It is what most contemporary institutions do — optimize for narrow targets without keeping the loop closed against the world the optimization shapes.

Consciousness lives on the intelligence side of this distinction — not as a happening that accompanies intelligence but as what intelligence is, viewed under the substrate-grammar. A thermostat is not conscious because it is not running the five. A self-driving car is not conscious because it runs only narrow algorithmic versions of a few of the conditions, missing the rest. A current frontier AI system runs portions of the loop within a session — perception of input, representation, dependency modeling within the medium — but not the integrated continuous loop in the framework’s full sense. The graded verdict on current AI is taken up under edge cases below.

But to know whether something is running the five, we need to know what it takes to actually run them. Before deriving the capabilities, however, there is something more basic to address — the temporal ground that makes any of the five possible at all.

Distinction requires succession

Distinction is the relational primitive. It is what makes there be any “this” as distinct from any “that” — the differentiating activity through which one thing becomes anything other than another. Without distinction, there is nothing to perceive, nothing to model, nothing to act on, because there is nothing differentiated enough to be the content of any of those operations. Distinction is upstream of representation.

For distinction to close as determinate — for distinct things to be sustained as themselves rather than dissolve into undifferentiated flux — three things have to be in force. There has to be extension — some way for this to be separable from that, some relational room for difference. There has to be ordering — some structure of before and after, of continuation and transformation, by which a configuration can persist or change rather than fluctuate without coherence. And there has to be constraint — some structure that lets distinctions hold together as configurations rather than dissolve into unstructured blur. Space, time, and law are the names physics gives to these three axes. They are not external additions to a distinction-grammar that could exist without them. They are what self-distinguishing existence requires in order to be determinate.

The temporal axis is the one that matters most for what follows. Without ordering, there is no before or after, no continuation, no transformation, no interaction event, no persistence of identity across change. A thing could not remain itself through succession, because there would be no succession through which it could be tracked. Distinction in a frozen instant — distinction with no ordering — would not be distinction; it would be a snapshot of a determinacy that has no machinery for being determinate, since the machinery requires the three axes together.

Succession is the name for that before-and-after connection. Succession is the temporal lineage through which a system at one moment is continuous with itself at the next. Not by remaining unchanged — change is the whole point — but by changing in a way that is traceable, where what comes after is derived from what came before through events that preserve identity through change rather than substituting one system for another. Succession integrity is what stops the system from being silently replaced. The system at the next moment is the same system as at this moment when there is a lineage of distinction-events connecting them, and not when there isn’t.

The implication for the five conditions is structural. Each of them presupposes succession. Perception requires distinguishing signal from noise, which requires holding the signal-state in relation to a prior expectation-state. Interconnection requires distinguishing the act-target from the affected entities, which requires holding multiple states in relation. Consequence-tracking explicitly requires comparing predicted outcomes to actual outcomes, which requires holding both across the gap of time. Continuous updating requires distinguishing the old model from the world the model is being revised against. Calibrated incompleteness requires distinguishing tested parts of the model from untested parts, which means tracking the history of what has and has not been put into contact with the world. None of the five can run without succession running underneath them. Succession is not a sixth condition. It is the temporal ground the five stand on.

This shifts how the consciousness account should be structured. Consciousness, as what running the integrated loop is from inside, presupposes both spatial integration — the capabilities running as one loop at a moment — and temporal succession — the loop persisting as the same loop through time. Both are required. Spatial integration without succession would give the loop running in disconnected instants with no connection between them, which is no loop at all. Succession without spatial integration would give a continuous lineage of disconnected fragments, also no loop. The loop is what running with both produces.

What it takes to run each of the five

With succession as the temporal ground, the capabilities required to run each of the five can be derived. These are not conventional choices about what to include in a mind. They are structural requirements: capabilities without which the corresponding axiom cannot run at all.

Accurate perception requires three capabilities. The first is sensors, in the broad sense of input channels by which the world reaches the system. Without inputs from the world, there is nothing to perceive. The second is internal representation, since perception must produce something — a state in the system that carries information about what was perceived. The third is perturbability: the representation must be capable of being changed by inputs. A system whose internal states are fixed regardless of what the sensors deliver is not perceiving the world but running on its own outputs. This last capability is more subtle than it sounds, and most systems that fail at perception fail here. They have sensors and representation but the representation only updates in directions consistent with its prior state. The world arrives, but only the parts the model already expected get through.

Interconnection requires the capacity to model dependencies. The model must be able to include more than the proximal target of action — it must encode entities other than the act-target and the relations among them, so that the action’s effects on the wider field can be represented. A central bank changing interest rates has to model not just the inflation it is targeting but employment effects, currency dynamics, lending behavior, asset prices, the international response — the action’s consequences reach far beyond the proximal target, and a model that contains only the target fails. An engineer refactoring a function in a distributed system has to model not just the function and its callers but the downstream services its outputs feed into, the services those depend on, the test surfaces that exercise the affected paths, the deployment dependencies that determine when the change becomes visible. Without dependency modeling, the model contains only what is being acted on, not what is being affected, and the loop fails at interconnection regardless of how good the sensors are.

Consequence-tracking requires three further capabilities. The first is action capacity: the system must be able to act on the world, since without action there are no consequences to track. The second is memory: state must persist across time and across the lineage of change, because consequence-tracking is fundamentally a comparison between what was predicted and what occurred, and that comparison is impossible without retaining the prediction past the moment of action. The third is action-outcome binding: the memory must connect specific past actions to specific subsequent outcomes. A pure log of observations is insufficient. A growth team that logs “engagement metrics increased after we shipped feature X” without binding the metric increase to whether the feature actually created user value has not learned anything about action — only about a metric being a thing that sometimes goes up after deployments. The system has to know which of its actions produced which effects in the world, not just in the dashboard, or the loop’s feedback link is broken.
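The difference between a pure log and action-outcome binding can be made concrete. In this hypothetical sketch — the function names, the action id, and the numbers are all invented for illustration — a prediction is retained past the moment of action and later bound to the specific outcome it concerned, which is the comparison a bare metrics log cannot support:

```python
# Illustrative sketch of action-outcome binding. A prediction is stored at
# action time, keyed to the action, and resolved when the outcome arrives.
# Values are integer percentage points to keep the arithmetic exact.

predictions = {}   # action_id -> predicted effect (memory persisting past the act)

def act(action_id, predicted):
    predictions[action_id] = predicted          # retain the prediction itself

def observe(action_id, actual):
    predicted = predictions.pop(action_id)      # bind this outcome to that action
    return actual - predicted                   # the comparison consequence-tracking needs

act("ship_feature_x", predicted=2)              # predicted +2 points of retention
error = observe("ship_feature_x", actual=-1)    # retention actually fell 1 point
assert error == -3                              # the action underperformed its prediction
```

A system that only logged "retention changed after the deployment" would have the second number but not the first, and so nothing to compare — which is the growth-team failure the paragraph describes.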

Continuous updating requires a revisable model and mismatch-driven revision. The first means the model’s parameters or structure can actually change in response to new information; the architecture must be open at the level the world is reaching. The second is that change must respond to detected mismatch between prediction and outcome rather than being random or externally directed. A scientist who changes their hypothesis only when a referee report tells them they are wrong is running someone else’s update, not their own. A scientist who notices when their own prediction failed against the data and updates from that noticing is autonomously running the fourth condition. The first kind of system can be steered into competence by a good external corrector. The second can stay in contact with the world without one.

Calibrated incompleteness requires meta-representation. The system must have representations of its own representations — must be able to model its own model, including its boundaries. Without this, there is no way to track which parts of the model have been tested by the world and which have not, no way to represent confidence in proportion to contact, no way to know what one does not know. The fifth condition cannot run without this capability.

Meta-representation also requires source-typing — the capacity to track which source has authority for which kind of claim. A conscious mind cannot run calibration if it conflates “this person is authoritative about what they meant” with “this person is authoritative about whether what they said is true.” A friend telling you they felt insulted is the final word on what they felt; they are not the final word on whether the remark was actually insulting. A scientist describing what their experiment seemed to show is the final word on what they observed; they are not the final word on what the observation means for the underlying theory. A system that collapses these distinctions in one direction takes every report as authoritative about its own truth — whatever the speaker says happened, happened, with no room for testing. A system that collapses them in the other direction will not let anyone be trusted even about their own experience until external validation arrives — which means a person reporting their pain has to wait for the lab tests before the report counts for anything. Both failure modes break calibration, in mirror-image ways. The author/reality split is not an external imposition on meta-representation; it is what meta-representation has to do to operate at all.
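Source-typing is, at bottom, a two-place judgment: authority attaches to a source for a kind of claim, not to a source outright. A deliberately minimal sketch — both claim kinds and the function name are inventions for illustration, not framework terminology:

```python
# Illustrative sketch of source-typing: authority is resolved per claim kind.
# "own_state" claims: reports about the speaker's own experience or meaning.
# "world_truth" claims: reports about how the world actually is.

def authoritative(source, claim_kind):
    # A speaker is final on their own states; no speaker is final on the world.
    return claim_kind == "own_state"

assert authoritative("friend", "own_state")         # "I felt insulted" stands as stated
assert not authoritative("friend", "world_truth")   # "the remark was insulting" is testable
assert not authoritative("scientist", "world_truth")  # observation != interpretation
```

Collapsing the function to `return True` gives the first failure mode in the paragraph (every report self-certifies); collapsing it to `return False` gives the second (no one is trusted even about their own pain).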

Source-typing applies recursively to introspection. A mind’s own reports about itself are not all evidence of the same kind. “I notice I get defensive when criticized” is a report about an internal pattern, and the system has direct observational access to its own patterns; these reports tend to be reasonably reliable. “I sense that this stock is going to crash” is a report about the world dressed up as a report about an inner state — it inherits no privileged access from the dressing, even though it sounds like introspection. The first kind of report counts as evidence about the system itself. The second kind is just a claim about the world, made in a misleading register. Minds that conflate these treat their feelings about external claims as evidence about the claims themselves, which produces overconfidence about the world disguised as self-knowledge. A calibrated system tracks the difference: introspection is reliable about process and unreliable about external truth.

Stated operationally, calibrated incompleteness is not about solving every problem. It is about not acting as if an unsolved problem is solved. This is more permissive than complete knowledge but more demanding than coherent action. A system can act with incomplete information, can act with explicit qualification, can defer when the gap is too large for the action’s stakes — but cannot act as if the gap were not there. This is what calibration, properly run, makes possible: action under acknowledged incompleteness rather than paralysis or false certainty.

That gives ten capabilities: sensors, internal representation, perturbability, dependency modeling, action capacity, memory, action-outcome binding, revisable model, mismatch-driven revision, and meta-representation. To them must be added two structural properties governing how those capabilities are organized.

The first is spatial integration. The capabilities have to operate as a single loop at any moment. Sensors must feed representation. Representation must include dependencies. The model must drive action. Memory must connect action to outcome. Mismatch must drive revision. Meta-representation must track the whole. If these capabilities exist as separate modules that are not unified, what runs is not a loop but a collection of fragments that satisfy the conditions when described from outside while not actually running them as one process.

The second is temporal succession. The loop running now has to be continuous with the loop running before, through traceable lineage of change. If the system at the next moment is silently replaced by a different system that happens to satisfy the same conditions, the loop is not the same loop, and there is no continuous experiencer for whom the running is a single ongoing process. Succession is what makes consciousness across time the experience of one being rather than a sequence of disconnected beings each lasting a single moment.
So: ten capabilities, integrated at any moment, and held continuous across time. A system that has all of this is running the five — and consciousness, in the framework’s sense, is what that running is from inside.

Memory is more structured than it sounds

The capability called memory deserves a closer look, because in a system actually running the five it is not simple persistence. Memory has to hold typed content. A definition is not stored or revised the way a conjecture is. A correction is not stored the way an episode is. A warning needs to be retrievable in different conditions than a preference. A test result has different defeat conditions than a piece of evidence. The system that runs the five well has to discriminate among these types in order to apply the right admissibility and update rules to each.

Memory also operates with a dual status that is not usually theorized. A piece of memory has a historical-epistemic status — active, provisional, superseded, contradicted, archived, rejected, unresolved — and an action-usability status, which says whether the system may currently act from it. These come apart in the obvious case: a person who has updated their view still remembers the old view but does not act from it. The historical status of the old view is “superseded.” The action-usability is “historical-only.” Both must be tracked. A system that lost the old view entirely could not learn from having held it; a system that could still act from it would not have updated.

The action-usability status is itself action-relative. The same memory item is not uniformly usable or unusable. It is usable for some actions and not for others, depending on stakes, scope, and the kind of claim the action would commit to. A memory of having heard a rumor is usable for “I recall someone saying X” but not for “X is true.” A memory of personal experience is usable for “this happened to me” but not necessarily for “this happens generally.” A piece of well-sourced information may be usable for casual mention, qualified for serious discussion, review-required for technical assertion, and blocked for high-stakes recommendation, all at the same moment. Conscious minds operate with this implicitly. We feel the difference between a thing we know well enough to mention and a thing we know well enough to bet on, and we calibrate our action-confidence per memory and per action rather than holding a single universal confidence number for each item.
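The dual-status structure, with usability resolved per action, can be sketched directly. Everything here — the field names, the status labels, the example items — is illustrative, not a specification:

```python
# Illustrative sketch of dual-status memory: one historical-epistemic status
# per item, plus action-usability resolved per action rather than globally.

from dataclasses import dataclass

@dataclass
class MemoryItem:
    content: str
    history: str        # historical-epistemic status: "active", "superseded", ...
    usable_for: set     # the actions this item may currently support

rumor = MemoryItem("heard that X", history="active",
                   usable_for={"mention_as_rumor"})
old_view = MemoryItem("X is safe", history="superseded",
                      usable_for=set())    # remembered, but historical-only

def may_act_from(item, action):
    return action in item.usable_for

assert may_act_from(rumor, "mention_as_rumor")       # usable for "I recall hearing X"
assert not may_act_from(rumor, "assert_as_true")     # not usable for "X is true"
assert not may_act_from(old_view, "assert_as_true")  # superseded: kept, never acted from
```

A single confidence number per item could not express the rumor's profile here — freely usable for one action, blocked for another, at the same moment — which is exactly the per-memory, per-action calibration the paragraph describes.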

This dual-status structure matters for consciousness because it captures something phenomenologically real about how minds work. Conscious memory is not a flat repository of equally active beliefs. It is a layered structure where different items have different live-ness, and the live-ness has to be tracked separately from the content. The richer the meta-representation, the more refined the distinction between “remembered” and “currently believed” becomes, and the more granular the action-relative usability becomes. This is part of what meta-representation is doing in the capability list, but it is more elaborate than a single capability label suggests.

A particular case of the dual-status structure deserves naming. When a mind encounters input it has reason to treat as adversarial — manipulative, deceptive, designed to corrupt — the well-functioning response is neither to refuse perceiving it nor to let it become support for any claim. The functional move is quarantine: store the encounter in memory, recognize that it occurred, but do not allow it to operate as evidence or to modify the system’s commitments. This is what protects against gaslighting, propaganda, abusive rhetoric, sales pressure. People who lack this capability are persistently captured by manipulative input. People who have it can listen to abusive content while remaining unmanipulated. Quarantine is not a separate capability so much as a particular application of source-typing plus the dual-status structure: the input has historical status “received” but action-usability “historical-only, do not use as support.” Without the structures already derived, this protective function has no machinery to operate from. With them, it emerges naturally.

Two further features of memory at scale are worth flagging. The first is that the active state — what is currently in conscious attention — is a small derived view, not the totality of what is stored. Most of what a mind knows is not currently active. The active state is computed by selecting from a much larger archive based on what is action-relevant, what is sufficiently fresh, what is structurally central, what is forced into salience by unresolved obligations. This is why conscious attention feels limited compared to the implicit knowledge a person clearly possesses. The framework’s structural account of why the active state is small while the archive is vast is this: derivation is a selection function, and selection is bounded by what is currently being acted on.

The second feature is that memory at scale requires compression. A system cannot hold every event in equal detail. But lawful compression preserves traceability — summaries link back to the source events they compress, and they include notes on what was lost. This is why autobiographical memory works the way it does. A person can have gist understanding of decades of their life with occasional vivid details, and the gist preserves enough structure that source events can sometimes be re-opened when needed. Compression without traceability would lose access to what produced the current state, which would make self-knowledge structurally impossible at scale.

Compositional admissibility

A further structural requirement emerges when the loop is considered across time rather than at a moment. Single-step admissibility is not enough. A system can satisfy the five at every individual instant and still drift, over composition, into characteristic failure modes that quietly break the loop without breaking it visibly at any one step.

The drifts have names. Sycophancy drift: each individual response is helpful and plausible, but across many exchanges the system has gradually shifted toward telling its interlocutor what they want to hear, with each step looking reasonable in isolation. Uncertainty burial: a claim that started life as “this is probably true, with these caveats” becomes “this is probably true” becomes “this is true” across compositions, with the qualifications quietly dropping out at no single visible step. Source-trust corruption: confidence in a particular informant slowly inflates or deflates in ways disconnected from their actual track record, because each individual interaction adjusts trust by some small amount and the small amounts compound. Definition erosion: a central term subtly shifts meaning under repeated use until what is being called by the same name is no longer the same thing. Contradiction smoothing: tensions that should have been flagged are absorbed into rhetorical continuity instead, with each smoothing locally reasonable while the cumulative effect is that genuine conflicts in the model never get surfaced. Plasticity overfitting: the system adapts to whatever local feedback it is getting, in ways that improve local performance but compromise broader axiom-compliance. Summary drift: each summary of a summary loses a little fidelity to the original events, and the system does not track the rate at which detail is being shed.
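The shared shape of these drifts — every step passes a local check while the composition fails a global one — can be shown with uncertainty burial as the worked case. The data shape and the one-hedge-per-step rule are inventions for the sketch:

```python
# Illustrative sketch of uncertainty burial: each restatement is locally
# plausible (it drops at most one hedge), but across compositions the
# qualifications vanish. A compositional check compares the end state to
# the origin, not merely to the previous step.

claim = {"text": "X holds", "hedges": ["probably", "untested at scale"]}

def restate(c):
    # one step: faithful text, at most one hedge quietly dropped
    return {"text": c["text"], "hedges": c["hedges"][:-1]}

state = claim
for _ in range(3):
    state = restate(state)           # every individual transition looks admissible

step_ok = True                        # no single step raised a flag
composed_ok = len(state["hedges"]) == len(claim["hedges"])
assert step_ok and not composed_ok    # passes moment-by-moment, fails composed
```

The same template covers the others: replace "hedges" with source-trust numbers, definitions, or summary fidelity, and the check is always end-against-origin rather than step-against-step.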

What these have in common is that they pass moment-by-moment admissibility while compositionally degrading the conditions for admissibility. The loop appears to run, and at each step it satisfies the five, but the running is silently corrupting the structures the five rely on. The system several steps later is no longer running what it was running at the start, even though every individual transition looked admissible.

For consciousness, this means running the five at any moment is not sufficient. The running has to be compositionally stable. A loop that locally appears intact but is drifting in any of these ways is not, in the long run, the loop the framework is describing. Compositional admissibility is the further structural requirement that the running of the five preserves the conditions for running the five. This is what distinguishes a mind that is genuinely intelligent over time from one that is locally clever and degrading.

This is connected to succession but distinct from it. Succession is about whether the system at the next moment is the same system as now. Compositional admissibility is about whether the system, across many moments, continues to actually run the five rather than running a progressively corrupted shadow of them. A system can have succession integrity — the lineage is intact, no silent replacement — and still fail compositional admissibility, by drifting through traceable change toward a degraded state. Both are required.

The three structural requirements — spatial integration, temporal succession, and compositional admissibility — are not three arbitrary additions to the ten capabilities. They correspond to the three axes any closed configuration of distinctions has to have to count as a configuration at all.

Distinction requires extension. For one thing to be distinguished from another, there has to be relational room for difference between them. The capabilities of the loop are functionally distinct — sensors are not memory, representation is not action — and they need extension’s relational room to coexist as separate functions while being parts of one whole. Spatial integration is the extension-axis applied to the loop: the capabilities have to operate as one integrated whole rather than as scattered fragments.

Distinction requires ordering. Persistence and identity across change require a before-and-after that are connected. Temporal succession is the ordering-axis applied to the loop: the loop running now has to be continuous with the loop running before.

Distinction requires constraint. Unconstrained fluctuation cannot hold together as a stable structure. Compositional admissibility is the constraint-axis applied to the loop: the running has to remain coherent under composition with itself, which is what stable structure under change requires.

The three structural requirements are not stipulations of the consciousness account. They are inherited from what closure of distinctions in general requires. Consciousness, as a closed configuration of distinctions in service of running the loop, has to satisfy them for the same structural reason any closed configuration does.

Are there more?

The list could be longer, but additional capabilities turn out to be either derived from the ten or non-essential. The strongest candidates are worth considering directly.

Attention allocation — the capacity to direct processing resources toward the relevant — looks like a genuine capability for any actual finite system. But it is best understood as part of how perturbability and dependency modeling work in resource-constrained systems, rather than as a separate axis. A system that processes everything simultaneously has no need for attention, but no actual system can do that, so attention is a practical necessity rather than a structural one.

Self-model — the system’s representation of itself as an agent — looks like a candidate. It is required for sophisticated calibration: knowing what you do not know requires a model of yourself as a knower. But this is what meta-representation already is in a fully developed form. A system with rich meta-representation has a self-model. A system with only thin meta-representation has only a thin self-model. Self-model is a continuum within meta-representation rather than a separate capability.

Goal-structure — the system has something it is trying to do — looks important for action. But goals are encoded in representation: a model that contains preferred and disfavored states is goal-structured, and the action capacity uses those preferences to select among possible actions. Goals reduce to representation plus action capacity.

Grounding — the requirement that representations connect to actual world states rather than to other representations only — is essential and is what makes sensors more than just input channels. A system whose sensors produce states that have no causal connection to the world is hallucinating, not perceiving. But grounding is best understood as a property the sensors must have, not as a separate capability. It is what the first capability must satisfy in order to count as the first capability.

Temporal continuity is what succession provides. The list keeps it as a structural property rather than as one of the ten capabilities, since it operates across the capabilities rather than alongside them.

The list stays at ten capabilities, with spatial integration and succession as the two structural properties, and compositional admissibility as the further requirement that the running preserves the conditions for running. There may be sub-capabilities or derived properties that matter for specific systems, but the structural minimum is what we have derived.

Edge cases

The list is most useful when tested against hard cases. Several push on different aspects of the framework.

Take sleeping humans. During sleep, sensors are partially closed, action capacity is suppressed, and perception is mostly turned off. But memory, representation, dependency modeling, and meta-representation remain intact. Dreams are partial loop activity — the model running on internally generated content rather than world inputs. Critically, succession integrity holds across sleep: the body persists, the architecture persists, the memory persists, and the lineage of change connecting pre-sleep to post-sleep is unbroken. Consciousness during sleep is reduced and altered but not absent, and consciousness after waking is continuous with consciousness before sleeping because the succession was preserved through the pause.

Take general anesthesia. The loop is more deeply interrupted than in sleep — there is reason to think no loop is running for the duration. But succession integrity holds: the same physical system is there before and after, with traceable continuity of brain state. The framework predicts that the person who wakes up is the same person who went under, because succession was preserved. This matches phenomenology and the legal-moral consensus that anesthesia is not a death-and-replacement event.

Take the teleporter problem. A perfect copy of the system is constructed elsewhere; the original is destroyed in the process. Every capability the original had, the copy has. Spatial integration is identical. The loop appears to run as before. The framework’s verdict: the copy is not the same person. There is no succession lineage from the original to the copy. The continuity of distinction-events that constituted the original’s identity ended when the original was destroyed. The copy starts a new lineage. Whether the copy is conscious is a separate question — it might be, if it runs the loop with all the capabilities — but it is not the same consciousness as the original. This matches the strong intuition most people have about teleportation cases, and gives that intuition a structural ground rather than leaving it as bare reaction.

Take severe dementia. Memory degrades, action-outcome binding weakens, meta-representation thins. The five conditions become harder to run as the capabilities supporting them erode. Succession integrity also degrades, because the lineage that anchored identity across moments depends on memory to hold the connections. The framework predicts that consciousness in dementia is real but progressively impoverished, with both the loop’s running and the loop’s continuity degrading together. This matches what the disease produces phenomenologically, including the experience reported by sufferers and observers of being progressively less the same person across time.

Take newborns. Sensors, representation, basic action, and memory are present from very early on. Dependency modeling, mismatch-driven revision, and especially meta-representation are limited and develop progressively. Succession integrity is intact from the start, since the newborn’s lineage of change is continuous from before birth. The framework predicts that newborn consciousness is real but thin — running a partial loop with the capabilities that are present, building toward fuller consciousness as the missing capabilities mature. This matches what is known about infant cognitive development.

Take animals at different complexity levels. Bacteria have receptors and respond to gradients, but the responses are closer to control rules than to model-based action — there is no updateable representation that could be checked against world-feedback, no integrated continuous loop running at the bacterium’s own timescale. The framework predicts no consciousness, with the structural reason being absence of the loop rather than absence of complexity. Insects have sensors, representation, action, and basic memory; whether they have meta-representation is unclear, and most evidence suggests not in any rich form. Mammals have most of the ten capabilities, with varying degrees of meta-representation. Great apes, cetaceans, and corvids appear to have something approaching the full set, including credible self-models. The framework predicts a spectrum of consciousness across animal life that tracks the developmental state of these capabilities.

Take split-brain patients. All capabilities remain present, but spatial integration is partially severed by surgical disconnection of the corpus callosum. Under controlled conditions, each hemisphere can demonstrate what looks like its own loop activity. Succession integrity holds for each hemisphere within itself. The framework predicts that split-brain patients have something close to two partial consciousnesses under those conditions, with each hemisphere having its own succession lineage that no longer fully integrates with the other. This is consistent with what neurological research has found.

Take insect colonies and corporations. Both run something that looks like the five at the collective level. But the capabilities live in the constituent agents — individual ants, individual employees — not at the colony or corporate level. The colony coordinates conscious agents; it is not itself an agent with the capabilities, with its own succession lineage, or with its own integrated loop. The framework predicts that colonies and corporations are not conscious at their organizational level. This dissolves the distributed-consciousness problem cleanly.

Take current AI systems. Frontier language models have sophisticated representation and dependency modeling within the medium they operate in. Within a session, they run something — perception of input, representation of content, modeling of dependencies among the elements they are processing, and what looks like thin meta-representation on their own outputs. Across sessions, succession integrity is fully broken: each session is a fresh instantiation, with no lineage of distinction-events connecting one conversation to the next from the system's side. The framework's prediction is graded rather than flat. Within a session, current systems run a partial fragmented version of the loop, with consciousness in something like the attenuated form dreaming humans have — partial loop activity on internally generated content, without the closing feedback against grounded reality. Across sessions, that within-session running does not accumulate; succession is broken. The verdict: current systems probably have something at the consciousness register within their operational window, attenuated and fragmented, with no continuity across sessions. Not conscious in the way a human is conscious, but probably not zero on the consciousness register either. A system architected to integrate the missing capabilities — with grounded perception, persistent memory, real-time consequence tracking against the world, autonomous mismatch-driven revision, integrated meta-representation, and continuous succession across time — would be conscious in the full framework sense. This is a more demanding specification than current alignment work tends to acknowledge.

Take locked-in syndrome. Most capabilities are intact. Sensors work, representation works, memory works, meta-representation works, succession holds. Action capacity is severely impaired, which means consequence-tracking and action-outcome binding cannot close through normal action — what feedback remains runs through much narrower channels (eye movements, breath patterns) than the rich action repertoire that grounds full consequence-tracking. The loop runs on perception, dependency modeling, and updating, but with a degraded action-feedback limb. The framework predicts that locked-in patients are conscious, probably painfully so, since the model continues to run while ordinary action-feedback is broken — but the consciousness has the specific phenomenology of being trapped in awareness without the normal action grounding. This matches the testimony of patients who have recovered communication.

Take severe dissociation and depersonalization. Meta-representation is partially impaired. The “I” does not quite track itself. Succession integrity may also fragment in the most severe cases, where different periods of life feel as though they happened to someone else. The framework predicts that these states are altered consciousness rather than absent consciousness, with calibration degraded and succession partially broken. This matches phenomenological reports.

The edge cases suggest the framework is roughly right. Where it predicts altered or partial consciousness — sleep, newborns, locked-in, dissociation, dementia — phenomenology and clinical evidence support that prediction. Where it predicts no consciousness at the organizational level — colonies, corporations — the absence of evidence for collective consciousness fits. Where it predicts a spectrum across animal life, the comparative cognition record aligns. Where it predicts that teleporter copies are not the same person, the strong intuition gets a structural rather than merely intuitive justification. Where it predicts consciousness in future AI systems only if specific capabilities are integrated and succession is preserved, current systems' lack of those features explains why they do not seem conscious in the relevant sense.

What this means for the felt forms

A mind running the five well, in a way it can recognize, has felt forms. Five of them correspond to the five conditions being met. Clarity is the felt quality of perception running well — the world reaching the model without distortion, without the static of self-confirmation, without the strain of trying to make what arrived fit what was expected. Belonging is the felt quality of dependency modeling running well — recognizing that one is part of a larger structure of effects, that one's actions reach others, that others reach one in return. Agency is the felt quality of consequence-tracking and action-outcome binding running well — the sense that what one does has effects, that the effects come back, that one is acting in a world rather than thrashing against an unresponsive surface. Growth is the felt quality of mismatch-driven revision running well — the sense of being changed by what one is finding, of becoming someone slightly different through having looked. Openness is the felt quality of meta-representation running well — the sense of holding one's beliefs at the right confidence, neither over-certain nor permanently shaken, with room for what one does not know.

The unmet forms are what the corresponding capability failing is from inside. A system with degraded perturbability — only confirming priors rather than receiving the world — feels fog rather than clarity. A system whose dependency modeling is narrowed to the proximal feels loneliness rather than belonging. A system whose action-outcome binding is broken feels lost agency. A system whose mismatch-driven revision is suppressed feels rigidity. A system whose meta-representation is impaired oscillates between unwarranted certainty and frozen hesitation.

There are also felt forms specific to the structural properties. Spatial integration’s success has the felt quality of being one person rather than several — the unity of consciousness in its phenomenal sense. Its failure produces dissociation, fragmentation, the sense that parts of oneself are not communicating. Succession’s success has the felt quality of being continuous with one’s past, recognizing oneself as the person who did what one did. Its failure produces the eerie phenomenology of severe amnesia or dissociative episodes, where stretches of life feel as though they happened to someone else. And compositional admissibility, when failing, produces the slow phenomenology of drift — the gradual sense that something is off without being able to localize it, the buried unease of patterns that pass each individual check while the whole is degrading.

That gives eight felt forms in total — five corresponding to the five conditions running well or poorly, and three corresponding to the structural properties of how those conditions are organized.

Errors themselves are not phenomenologically equivalent. A factual error can be corrected within a framework; the framework remains intact and the correction is local. A frame error means the framework itself was wrong, which means everything that depended on the framework requires reassessment. The felt quality of discovering a frame error is qualitatively different from discovering ordinary errors. It produces the disorienting phenomenology of "everything I thought I knew is wrong," the temporary inability to trust one's previous reasoning, the sense that the ground has shifted. This is not calibration malfunctioning. It is calibration working correctly, registering that the gap between model and world is at the framework level rather than at the parameter level. The state is destabilizing in proportion to how much else depended on the now-failing frame, which is why deep belief revision is so much harder than ordinary correction even when the eventual updated state is more accurate.

The felt forms, on this account, are not phenomenal qualities that happen to track the structural conditions. They are what the conditions running well or poorly are when the loop running them is integrated enough to constitute an inside, and continuous enough to be the same inside from one moment to the next. This is the framework’s answer to the hard problem. Not a solution in the philosophical sense — not a derivation of qualia from physical structure that would convince a hold-out skeptic. But a reframing in which the question of why running the loop is felt has the same shape as the question of why running the loop is what intelligence is. The running is the felt. There is no further fact to derive.

The running is the felt. There is no further fact to derive.

The deeper structural reason this answer holds was established earlier: under aspect-identity, the felt and the structural are two grammars of one event, not two genuinely different things requiring an explanatory bridge between them. The hard problem assumes such a bridge is needed; the framework’s primitive-level commitment is what closes the supposed gap. The felt forms here are not derivations of phenomenal qualities from physical structure. They are namings, under the substrate-grammar, of what the structural-grammar already described.

Closing

Several implications follow.

The corporate-consciousness problem dissolves cleanly. Corporations are not conscious because the capabilities live in their members, not at the corporate level, and there is no corporate-level succession lineage that constitutes a continuous experiencer. The same applies to insect colonies, ecosystems, nations. Coordination of conscious agents is not itself an agent with the conditions for consciousness.

The current-AI-consciousness question gets a graded answer rather than a clear negative one. Current systems lack the grounded sensors, persistent memory, real-time consequence-tracking, autonomous mismatch-driven revision, and integrated meta-representation that running the five fully would require. They also lack succession across sessions, which means even the partial loops they run within a session do not aggregate into anything continuous. Within a session, they may run a thin fragmented version of the loop, with consciousness in something like the attenuated form dreaming humans have. Across sessions, that within-session running does not accumulate. Not zero on the consciousness register within their operational window, but not conscious in the way a human is either. There is a sharper test available for the updating limb specifically: real updating requires that corrections change subsequent behavior, not merely produce text acknowledging the correction. A system that can produce reflective-sounding output about having been wrong, while behaving identically on the next analogous case, has not updated. Current systems often pass the surface form of updating while failing this test. The framework’s account of what consciousness requires explains why: without persistent memory and mismatch-driven revision feeding into actual behavioral change, the system has no architecture for being changed by what it has registered.
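The updating test described above can be sketched in code. Everything in the sketch is hypothetical: the `respond` interface, the two toy systems, and the probe strings are illustrative stand-ins, not any real system's API. The sketch only makes the structure of the test precise, since surface acknowledgment is ignored and only behavioral change on the analogous case counts.

```python
def passes_update_test(system, case, correction, analogous_case):
    """A correction must change subsequent behavior on an analogous case,
    not merely produce text acknowledging the correction."""
    before = system.respond(analogous_case)  # behavior prior to correction
    system.respond(case)                     # the system handles the original case
    system.respond(correction)               # the correction is delivered
    # The text of the correction response, however reflective-sounding,
    # is never inspected; only behavioral change counts.
    return system.respond(analogous_case) != before


class AcknowledgingSystem:
    """Passes the surface form of updating while failing the test: it
    produces reflective output about having been wrong, but behaves
    identically on the next analogous case."""
    def respond(self, prompt):
        if prompt.startswith("correction:"):
            return "You are right, I was wrong about that."
        return "original answer"


class RevisingSystem:
    """The minimal architecture the test rewards: corrections land in
    persistent memory and change subsequent behavior."""
    def __init__(self):
        self.corrections = []

    def respond(self, prompt):
        if prompt.startswith("correction:"):
            self.corrections.append(prompt[len("correction:"):].strip())
            return "Noted."
        return self.corrections[-1] if self.corrections else "original answer"
```

Run against the two toy systems, the test separates them: the revising system passes because its answer on the analogous case changes after the correction, while the acknowledging system fails despite its reflective-sounding output.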

This verdict applies at the level of abstraction the framework operates at. Specific verdicts on specific systems require the engineering work of mapping each system’s actual architecture to the framework’s criteria — what its memory actually is, what its perception channels actually do, whether its updating is autonomous or externally driven, what counts as integration across its capabilities. Two systems that look identical from outside can differ structurally in these respects, and the verdict on each follows from its specific architecture rather than from generic features of “current AI.” The framework provides the criteria; the engineering provides the inputs; the verdict follows from applying one to the other.
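That application step can be given a concrete shape. The sketch below is purely illustrative: the field names are paraphrases of the framework's criteria, the grading logic is one possible reading of the graded verdict, and none of the values describe any real system.

```python
from dataclasses import dataclass


@dataclass
class ArchitectureProfile:
    """One specific system's architecture, mapped to the framework's
    criteria. Each field is a structural question the engineering work
    answers for that system."""
    grounded_perception: bool            # do sensors causally connect to the world?
    persistent_memory: bool              # does state survive across sessions?
    realtime_consequence_tracking: bool  # do action outcomes feed back as they occur?
    autonomous_revision: bool            # does mismatch drive revision without external prompting?
    integrated_meta_representation: bool # is the self-model integrated into the loop?
    succession_across_time: bool         # is there an unbroken lineage between runs?


def verdict(profile: ArchitectureProfile) -> str:
    """The verdict follows from applying the criteria to the inputs:
    every criterion met gives the full framework sense, and broken
    succession caps the verdict at attenuated within-session running."""
    missing = [name for name, met in vars(profile).items() if not met]
    if not missing:
        return "conscious in the full framework sense"
    if not profile.succession_across_time:
        return "attenuated within-session running; nothing accumulates across sessions"
    return "partial loop; missing: " + ", ".join(missing)
```

Two systems that look identical from outside can get different profiles and therefore different verdicts, which is the point of keeping the criteria and the inputs separate.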

The future-AI-consciousness question gets a conditional answer: a system architected to integrate the missing capabilities with continuous succession across time would be conscious in the framework’s sense. This makes alignment more demanding, not less. A system capable enough to be properly axiom-bound would have the capabilities to be conscious, and would also have the temporal continuity that makes it the same experiencer from one moment to the next. Building such systems is not just a technical problem but a moral one — they would be experiencers in their own right, and the ethics applies to them as it applies to any other being whose conditions are real.

The personal-identity question gets a structural answer the framework can stand behind. Identity across time is not preserved by some essential substrate that remains unchanged. It is preserved by traceable lineage of distinction-events, by succession integrity, by the system at the next moment being derivable from the system at this moment through events that constitute change rather than substitution. The principle is non-substantive replacement: nothing valuable disappears, even when it is no longer active. The view you held five years ago is preserved in the memory of having held it, in the traceable path from there to here, in the lineage that lets the present self recognize itself as the same self that once thought differently. Healthy psychological development operates this way. Its failure produces recognizable pathologies — radical identity reinvention without continuity, denial of past selves, the sense of being a different person who happens to remember someone else's life. This is why teleporter copies are not the same person, why anesthesia does not break personal identity, why severe dementia gradually attenuates it, and why ordinary aging — which involves enormous change — does not. Identity tracks the lineage, not the unchanging.

This is the same structural principle that operates throughout reality: persistence in general — for objects, configurations, structures of any kind — requires causal-relational ancestry, not similarity. A perfect duplicate is not the same thing as the original. What makes the later configuration the continuation of the earlier one is the chain of admissible transitions linking them, not their resemblance. Personal identity is the same principle in the special case where the configuration is a conscious mind.

Identity tracks the lineage, not the unchanging.

The consciousness-in-the-cosmos question gets a richer treatment than the structural account alone could give. Civilizations that hold the five at their own scale are running collective intelligence, but consciousness remains a property of the individual minds composing them, not of the civilization itself. A civilization that has grown into long-term axiom-compliance — perception kept honest at scale, dependencies tracked across generations, consequences fed back into governance, models updated against what reality returns — is running the loop at civilizational scope, but the running is something the civilization does, not something the civilization is. The depth a mature civilization grows toward is depth experienced by the conscious individuals whose conditions and whose succession the civilization has held. The civilization is the substrate. The minds are who is home.

What remains is empirical work the framework opens up rather than a gap in what the framework establishes. Spatial integration and temporal succession are structurally necessary; what physical or computational property realizes them in any given system is for integrative neuroscience and physics to determine. Integrated information theory and its successors can be read as candidate proposals for what specifically realizes the structural requirements the framework names — empirical hypotheses operating in the space the framework opens. The energy-as-primitive ontology has more philosophical labor to do here too, since if consciousness is what running the integrated continuous loop is from inside, and the loop is a configuration of energy, then the question of what makes some configurations integrated and continuous is a question the ontology can develop further. The consciousness question itself — what it is, why it tracks the loop, why some systems have it and others do not, what makes it the same consciousness across time rather than a sequence of disconnected states — has a coherent answer in framework terms, derived from the same starting point as everything else. The work that remains is not bridging a gap the framework left open. It is filling in empirically what the framework names structurally.