H1: Representation vs. Structure: A Quotient-Based Framework for Morphogenetic Compute
H2: Introduction: The “1/2 = 2/4” Problem
H3: The Core Tension: Syntactic vs. Structural Identity
The equation 1/2=2/4 appears trivial. It is not. This simple identity encodes a fundamental tension at the heart of mathematics, neuroscience, and intelligent systems: the distinction between representation and structure. The strings “1/2” and “2/4” are syntactically distinct representations. They are written differently, occupy different bits of memory, and consist of different numerals. Yet, mathematics declares them equal, a single invariant object we call “one-half.”
This distinction is the central challenge for any system that must process information. How can a system recognize that two different patterns in its internal state refer to the same external reality? How can a biological or artificial agent maintain a stable function—a stable identity—when its own internal components are constantly changing, drifting, or reconfiguring?
H3: Thesis: From Rational Numbers to Morphogenetic Compute
This article formalizes the distinction between representation and structure using the mathematical logic of quotient spaces. We will demonstrate that the same logic that defines rational numbers as equivalence classes (S=R/E) governs how vectors, groups, and dynamical systems are defined. We then show how this principle reappears in neural population codes, representational manifolds, and adaptive computation.
Our core thesis is that for any intelligent system, and especially for Morphogenetic Compute or ISL (In-situ Learning) systems, this distinction must be engineered. A robust system must be able to change its representational substrate (R) without violating specified structural invariants (S). We will derive a formal correctness contract for this process and outline a practical protocol for designing ISL systems that maintain functional stability under representational drift.
H3: Article Structure and Key Definitions (R, S, E, π, f)
This article develops the argument in stages. We first formalize the “1/2 = 2/4” problem as a quotient space. We then generalize the schema, show its parallels in neuroscience, and outline a protocol for ISL systems.
To maintain rigor, we will use the following notation, which is locked for this discussion:
- R (Representation Space): The space of all possible encodings. These can be tokens, coordinates, neural population states, or specific hardware realizations.
- S (Structural Space): The space of invariants, quotients, or canonical task-relevant content. This is the space of “meanings.”
- E (Equivalence Relation): The rule on R that defines when two representations are “the same.”
- π:R→S (Quotient Map): The map that sends each representation r∈R to its structural identity π(r)=[r]E in S. Thus, S=R/E.
- Rc⊂R (Canonical Subspace): A set of preferred, unique representatives, with exactly one representative for each equivalence class.
- f:R→Rc (Canonizer): A decoder or reduction function that selects the unique canonical representative for any given r. It implements the quotient map, satisfying f(r)=f(r′)⇔π(r)=π(r′).
H2: The Problem: Defining Structural Identity
H3: Formalizing the Rational Number Example
We begin by formally defining our motivating problem. The strings “1/2” and “2/4” differ as representations in R. Mathematics declares them equal in S. This is achieved by defining the space of rational numbers not as single fractions, but as a set of equivalence classes.
Definition. A rational number is an equivalence class of ordered pairs (a,b) from Z×(Z∖{0}) (pairs of integers where b is non-zero), under the equivalence relation E:
(a,b)∼(c,d)⇔a⋅d=b⋅c
Under this rule, the pair (1, 2) is equivalent to (2, 4) because 1⋅4=2⋅2. The “number” 1/2 is therefore the entire equivalence class [(1,2)]E, which is an infinite set: {(1,2),(2,4),(−1,−2),(3,6),…}.
Equality in the structural space S means structural identity (membership in the same class), not syntactic equality in the representation space R. The invariant being preserved is the ratio a/b. This formal distinction is the foundation of structuralism in mathematics [3], [2].
H3: Proposition 1: The Role of the Canonical Form
While the structural object S is the entire class, it is computationally and conceptually useful to pick one “best” representative for it. This is the role of the canonical subspace Rc and the canonizer f.
Proposition 1 — Canonical Form. Define Rc={(a,b)∈Z×Z:gcd(∣a∣,∣b∣)=1,b>0}. This is the set of all fractions in lowest terms with a positive denominator.
Define a canonizer function f(a,b)=(a/g,b/g), where g=sign(b)×gcd(∣a∣,∣b∣).
This function f maps every pair (a,b) to its unique reduced representative in Rc. For example, f(2,4)=(1,2) and f(−3,−6)=(1,2). The canonizer f provides a concrete implementation of the equivalence relation E:
(a,b)∼(c,d)⇔f(a,b)=f(c,d)
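Proposition 1 translates directly into code. The sketch below (Python, using only the standard library) implements the canonizer f exactly as defined above and uses it to decide equivalence:

```python
from math import gcd

def canonize(a: int, b: int) -> tuple:
    """f: reduce (a, b) to its unique representative in Rc:
    lowest terms with a positive denominator."""
    if b == 0:
        raise ValueError("denominator must be non-zero")
    g = gcd(abs(a), abs(b))
    if b < 0:
        g = -g          # fold sign(b) into g so the result has b > 0
    return (a // g, b // g)

def equivalent(p, q) -> bool:
    """(a, b) ~ (c, d)  iff  f(a, b) == f(c, d)."""
    return canonize(*p) == canonize(*q)

print(canonize(2, 4))              # (1, 2)
print(canonize(-3, -6))            # (1, 2)
print(equivalent((1, 2), (2, 4)))  # True
print(equivalent((1, 2), (2, 5)))  # False
```

Note that syntactic equality of tuples never appears in the equivalence test; only equality of canonical forms does.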
H3: Interpretation: Identity as an Equivalence Class
This small example reveals our core principle. Identity is not a coordinate; it is membership in a quotient class that preserves invariant content. The representation may vary; the structure persists. This is not just a mathematical curiosity. It is a universal schema for how information is organized, drawing from sources like [1] and [5].
H2: The Theory: The (R, E, π, S) General Schema
H3: Universal Invariance Under Transformation
The (R,E,π,S) schema is universal. The same logic applies whenever we must quotient out “nuisance” variability to get at a core, invariant structure.
H3: Application 1: Vector Spaces and Change of Basis (GL(n,R))
In linear algebra, the representation space R is Rn, the set of all coordinate vectors. The structural space S is the abstract vector space itself. The equivalence relation E is defined by the change-of-basis group G=GL(n,R), the group of all invertible n×n matrices.
Two coordinate vectors r1,r2∈R represent the same abstract vector s∈S if r1=g⋅r2 for some g∈G; the abstract vector s corresponds to the orbit G⋅r1 under the action of G. If we restrict G to the orthogonal group O(n) (rotations and reflections), the preserved invariants are fundamental geometric properties such as norms and inner products [11].
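The O(n) case is easy to check numerically. The sketch below (NumPy; the dimension and random seed are arbitrary) builds a random orthogonal change of basis via QR decomposition and confirms that norms and inner products survive it:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random orthogonal change of basis g in O(3), built via QR decomposition
g, _ = np.linalg.qr(rng.standard_normal((3, 3)))
v = rng.standard_normal(3)
w = rng.standard_normal(3)

# Norms and inner products are invariants of the O(n) action on coordinates
assert np.isclose(np.linalg.norm(g @ v), np.linalg.norm(v))
assert np.isclose((g @ v) @ (g @ w), v @ w)
```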
H3: Application 2: Klein’s Erlangen Program
In geometry, Felix Klein’s Erlangen Program (1872) proposed that a “geometry” (like Euclidean or Projective) is defined by a space X and a transformation group G. The true “objects” of that geometry are not the points themselves, but the properties of X that remain invariant under the action of G [10]. This is a direct application of the S=R/G (or S=X/G) principle.
H3: Application 3: Group Theory and Isomorphism
In abstract algebra, the structural space S is the set of group isomorphism classes. The representation space R is the set of all possible group presentations (e.g., using generators and relations). The quotient map π sends a specific presentation in R to the abstract group structure (its isomorphism class) in S. Two presentations are equivalent if they define isomorphic groups.
H3: Application 4: Dynamical Systems and Topological Conjugacy
In dynamics, we must ask when two systems are “the same.” Two dynamical systems (ϕt,X) and (ψt,Y) are considered structurally equivalent (topologically conjugate) if there exists a homeomorphism (a continuous map with a continuous inverse) h:X→Y such that h∘ϕt=ψt∘h. This h maps the orbits of one system to the orbits of the other, preserving the qualitative structure. The structural space S is the space of all dynamical systems modulo this conjugacy relation [39].
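A classical concrete instance is the conjugacy between the tent map and the logistic map on [0,1] via the homeomorphism h(x)=sin²(πx/2). The sketch below verifies h∘ϕ=ψ∘h numerically on a grid: two very different formulas, one structural identity in S.

```python
import numpy as np

def tent(x):       # tent map on [0, 1]
    return np.where(x < 0.5, 2 * x, 2 - 2 * x)

def logistic(x):   # logistic map on [0, 1]
    return 4 * x * (1 - x)

def h(x):          # the conjugating homeomorphism h: [0, 1] -> [0, 1]
    return np.sin(np.pi * x / 2) ** 2

x = np.linspace(0.0, 1.0, 1001)
# h(tent(x)) == logistic(h(x)): same orbit structure, different coordinates
assert np.allclose(h(tent(x)), logistic(h(x)))
```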
In all these cases, E specifies when two encodings in R represent the same structure in S, and π is the formal map that collapses this representational redundancy.
H2: Evidence and Examples: Invariance in Biological Systems
H3: The Neural Parallel: Stable Function from Unstable States
This abstract mathematical framework finds a direct and critical application in neuroscience. Neural systems exhibit massive trial-to-trial variability. The exact spike train from a population of neurons in response to the same stimulus is never identical. Furthermore, the representations themselves “drift” over time [18].
Despite this, perception and behavior remain remarkably stable. This is only possible if the brain implements the same “1/2 = 2/4” principle. Different neural states (representations) in R can encode the same percept, plan, or value in S.
Formally, the system must operate such that π(r1)=π(r2) even when r1≠r2 in R. A downstream “decoder” f must be invariant to these superficial differences, extracting only the stable, task-relevant structure.
H3: Example 1: Motor Control and Redundancy
In motor control, the representation space R is the high-dimensional space of all possible muscle activation patterns. The structural space S is the much lower-dimensional task-space, such as the 3D trajectory of a hand. There are many (infinitely many) redundant muscle patterns in R that can yield the exact same end-effector trajectory in S [19]. The brain’s task is to find one sufficient pattern, not one specific, unique pattern. The invariant is the task-space kinematics [17].
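The redundancy is easy to exhibit in a linearized toy model (a sketch; the task map A, its dimensions, and all numbers are hypothetical): any two muscle patterns that differ by a null-space vector of the task map produce identical task-space output.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear "arm": 6 muscle activations (a point in R) map to a
# 3-D task-space output (a point in S). A and m are hypothetical.
A = rng.standard_normal((3, 6))
m = rng.standard_normal(6)

# A pattern differing by a null-space vector of A is a different
# representation with the same structural identity:
_, _, Vt = np.linalg.svd(A)
null_vec = Vt[-1]                   # a unit vector in the null space of A
m2 = m + 5.0 * null_vec
assert np.allclose(A @ m, A @ m2)   # same trajectory in S
assert not np.allclose(m, m2)       # different pattern in R
```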
H3: Example 2: Visual Object Constancy
In vision, R is the space of all possible retinal images. S is the space of object identities (e.g., “cat,” “chair”). Countless images in R—differing in pose, lighting, scale, or position—all map to the single structural identity “cat” in S. Hierarchical processing in the visual cortex (e.g., [14]) can be seen as a cascade of transformations that explicitly quotient out this nuisance variation, preserving only the identity-relevant information [15], [22].
H3: Example 3: Spatial Cognition and Place-Cell Remapping
In spatial cognition, hippocampal place cells form a map of an environment [23]. This map (R) is known to “remap” or “drift” over days, meaning the specific neurons that fire for a given location change. However, the animal’s ability to navigate (S) remains stable. Downstream decoders must therefore update to track this representational drift, maintaining a stable mapping from R to S [24].
H2: Potential Objections and Counterarguments
This (R,S,E) framework is powerful, but it raises immediate practical and theoretical objections.
H3: Objection 1: The “Black Box” Problem (Is f learnable?)
The entire framework hinges on the existence of a robust canonizer, f. We have assumed f can be engineered or learned. However, finding f is equivalent to solving the neural decoding problem, which is notoriously difficult, especially in high-dimensional, low-data regimes. If R is a 10⁶-dimensional neural state space, identifying the correct equivalence relation E and learning a robust, generalizable map f:R→Rc is a massive, and perhaps intractable, learning problem [26].
H3: Objection 2: Computational Cost and Tractability
Assuming f is known, what is the computational cost of enforcing this framework? A system must either continuously compute f(r) or, more demandingly, ensure that any internal reconfiguration Φ:R→R obeys the correctness contract (discussed next). Verifying I(π(r))=I(π(Φ(r))) for all r and all invariants I could be computationally prohibitive. The information-geometric complexity of navigating R while constrained to an invariant submanifold in S may be non-trivial [30].
H3: Objection 3: Non-Stationary Tasks (Evolving Invariants)
This framework implicitly assumes that the structural space S and the invariants I are fixed. It is a framework for stability. What happens in a non-stationary task, where the goal itself changes? In this case, the system must not only adapt its representations in R but also modify the equivalence relation E and the set of invariants I. This is a problem of continual learning or active inference, where the agent must infer its own structural rules, not just its state within them [36].
H2: Synthesis: A Protocol for Morphogenetic Compute
Despite these objections, the (R,S,E) framework provides the most rigorous foundation for designing systems that can adapt. The synthesis of this framework is a practical design pattern.
H3: The Formal Bridge: Group Actions, Orbits, and Decoders
We can formalize the equivalence relation E using the mathematics of group actions. Let a symmetry group G act on R. The orbits of this action, G⋅r={g⋅r∣g∈G}, are the sets of all elements in R that are equivalent to r. These orbits are the equivalence classes. The structural space S is the quotient space S=R/G.
In our rational number example, the equivalence on R=Z×(Z∖{0}) is generated by a monoid (not quite a group, as it lacks inverses) action: k⋅(a,b)=(ka,kb) for any non-zero integer k. The orbit of (1, 2) is the set {(k,2k)∣k∈Z∖{0}}, which is precisely the equivalence class for 1/2. The canonizer f is the function that reduces each orbit to its unique “lowest-term” representative in Rc.
This provides our design pattern: specify invariance as a quotient.
- R: The representation manifold.
- E: The admissible equivalence relation (generated by a group G or monoid M).
- S=R/E: The structural space.
- f: The canonizer implementing the quotient map π.
The guarantee we need is: f(r1)=f(r2) if and only if r1∼r2 (i.e., they lie in the same orbit).
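For the rational-number case this guarantee can be checked directly. The minimal sketch below generates part of the orbit of (1, 2) under the monoid action and confirms that the canonizer is constant on it:

```python
from math import gcd

def act(k, r):
    """Monoid action on R = Z x (Z \\ {0}): k . (a, b) = (ka, kb), k != 0."""
    a, b = r
    return (k * a, k * b)

def f(r):
    """Canonizer: the unique lowest-terms, positive-denominator representative."""
    a, b = r
    g = gcd(abs(a), abs(b)) * (1 if b > 0 else -1)
    return (a // g, b // g)

# Part of the orbit of (1, 2): {(k, 2k) | k nonzero}
orbit = [act(k, (1, 2)) for k in range(-5, 6) if k != 0]

# The guarantee: f is constant on the orbit, collapsing it to one point of S
assert all(f(r) == (1, 2) for r in orbit)
```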
H3: Defining the Correctness Contract for ISL Systems
We can now state the core requirement for ISL or morphogenetic compute. These systems are defined by their ability to self-reconfigure, for example, by changing their architecture or synaptic weights via a process Φ:R→R.
The system remains “the same agent” only if this reconfiguration preserves the defined structural invariants.
Let I={I1,…,Ik} be a set of invariant-producing functions, where each Ij:S→Vj maps the structural state to some observable value (e.g., a behavior, a decoded value, a conserved quantity).
The Correctness Contract: For any admissible reconfiguration Φ:R→R, and for all r∈R, the following must hold for every invariant I∈I:
I(π(r))=I(π(Φ(r)))
Any reconfiguration Φ that violates this contract is a geometric constraint violation. It means the system’s “re-wiring” has broken its “meaning.”
H3: A 5-Step Practical Protocol for ISL Design
This formal contract yields a 5-step practical protocol for designing robust ISL systems.
- Declare Invariants. Explicitly identify the task-level properties I={Ij} that must remain fixed. These could be decoder outputs, conserved energies, physical constraints, or policy commitments.
- Model Symmetries. Define the transformations (a group G, monoid M, or explicit relation E) that specify admissible reconfigurations. What kinds of changes to R are allowed?
- Engineer the Quotient. Implement a decoder f:R→Rc that produces canonical forms. This function f is the computational implementation of the quotient. Verify that f(r1)=f(r2) whenever r1∼r2.
- Constrain Adaptation. Design the adaptation/learning mechanisms (Φ) to be aware of the invariants. Apply regularizers or architectural guards that explicitly block or penalize any update Φ that would violate the correctness contract.
- Probe Stability. Empirically test the system. Inject perturbations Δr into the representation space (e.g., r′=r+Δr) and confirm that f(r)=f(r′).
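As a toy end-to-end illustration of steps 2, 4, and 5 (a sketch: the population-vector decoder, cosine tuning curves, and all numbers are hypothetical), consider a population code for movement direction. A global gain change is an admissible reconfiguration Φ that the decoder f quotients out, and small perturbations Δr leave the decoded invariant stable:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 32
prefs = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)  # preferred directions

def decode(r):
    """Canonizer f: population state in R -> decoded direction, the invariant I."""
    return np.arctan2(r @ np.sin(prefs), r @ np.cos(prefs))

r = 1.0 + np.cos(prefs - 0.7)    # tuning-curve response to direction 0.7 rad

# Step 2: a global gain change is an admissible reconfiguration Phi ...
phi = lambda state: 3.0 * state
# Step 4: ... and it satisfies the contract I(pi(r)) == I(pi(Phi(r)))
assert np.isclose(decode(r), decode(phi(r)))

# Step 5: probe stability by injecting a perturbation Delta r and re-verifying
r_pert = r + 0.01 * rng.standard_normal(n)
assert abs(decode(r_pert) - decode(r)) < 0.05
```

An inadmissible Φ, such as a strong bias added to a single neuron, would shift the decoded direction and flag a violation of the contract.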
H2: Implications: Why Explicit Invariance Governs Intelligence
H3: Correctness as Structural, Not Representational
The “1/2 = 2/4” problem matters because it forces us to redefine “correctness.” For an intelligent system, correctness is structural identity (in S), not representational equality (in R). A system that overfits to a specific representation is brittle. A system that operates on the quotient space S is robust.
H3: The Necessity of Canonicalization (f)
This framework forces the explicit engineering of the quotient map π and its computational implementation, the canonizer f. We can no longer treat the decoder as an afterthought; it is the central component that defines the structural space.
H3: The Test: Perturb R, Verify S
This provides a clear, formal test for robustness: perturb the representation space R and verify that the output of the canonizer f(r)—the structural identity in S—remains invariant. This is the experimental probe for “geometric constraint violations.”
H3: Failure Mode: Functional Collapse via Drift
Without an explicit invariance specification, representational drift will inevitably lead to the collapse of functional identity. If R drifts and f does not co-adapt, the mapping is broken. The system will fail. The (R,S,E) framework is not just a descriptive tool; it is a prescriptive design requirement for any intelligence that must endure.
H2: Conclusion
H3: Summary of the Argument
We began with the simple equation 1/2=2/4. We formalized this as a quotient space S=R/E, where “identity” is defined by membership in an equivalence class, not by a specific representational token. We demonstrated that this (R,E,π,S) schema is a universal principle of information organization, applying equally to mathematics, geometry, dynamics, and neuroscience.
This led us to a concrete design protocol for morphogenetic compute (ISL). A robust, adaptive system must be built upon an explicit “correctness contract” that ensures functional invariants I are preserved even as the system’s internal representations Φ:R→R reconfigure. The key components are the explicit definition of invariants and the engineering of a canonizer f that implements the quotient.
H3: Final Outlook
This discussion has focused on population-level invariants, not the specific molecular mechanisms that might implement them. We have assumed that a robust, computable canonizer f can be engineered or learned, though we noted this is a significant challenge.
Future work must formalize the co-design of the controller (which adapts Φ) and the decoder (which implements f). The central task is to create systems that can certify their own invariants during online plasticity and even under conditions of partial hardware failure or reconfiguration. The “1/2 = 2/4” problem is, in effect, the challenge of building a mind that knows what it is, even as the “stuff” it is made of changes.
H3: Assumptions
- Functional identity is structural, not representational.
- Admissible reconfigurations Φ and equivalence relations E can be modeled.
- A robust, computable canonizer f exists.
H3: Limits
- Does not address non-stationary tasks where invariants evolve.
- Excludes energetic and molecular constraints.
H3: Testable Predictions
- ISL systems with explicit invariance specs (I) will maintain stable performance under representational drift.
- Violations will manifest as unstable decoding, motor errors, or symbolic fission—forms of geometric constraint violation.
- Information-geometric regularizers penalizing movement in R orthogonal to invariant submanifolds will improve robustness.
H2: References
[1] S. Mac Lane, Categories for the Working Mathematician, Springer, 1971.
[2] N. Bourbaki, Elements of Mathematics: Theory of Sets, Springer, 2004.
[3] P. Benacerraf, “What numbers could not be,” Philosophical Review, 74(1), pp. 47–73, 1965.
[4] S. Awodey, “Structure in mathematics and logic: A categorical perspective,” Philosophia Mathematica, 4(3), pp. 209–237, 1996.
[5] S. Awodey, Category Theory, Oxford Univ. Press, 2010.
[6] F. W. Lawvere and S. H. Schanuel, Conceptual Mathematics, Cambridge Univ. Press, 1997.
[7] T. E. Forster, Sets, Classes, and Categories, Oxford Univ. Press, 1992.
[8] H. Weyl, Symmetry, Princeton Univ. Press, 1952.
[9] E. Noether, “Invariante Variationsprobleme,” Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, 1918.
[10] F. Klein, “Vergleichende Betrachtungen über neuere geometrische Forschungen,” 1872.
[11] S. Lang, Algebra, Springer, 2002.
[12] A. Grothendieck, Pursuing Stacks, manuscript, 1983.
[13] B. C. Pierce, Basic Category Theory for Computer Scientists, MIT Press, 1991.
[14] D. H. Hubel and T. N. Wiesel, “Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex,” J. Physiol., 160, pp. 106–154, 1962.
[15] E. P. Simoncelli and B. A. Olshausen, “Natural image statistics and neural representation,” Annu. Rev. Neurosci., 24, pp. 1193–1216, 2001.
[16] B. A. Olshausen and D. J. Field, “Sparse coding of sensory inputs,” Curr. Opin. Neurobiol., 14, pp. 481–487, 2004.
[17] M. M. Churchland et al., “Neural population dynamics during reaching,” Nature, 487, pp. 51–56, 2012.
[18] M. D. Golub et al., “Learning by neural reassociation,” Nat. Neurosci., 21, pp. 607–616, 2018.
[19] P. N. Sabes, “The planning and control of reaching movements,” Curr. Opin. Neurobiol., 10, pp. 740–746, 2000.
[20] N. Kriegeskorte, M. Mur, and P. Bandettini, “Representational similarity analysis,” Front. Syst. Neurosci., 2, 2008.
[21] C. Cadieu et al., “Deep neural networks rival the representation of primate IT cortex,” PLoS Comput. Biol., 10, e1003963, 2014.
[22] D. L. K. Yamins and J. J. DiCarlo, “Goal-driven deep learning models of sensory cortex,” Nat. Neurosci., 19, pp. 356–365, 2016.
[23] J. O’Keefe and J. Dostrovsky, “The hippocampus as a spatial map,” Brain Res., 34, pp. 171–175, 1971.
[24] E. I. Moser, E. K. Moser, and J. O’Keefe, “Spatial representation in the hippocampal formation,” Annu. Rev. Neurosci., 31, pp. 69–89, 2008.
[25] K. D. Harris et al., “Temporal structure in hippocampal output,” Nature, 404, pp. 552–556, 2000.
[26] L. Paninski and J. P. Cunningham, “Neural dimensionality reduction,” Curr. Opin. Neurobiol., 37, pp. 70–77, 2016.
[27] J. P. Cunningham and B. M. Yu, “Dimensionality reduction for large-scale neural recordings,” Nat. Neurosci., 17, pp. 1500–1509, 2014.
[28] B. M. Yu et al., “Gaussian-process factor analysis for low-dimensional single-trial analysis,” NIPS, 2009.
[29] S.-I. Amari, “Information geometry on hierarchy of neural nets,” Neural Networks, 1, pp. 75–86, 1988.
[30] S.-I. Amari, Information Geometry and Its Applications, Springer, 2016.
[31] G. Carlsson, “Topology and data,” Bull. Amer. Math. Soc., 46, pp. 255–308, 2009.
[32] K. Fukushima, “Neocognitron,” Biol. Cybern., 36, pp. 193–202, 1980.
[33] D. P. Kingma and M. Welling, “Auto-encoding variational Bayes,” arXiv:1312.6114, 2013.
[34] N. Tishby and N. Zaslavsky, “Deep learning and the information bottleneck,” Proc. ITW, 2015.
[35] K. Friston, “The free-energy principle,” Nat. Rev. Neurosci., 11, pp. 127–138, 2010.
[36] K. Friston et al., “Active inference and learning,” Neurosci. Biobehav. Rev., 68, pp. 862–879, 2016.
[37] R. Bogacz, “A tutorial on the free-energy framework,” J. Math. Psychol., 76, pp. 198–211, 2017.
[38] J. Tschantz et al., “Scaling active inference,” Entropy, 22, 2020.
[39] R. Abraham and J. E. Marsden, Foundations of Mechanics, Benjamin/Cummings, 1978.
[40] V. I. Arnold, Mathematical Methods of Classical Mechanics, Springer, 1989.
[41] S. Watanabe, “Algebraic topology and learning systems,” IEEE Trans. Syst. Man Cybern., 6(7), pp. 445–452, 1976.
[42] L. Buesing, J. H. Macke, and M. Sahani, “Learning stable dynamical systems in the brain,” Curr. Opin. Neurobiol., 58, pp. 86–94, 2019.
[43] F. W. Lawvere, “Functorial semantics of algebraic theories,” Proc. Natl. Acad. Sci. USA, 50(5), pp. 869–872, 1963.
[44] S. I. Amari and H. Nagaoka, Methods of Information Geometry, AMS, 2000.
[45] H. Whitney, “The singularities of smooth manifolds,” Ann. Math., 45, pp. 247–293, 1934.
