Geometrical Neuroscience Laboratory

Mapping the Curvature, Topology, and Dynamics of Neural Function

The Geometrical Neuroscience Laboratory investigates the brain as a geometric system. We study how neural architecture embodies curvature, topology, and information flow. We treat neural organization not as circuitry but as a metric object that evolves in time.

We begin from a strict claim: structure and function are isomorphic. The geometry of the system is the computation of the system. Thought, learning, and perception are expressed as transformations of that geometry. Our work tries to write those transformations down.

Mathematical Foundations of Geometric Neural Dynamics

This section makes explicit the equations and assumptions the lab treats as primary.

1. Coupled Flow Dynamics

The core geometric evolution law we study is:

∂t g_ij = -2 ( R_ij + λ I_ij )

Definitions:

g_ij(t)
The neural manifold metric at time t. It encodes which neural states are "close" or "far" in functional terms.

R_ij
The Ricci curvature tensor of that manifold. This term smooths and regularizes the geometry. It enforces internal coherence.

I_ij
The information-geometric term. We define I_ij as the Fisher information metric:

I_ij = E[ (∂ log p(x|θ) / ∂θ_i) * (∂ log p(x|θ) / ∂θ_j) ]

I_ij measures statistical sensitivity.

It favors encodings that are efficient, discriminative, and low-redundancy.
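The definition above can be checked numerically in the simplest case. The sketch below estimates I_μμ for a univariate Gaussian p(x|μ, σ) with σ known, where the score is ∂ log p/∂μ = (x − μ)/σ² and the closed form is 1/σ²; this is a generic sanity check, not lab code.

```python
import numpy as np

# Monte Carlo check of the Fisher metric for a Gaussian with sigma known:
# the score is d/dmu log p(x | mu, sigma) = (x - mu) / sigma**2, and the
# closed-form Fisher information is I_mumu = 1 / sigma**2.
rng = np.random.default_rng(0)
mu, sigma = 1.5, 2.0
x = rng.normal(mu, sigma, size=200_000)

score = (x - mu) / sigma**2          # per-sample score in the mu direction
I_mc = np.mean(score**2)             # Monte Carlo estimate of E[score^2]

print(I_mc, 1 / sigma**2)            # estimate should be close to 0.25
```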

λ
A coupling constant. It controls how strongly information-theoretic pressure (I_ij) influences geometric reconfiguration relative to intrinsic curvature (R_ij).

Interpretation:

  • R_ij is homeostatic. It pushes the manifold toward smooth, globally consistent organization.
  • I_ij is adaptive. It pushes the manifold toward statistically efficient codes.
  • λ balances stability vs adaptation.

So the lab studies the brain as a system whose internal geometry flows under a joint Ricci–Fisher influence. Biology is modeled as a curvature flow.
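The flow has a closed-form solution in its simplest instance, which makes a useful numerical anchor. On a round 2-sphere with metric g = s·g_unit (s the radius squared), Ric = (1/s)·g, so the Ricci part reduces to ds/dt = −2. Modeling the information term as I = c·g is an illustrative assumption here, not the lab's pullback construction:

```python
# Smallest exactly solvable instance of dg/dt = -2 (Ric + lambda I):
# round 2-sphere, g = s * g_unit, Ric = (1/s) g, so the Ricci part is
# ds/dt = -2.  With the (purely illustrative) choice I = c * g the flow
# becomes ds/dt = -2 * (1 + lam * c * s).

def flow_s(s0, lam, c, dt, steps):
    """Forward-Euler integration of ds/dt = -2 * (1 + lam * c * s)."""
    s = s0
    for _ in range(steps):
        s += dt * (-2.0 * (1.0 + lam * c * s))
    return s

# With lam = 0 this is pure Ricci flow: the exact solution is
# s(t) = s0 - 2t, a sphere shrinking at constant rate in s.
s_num = flow_s(s0=4.0, lam=0.0, c=1.0, dt=1e-3, steps=1000)
print(s_num)  # approximately 2.0 (exact: 4 - 2 * 1)
```

Setting λ > 0 then shows directly how the information term reshapes the trajectory relative to the pure curvature flow.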

Technical mapping note:

The Fisher metric I_ij is defined in information space (parameters θ of an internal model). We project that onto neural state space via a pullback. In other words, we map how informational efficiency pressures act back on the physical manifold (M, g_t).
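Concretely, if φ maps neural states z to model parameters θ with Jacobian J, the pullback of the Fisher metric to state space is JᵀIJ. The sketch below uses a hypothetical linear readout φ(z) = Az and the Gaussian family θ = (μ, log σ), whose Fisher metric is diag(1/σ², 2); the readout A is an illustrative assumption.

```python
import numpy as np

# Pullback of the Fisher metric I(theta) along a map phi with Jacobian J:
# the induced metric on neural state space is J^T I J.

def pullback_metric(J, I_theta):
    """Pullback of the metric I_theta along a map with Jacobian J."""
    return J.T @ I_theta @ J

# Gaussian family theta = (mu, log sigma): Fisher metric diag(1/sigma^2, 2).
sigma = 2.0
I_theta = np.diag([1.0 / sigma**2, 2.0])

A = np.array([[1.0, 0.5],      # hypothetical linear readout phi(z) = A z
              [0.0, 1.0]])
G = pullback_metric(A, I_theta)
print(G)                        # a symmetric metric on neural state space
```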

2. Variational Free Energy Functional

We make “free energy minimization” explicit as a geometric descent.

We define a functional over the metric g:

F[g] = ∫_M [ R(g) + λ * D_KL( p_brain || p_world ) ] dμ_g

Definitions:

R(g)
The scalar curvature of the neural manifold. High R(g) reflects geometric stress.

D_KL( p_brain || p_world )
The divergence between the brain’s internal generative model and the actual sampled environment. This is model–world mismatch, i.e. surprise.

dμ_g
The volume element induced by g. This makes F[g] depend on the geometry itself.

Then we define the update law as gradient descent on F, with ∇ denoting the functional derivative δF/δg:

g_dot = - ∇ F[g]

Meaning:

  • The brain changes its own geometry g to reduce two things:
    1. internal geometric distortion (R(g)),
    2. model–world mismatch (D_KL).
  • The manifold relaxes toward a state that is both smoother and better aligned with reality.

This gives a concrete reading for predictive coding:
the system is not just updating beliefs, it is literally reshaping its state space to make prediction cheaper.
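A scalar caricature makes the descent concrete: collapse the metric to one positive scale s (think s = r² of a round sphere, so R(s) = 2/s) and stand in a quadratic penalty for the KL mismatch toward a "world-preferred" scale. Both functional forms are illustrative assumptions, not the lab's model.

```python
# Scalar toy of g_dot = -grad F[g]: curvature relief (2/s) vs a quadratic
# stand-in for model-world mismatch.  Illustrative forms only.

lam, s_world = 1.0, 3.0

def F(s):
    return 2.0 / s + lam * (s - s_world) ** 2

def dF(s):
    return -2.0 / s**2 + 2.0 * lam * (s - s_world)

s, lr = 1.0, 0.05
for _ in range(2000):
    s -= lr * dF(s)          # Euler step of the gradient flow

# The geometry settles where curvature relief and mismatch reduction
# balance: the root of s**3 - s_world * s**2 - 1 = 0, near s = 3.1.
print(s, F(s))
```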

3. Structure–Function Isomorphism

The lab uses “structure and function are isomorphic” in a strict technical sense.

We define a correspondence between two levels:

Connectome / network structure
→ Neural dynamical system on a manifold

Formally:

F : Graph_conn → DynSys_neural

Interpretation:

Graph_conn
Objects are structural or functional graphs (connectomes).
Morphisms are allowable rewiring operations.

DynSys_neural
Objects are dynamical systems evolving on manifolds with metrics g_t.
Morphisms are geometric flows that preserve key invariants.

F
Preserves invariants such as homology class (Betti structure) and curvature
profile. That is:

F( H_k(G) ) ≅ H_k( Φ_t(G) )

where H_k is the k-th homology group, and Φ_t is the corresponding neural flow.

Meaning:

  • A change in structural topology is mirrored by a change in functional dynamics.
  • Functional reconfiguration (how the system processes information) can be read back into structural terms.
  • “Surprise is curvature; learning is curvature reduction” becomes literal: whenever
    d/dt D_KL( p_brain || p_world ) < 0,
    local curvature in information space is being flattened, and the system is aligning function with structure.

So the isomorphism claim is not poetic. It is a claim of categorical preservation: topology ↔ dynamics.
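The homology-preservation condition can be checked at the graph level. For a graph viewed as a 1-dimensional complex, β₀ counts connected components and β₁ = |E| − |V| + β₀ counts independent cycles; the "rewiring" below is a hypothetical allowable morphism, shown to preserve both invariants.

```python
# Betti numbers of a graph (1-dimensional complex) via union-find:
# b0 = number of connected components, b1 = |E| - |V| + b0.

def betti(n, edges):
    """Betti numbers (b0, b1) of a graph on nodes 0..n-1."""
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    b0 = len({find(v) for v in range(n)})
    return b0, len(edges) - n + b0

square = [(0, 1), (1, 2), (2, 3), (3, 0)]        # one loop
rewired = [(0, 2), (2, 1), (1, 3), (3, 0)]       # rewired, still one loop
print(betti(4, square), betti(4, rewired))       # (1, 1) (1, 1)
```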

Hypothesis-Driven Research Programs

We present these as explicit hypotheses rather than topics.

1. Curvature of the Connectome

Hypothesis:
Hyperbolic (negatively curved) regions of the connectome, quantified using Ollivier–Ricci curvature, are positively associated with fluid intelligence.
Reasoning: negative curvature supports efficient specialization and segregated parallel processing.

Method:
Estimate local curvature on structural and functional graphs (HCP, ABCD). Regress curvature features against cognitive efficiency measures.
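Ollivier–Ricci curvature of an edge (u, v) compares the neighbor distributions of its endpoints: κ = 1 − W₁(m_u, m_v), with m_x uniform on the neighbors of x and W₁ the 1-Wasserstein distance under the shortest-path metric. A minimal exact sketch, assuming scipy for the transport linear program; HCP/ABCD-scale graphs would need a dedicated solver and a choice of laziness parameter:

```python
import numpy as np
from collections import deque
from scipy.optimize import linprog

def bfs_dist(adj, src):
    """Shortest-path distances from src in an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return dist

def ollivier_ricci(adj, u, v):
    """kappa(u, v) = 1 - W1(m_u, m_v) with uniform neighbor measures."""
    nu, nv = sorted(adj[u]), sorted(adj[v])
    d = {a: bfs_dist(adj, a) for a in nu}
    n, m = len(nu), len(nv)
    cost = np.array([d[a][b] for a in nu for b in nv], float)
    A_eq, b_eq = [], []
    for i in range(n):                      # row marginals: mass 1/n each
        row = np.zeros(n * m); row[i * m:(i + 1) * m] = 1.0
        A_eq.append(row); b_eq.append(1.0 / n)
    for j in range(m):                      # column marginals: mass 1/m
        col = np.zeros(n * m); col[j::m] = 1.0
        A_eq.append(col); b_eq.append(1.0 / m)
    w1 = linprog(cost, A_eq=np.array(A_eq), b_eq=b_eq).fun
    return 1.0 - w1

# Triangle: every edge is positively curved (kappa = 1/2).
K3 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(ollivier_ricci(K3, 0, 1))  # 0.5
```

By contrast, a long cycle has zero edge curvature, and tree-like (hyperbolic) neighborhoods come out negative, which is the regime the hypothesis targets.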

2. Predictive Morphogenesis

Hypothesis:
A neural field governed by:

g_dot = - ∇ F[g]

will self-organize into steady-state curvature profiles that match empirical connectome curvature distributions.

Method:
Simulate Ricci–Fisher flows using empirical priors. Compare the emergent curvature spectra to measured human data. If matched, geometry is sufficient to generate observed integration patterns.

3. Hybrid Morphogenetic Compute Platform

Hypothesis:
Curvature-regularized FPGA and Jetson Orin systems will display convergence dynamics analogous to biological learning trajectories.

Method:
Implement the coupled flow:

∂t g_ij = -2 ( R_ij + λ I_ij )

in reconfigurable hardware. Measure whether the system lowers an energy functional equivalent to F[g] over time. This tests physical realizability.

4. Ricci–Fisher Developmental Flow

Hypothesis:
Developmental changes in cognitive integration are driven by Ricci–Fisher curvature transitions across scales (structural → functional → symbolic).

Method:
Fit curvature parameters to longitudinal developmental neuroimaging and behavioral calibration data. Relate manifold smoothing and curvature reallocation to developmental stage shifts.

5. Topological Reconfiguration in Task Networks

Hypothesis:
Behavioral phase transitions under cognitive load correspond to bifurcations in Betti structure (appearance/disappearance of topological features in functional connectivity over time).

Method:
Use persistent homology to track Betti numbers across rapid task-switching. Link topological breakpoints to performance collapse or control recovery.
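A stripped-down version of this pipeline can be sketched by sweeping a threshold over a connectivity matrix and tracking graph Betti numbers at each level. Real analyses would use a persistent homology library (e.g. Ripser) over a full filtration; the matrix C below is fabricated purely for illustration.

```python
import numpy as np

# Poor man's persistence: threshold a toy "functional connectivity"
# matrix at several levels and track (b0, b1) of the resulting graph.

def betti(n, edges):
    """Betti numbers (b0, b1) of a graph on nodes 0..n-1 via union-find."""
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    b0 = len({find(v) for v in range(n)})
    return b0, len(edges) - n + b0

C = np.array([[1.0, 0.9, 0.2, 0.8],      # fabricated illustration only
              [0.9, 1.0, 0.7, 0.1],
              [0.2, 0.7, 1.0, 0.6],
              [0.8, 0.1, 0.6, 1.0]])

for thr in (0.85, 0.65, 0.15):
    edges = [(i, j) for i in range(4) for j in range(i + 1, 4) if C[i, j] > thr]
    print(thr, betti(4, edges))   # components merge, then a loop is born
```

Topological breakpoints of the kind the hypothesis predicts would appear as abrupt changes in this (β₀, β₁) trajectory over the course of a task.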

Foundational Domains

Differential Geometry
Neural dynamics are modeled as curvature flow.

Topology
Learning is modeled as homological transition.

Information Theory
Entropy–curvature duality links model uncertainty to geometric deformation.

Control Theory
Homeostasis is modeled as curvature stabilization under feedback control.

Statistical Mechanics
The brain is modeled as a nonequilibrium field system under continuous energy minimization.