Holes in the Mind: Classifying Cognitive Tasks with Persistent Homology

H2: Introduction

H3: The Problem of Task Classification

Cognitive science faces a fundamental challenge in taxonomy: how do we objectively classify cognitive tasks? Historically, we have relied on scale-dependent behavioral metrics: reaction time, accuracy, or memory capacity. These measures, while useful, are descriptors of performance rather than process. They tell us how well or how fast a task was executed, but they fail to capture the underlying structure of the cognitive computations involved.

A decision-making task and a motor-control task may have identical reaction-time distributions but engage entirely different neural processes. Furthermore, analyzing the neural data associated with these tasks plunges us into the “curse of dimensionality.” An fMRI scan yields millions of voxel activations; EEG provides high-temporal-resolution data from dozens of channels. Simply comparing the magnitude of activation in these high-dimensional state spaces is computationally fraught and interpretively ambiguous.

H3: Thesis: Cognition is Classified by Topology, Not Metric

This post advances a formal thesis: The fundamental properties of cognitive tasks are not metric but topological. Cognition traces trajectories through high-dimensional representational manifolds. The functional organization of these tasks is encoded not by the size or variance (the metric) of these trajectories, but by their shape—their structural features, such as connected components, loops, and voids.

We posit that tasks can be robustly classified by their topological invariants, specifically those captured by Persistent Homology (PH). These “homological signatures” are stable, independent of specific coordinate systems, and robust to the noise and deformation inherent in biological data. In this framework, the differences between tasks emerge from the persistence and hierarchy of “holes” in their cognitive state-space. As we will argue, loops, not lengths, define function.

H2: The Problem: Scale-Dependent Metrics vs. Structural Invariants

H3: Limitations of Dimensionality and Magnitude

Traditional analyses of neural data—such as multivariate pattern analysis (MVPA) or graph-theoretic measures on functional connectivity (FC) matrices—are fundamentally geometric. They depend on distances, correlations, and magnitudes. This dependency creates a critical vulnerability: these metrics are highly sensitive to noise, transformations, and the specific embedding of the data. Two neural recordings of the same task may appear metrically different due to subject variability, sensor noise, or hemodynamic lag, even if their underlying computational structure is identical.

This problem is compounded by dimensionality. In a state space of millions of dimensions, the concept of “distance” itself becomes unstable. Furthermore, metrics derived from graph theory, such as clustering coefficient or path length, often fail to capture higher-order relationships within the network. They can describe connections between pairs of nodes, but not the coordinated activity of cliques or the presence of persistent, cyclical patterns of activation (Caputi et al., 2021).

H3: The Need for Invariant Descriptors

To build a robust taxonomy of cognition, we require descriptors that are invariant under common transformations. We need a mathematical language that describes structure separate from implementation. This is the precise function of algebraic topology. Topological invariants, such as Betti numbers, describe the essential “shape” of a space—the number of components, 1-dimensional loops, 2-dimensional voids, etc.

Persistent Homology (PH) provides a computable method for finding these invariants in real-world point-cloud data, such as neural recordings. By tracking how these topological features persist across changing spatial scales, PH distinguishes robust structural features from ephemeral noise. This provides a “topological signature” that is stable, concise, and ideal for classifying high-dimensional, noisy data (Wang et al., 2025).

H2: Theory: Formalizing Task-Space with Persistent Homology

H3: Representing Cognition: State Spaces (X_t) and Filtrations ({R_ϵ(X_t)}_(ϵ>0))

To apply this theory, we must first formalize our terms.

Key Term: Task State Space (X_t). We define the cognitive state at any moment as a point in a high-dimensional space. Let X_t ⊂ R^n denote the evolving point cloud of neural or representational states observed during a task T. Here, n could be the number of neurons, fMRI voxels, or derived features. The geometry of this point cloud is the task representation (Billings et al., 2021).

Key Term: Simplicial Complex. To analyze the shape of this discrete point cloud, we must connect the points. A common method is the Vietoris-Rips (VR) complex. We pick a distance ϵ and draw an edge between any two points closer than ϵ. We then fill in any k-simplex (triangle, tetrahedron, etc.) whose vertices are all pairwise connected.
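
To make the construction concrete, here is a minimal pure-Python sketch of a Vietoris-Rips complex at a single scale ϵ. The function name `vietoris_rips` and the toy square point cloud are illustrative, not drawn from any particular TDA library.

```python
import math
from itertools import combinations

def vietoris_rips(points, eps, max_dim=2):
    """Build the Vietoris-Rips complex of a point cloud at scale eps.

    A k-simplex is included exactly when all of its vertices are
    pairwise closer than eps. Returns a dict mapping dimension -> simplices.
    """
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    close = {(i, j) for i, j in combinations(range(n), 2) if dist(i, j) < eps}

    complex_ = {0: [(i,) for i in range(n)], 1: sorted(close)}
    for k in range(2, max_dim + 1):
        complex_[k] = [s for s in combinations(range(n), k + 1)
                       if all(pair in close for pair in combinations(s, 2))]
    return complex_

# Four corners of a unit square: at eps = 1.1 the four sides appear but
# the diagonals (length sqrt(2)) do not, so the complex is a hollow loop.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
cx = vietoris_rips(square, eps=1.1)
print(len(cx[1]), len(cx[2]))  # 4 edges, 0 triangles
```

Note that no triangle is filled in, because every triangle on the square would need a diagonal edge; this is exactly the "hollow loop" that H1 will detect.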

Key Term: Filtration ({R_ϵ(X_t)}_(ϵ>0)). We do not know the “correct” ϵ in advance. A filtration solves this by building a nested sequence of complexes, R_ϵ(X_t), across all values of ϵ > 0. This is analogous to “zooming out” from the data. At ϵ = 0, the space is just a dust of disconnected points. As ϵ grows, points connect, forming components and loops. As ϵ grows larger still, these loops are filled in and eventually the entire space merges into a single component. This dynamic process of building the complex is the filtration (Rieck et al., 2020).
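
The “zooming out” behavior can be illustrated by counting connected components at a few scales. The sketch below uses a small union-find structure; all names are illustrative.

```python
import math
from itertools import combinations

def components_at_scale(points, eps):
    """beta_0 of the Vietoris-Rips complex at scale eps, via union-find."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    # Merge the components of every pair joined by an edge at this scale.
    for i, j in combinations(range(len(points)), 2):
        if math.dist(points[i], points[j]) < eps:
            parent[find(i)] = find(j)
    return len({find(i) for i in range(len(points))})

# Two well-separated pairs of points: the nested complexes go from a
# "dust" of 4 components, to 2 clusters, to a single merged component.
cloud = [(0, 0), (0, 1), (10, 0), (10, 1)]
print([components_at_scale(cloud, e) for e in (0.5, 2.0, 20.0)])  # [4, 2, 1]
```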

H3: The Homological Signature: Persistence Diagrams (D_T)

Key Term: Persistent Homology (PH). As we build the filtration, we use homology to track the “birth” and “death” of topological features. A new “birth” occurs when a feature (like a loop) first appears at a given ϵ. A “death” occurs when that feature is filled in at a larger ϵ.

Key Term: Betti Numbers (β_k). These numbers quantify the features at a single ϵ. β_0 is the number of connected components. β_1 is the number of 1-dimensional loops (or “holes”). β_2 is the number of 2-dimensional voids (or “cavities”).
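
For a complex built only of vertices and edges (a graph), the first two Betti numbers can be computed directly: β_0 by union-find, and β_1 from the Euler-characteristic identity β_1 = E − V + β_0. A minimal sketch (illustrative names):

```python
def betti_numbers_graph(n_vertices, edges):
    """Betti numbers of a graph (a simplicial complex of dimension 1).

    beta_0 counts connected components (union-find); beta_1, the number
    of independent loops, follows from the Euler characteristic:
    beta_1 = E - V + beta_0.
    """
    parent = list(range(n_vertices))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in edges:
        parent[find(i)] = find(j)
    beta0 = len({find(v) for v in range(n_vertices)})
    beta1 = len(edges) - n_vertices + beta0
    return beta0, beta1

# A 4-cycle (one loop) plus one isolated vertex:
print(betti_numbers_graph(5, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # (2, 1)
```

For complexes with triangles and higher simplices, β_1 additionally requires quotienting out the loops that triangles fill in, which is what full homology computations (and TDA libraries) handle.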

Key Term: Persistence Diagram (D_T). This is the formal homological signature of the task. It is a 2D plot, D_T = {(birth_i, death_i)}, where each point represents a single topological feature i. Features with a short lifespan (death ≈ birth) lie near the diagonal and are considered topological “noise.” Features with a long lifespan (death ≫ birth) lie far from the diagonal and represent robust, persistent structures. This diagram is the stable, invariant descriptor we seek (Rieck et al., 2023).
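
For H0, the full persistence diagram can be computed exactly with Kruskal’s minimum-spanning-tree algorithm: every component is born at ϵ = 0, and one dies at each edge length that merges it into another. A self-contained sketch, with illustrative names:

```python
import math
from itertools import combinations

def h0_persistence(points):
    """H0 persistence diagram of a point cloud under the VR filtration.

    Every component is born at eps = 0 and dies when it merges into
    another -- exactly at the minimum-spanning-tree edge lengths
    (Kruskal's algorithm with union-find). One component never dies.
    """
    n = len(points)
    edges = sorted((math.dist(points[i], points[j]), i, j)
                   for i, j in combinations(range(n), 2))
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    diagram = []
    for eps, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            diagram.append((0.0, eps))   # a component dies at this scale
    diagram.append((0.0, math.inf))      # the component that survives
    return diagram

# Two tight clusters far apart: the short bars near the diagonal are
# within-cluster "noise"; the long finite bar is the persistent
# two-cluster structure.
cloud = [(0, 0), (0.1, 0), (5, 0), (5.1, 0)]
finite = sorted(d for _, d in h0_persistence(cloud) if d < math.inf)
print([round(d, 3) for d in finite])  # [0.1, 0.1, 4.9]
```

Computing H1 and H2 diagrams requires boundary-matrix reduction and is best left to dedicated libraries such as GUDHI or Ripser.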

[FIGURE: Diagram of a filtration and persistence diagram. Caption: (A) A point cloud from a task state space. (B) A Vietoris-Rips filtration, showing complexes at increasing ϵ. A loop (an H1​ feature) is born at ϵ1​ and dies (is filled in) at ϵ2​. (C) The resulting persistence diagram, plotting (birth, death) for all features. The point (ϵ1​,ϵ2​) is far from the diagonal, indicating a persistent feature. Source: GNL illustrative graphic. Takeaway: Persistent homology quantifies the lifespan of topological features, distinguishing robust structure from noise.]

H3: The Formal Thesis: Classification in the Space of Signatures

Our formal thesis, provided by the Geometrical Neuroscience Lab, is as follows:

Tasks differ by their persistence diagrams D_T, not by the absolute geometry of X_t. Classification thus occurs in the space of homological signatures, where stable H1 features (loops) reflect recurrent dynamics, H2 features (voids) encode nested goal structures, and H0 features (components) capture cognitive modularity.

This is a profound shift. We are no longer comparing two massive, noisy point clouds (X_t vs. X_t′). Instead, we are comparing their compact, stable, topological summaries (D_T vs. D_T′).
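
Comparing signatures requires a metric on the space of diagrams; the standard choice is the bottleneck distance, which matches points between diagrams (or to the diagonal) and takes the largest displacement. The brute-force sketch below is didactic only and exponential in diagram size; real implementations use geometric matching algorithms.

```python
import math
from itertools import permutations

def bottleneck(dgm_a, dgm_b):
    """Brute-force bottleneck distance between two small persistence
    diagrams, given as lists of finite (birth, death) pairs.

    Each diagram is augmented with the diagonal projections of the
    other's points; we then minimize, over all matchings, the largest
    L-infinity displacement of any matched pair.
    """
    diag = lambda p: ((p[0] + p[1]) / 2,) * 2   # nearest diagonal point
    a = [(p, False) for p in dgm_a] + [(diag(p), True) for p in dgm_b]
    b = [(p, False) for p in dgm_b] + [(diag(p), True) for p in dgm_a]

    def cost(pa, qb):
        (p, p_on_diag), (q, q_on_diag) = pa, qb
        if p_on_diag and q_on_diag:
            return 0.0                           # diagonal-to-diagonal is free
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

    return min(max(cost(pa, qb) for pa, qb in zip(a, perm))
               for perm in permutations(b))

# Two signatures that differ only by a small perturbation of one feature:
d1 = [(0.0, 1.0)]
d2 = [(0.0, 1.1)]
print(round(bottleneck(d1, d2), 6))  # 0.1
```

The stability theorems of persistent homology guarantee that small perturbations of the input point cloud produce small bottleneck distances between diagrams, which is what makes classification in signature space robust to noise.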

H2: Evidence: Interpreting Homological Signatures in Cognition

This theoretical framework is not merely a mathematical curiosity; it has direct, testable interpretations that are increasingly supported by evidence from computational neuroscience.

H3: H0​ (Components) as Cognitive Modularity

The persistence of β0​ features (connected components) maps directly to the modular structure of cognitive processes. A state space that remains clustered into distinct components for a long range of ϵ represents a system with highly modular, segregated functional sub-networks. This has been used to analyze the “homological landscape” of brain functional sub-networks, identifying how different brain systems (e.g., visual, default mode) maintain their topological separation during tasks (Duong-Tran et al., 2024). In disease classification, changes in H0​ persistence can reflect a breakdown in this modularity, such as in Mild Cognitive Impairment (MCI) (Bhattacharya et al., 2025).

H3: H1​ (Loops) as Recurrent Dynamics

The most powerful interpretation, and the core of our thesis, relates to H1​ features. A persistent 1-dimensional loop in a task state space implies a recurrent dynamic. The system repeatedly visits a sequence of states, forming a topological circle rather than a simple trajectory. This is the mathematical signature of any process involving recurrence: working memory maintenance, central pattern generation for motor control, or cyclical emotional states.

Research has identified these “harmonic holes” as a key organizational feature of brain networks (Lee et al., 2018). Furthermore, the topological features of task-based functional connectivity (which are dominated by H1​ signatures) have been shown to be effective predictors of longitudinal behavioral change, linking these persistent loops directly to cognitive outcomes (Argiris et al., 2023). This supports the idea that the presence of recurrence (a loop) is a more fundamental classifier than the speed of that recurrence (a metric).

H3: H2​ (Voids) as Nested Goal Structures

Higher-order homology, such as H2​ (voids), is less intuitive but may be even more powerful. An H2​ feature represents a “cavity” or “void” in the state space. This can be interpreted as a set of nested or hierarchical constraints. For example, a complex task like navigating a maze may have many possible H1​ loops (paths), but these paths themselves are organized around a central “void” (the set of impossible locations or sequences).

This aligns with emerging work on “higher-order connectomics,” which argues that brain function is organized by multi-neuron interactions, not just pairwise ones. Recent work has shown that these local, higher-order topological signatures are highly effective for decoding tasks from brain data, suggesting that H2​ and higher features capture the nested, rule-based structure of complex cognition (Santoro et al., 2024).

H2: Objections and Counterarguments

No single framework is a panacea. The promises of TDA must be weighed against its current limitations.

H3: Computational Feasibility

A primary objection is computational cost. Building a Vietoris-Rips complex is computationally expensive: the number of simplices can grow exponentially in the worst case. While modern algorithms and sparse representations have made this more tractable, applying PH to raw, time-varying fMRI data (hundreds of thousands of voxels across thousands of time points) remains a significant challenge. However, studies have demonstrated the feasibility of TDA for event-related fMRI by applying it to correlation matrices or embedding the data in lower dimensions first, suggesting these hurdles are logistical, not fundamental (Feasibility of…, 2019).

H3: Distinguishing Structure from Function

A more philosophical objection is interpretive. Does the discovery of a persistent topological feature imply a cognitive function? This is the “structure vs. function” debate. TDA is, by design, blind to the underlying semantics of the data. A critical review by Caputi et al. (2021) notes that TDA is a powerful descriptive tool, but linking a Betti number to a specific behavior requires a strong a priori generative model. Without this, a “hole” is just a hole. Our thesis rests on the assumption that for neural manifolds, persistent structure is function.

H3: Alternative Models (e.g., Dynamical Systems Theory)

Finally, how does TDA compare to more established methods like dynamical systems theory or traditional graph metrics? TDA’s strength is its coordinate-free, invariant description. However, it can be blind to the metric and geometric information (e.g., curvature) that dynamical systems models use. Research directly comparing PH to traditional metrics for segmenting brain states has shown that PH often captures distinct information, particularly about recurrent and metastable states that are missed by simpler variance-based methods (Billings et al., 2021). The ideal approach may be an integrated one, using PH to identify the scaffolding and other methods to analyze the dynamics on that scaffold (Nguyen et al., 2024).

H2: Synthesis: A New Taxonomy for Cognitive Science

H3: From “How Much” to “What Shape”

The evidence and theory converge on a new synthetic framework for cognitive taxonomy. We propose to move classification away from “how much” (magnitude, variance, speed) and toward “what shape” (components, loops, voids). A task is defined by the topological “scaffold” it induces in the neural state space.

This framework elegantly handles variability. Two subjects may perform a working memory task with different speeds and neural activation patterns (high metric variance), but if they both must implement a recurrent dynamic to hold the information, their task-state spaces will both exhibit a persistent H1​ loop (low topological variance). TDA provides the tool to ignore the former and isolate the latter. This has been shown in fMRI studies where TDA captures task-driven structural changes that are consistent across subjects, even when mean activation differs (Catanzaro et al., 2023).
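
The coordinate-freedom claimed here is easy to verify: a Vietoris-Rips filtration depends only on pairwise distances, so any rigid transformation of the state space (a stand-in for subject-specific embeddings) leaves the persistence diagram untouched. A quick check, with illustrative names:

```python
import math
import random

def pairwise(points):
    """Sorted pairwise distances -- the only input a Vietoris-Rips
    filtration (and hence its persistence diagram) ever sees."""
    return sorted(math.dist(p, q)
                  for i, p in enumerate(points)
                  for q in points[i + 1:])

def rigid(points, theta, dx, dy):
    """Rotate a 2D point cloud by theta and translate it by (dx, dy)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + dx, s * x + c * y + dy) for x, y in points]

# A "subject-specific embedding" of the same underlying state space:
random.seed(0)
cloud = [(random.random(), random.random()) for _ in range(20)]
moved = rigid(cloud, theta=1.2, dx=5.0, dy=-3.0)

# Same distances (up to float error) => same filtration => same diagram.
print(all(abs(a - b) < 1e-9
          for a, b in zip(pairwise(cloud), pairwise(moved))))  # True
```

Metric rescalings, by contrast, do change birth and death values, which is why the thesis emphasizes the relative structure of the diagram (which bars are long) rather than absolute scales.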

H3: “Loops, not lengths, define function.”

This brings us back to our central interpretation. The function of a cognitive process is defined by its information flow. A process built on segregated, modular sub-networks will show high H0 persistence. A process that maintains information over time must implement a recurrent loop, yielding a persistent H1 feature. A process that navigates a complex, nested set of rules will trace a manifold with persistent H2 features.

This is the meaning of “cognition ≈ topology of information flow.” The task is the morphology of its persistence diagram.

H2: Implications for Geometrical Neuroscience

H3: Designing Topologically-Aware Experiments

This framework is not just descriptive; it is predictive. It suggests a new generation of experiments for geometrical neuroscience. Instead of designing tasks that merely modulate cognitive load (a metric concept), we can design tasks that explicitly modulate cognitive topology.

For example, one could design a task with a clear “recurrent” condition (e.g., “repeat the last 5 items”) and a “linear” condition (e.g., “report the last item”). Our framework predicts a persistent H1​ signature should appear in the former but not the latter. We can also design tasks with nested rules to specifically probe for H2​ features. This moves TDA from a post-hoc analysis tool to a predictive, hypothesis-driven scientific instrument (Chung et al., 2023).

H3: New Architectures for Artificial Cognition

This work has direct implications for artificial intelligence. Many successes in deep learning are based on architectures that have a specific topological structure (e.g., the recurrence in an RNN, the local connectivity of a CNN). By formalizing the topological signatures of biological cognition, we can provide a blueprint for “topologically-aware” AI. An AI designed to solve a task with a known H1​ signature might be built with explicit recurrent loops, while an AI for a modular task (H0​) might be built with a “mixture of experts” architecture.

H2: Conclusion

H3: Summary of the Topological Thesis

We have argued that the classification of cognitive tasks, a foundational problem in neuroscience, can be solved by moving from metric geometry to algebraic topology. By representing tasks as high-dimensional point clouds of neural activity, we can use Persistent Homology to compute stable, invariant “homological signatures” (persistence diagrams).

These signatures, which quantify the persistence of H0​ (modules), H1​ (loops), and H2​ (voids), constitute a robust, fundamental taxonomy of cognition. This framework re-defines a task not by its performance metrics, but by the “shape” of its information flow.

H3: Future Directions

The field is moving rapidly to address the limitations of this framework. Static PH is giving way to dynamic topological data analysis, which analyzes how persistence diagrams themselves evolve over time, providing a richer, time-varying signature (Rieck et al., 2023). New work is integrating PH with time-frequency analysis (Topological Time-Frequency…, 2025) and creating multimodal approaches that combine PET and MRI data (Lee et al., 2014).

Perhaps most excitingly, topological signatures are being used to map individual differences in brain dynamics to cognitive ability, moving beyond task classification to individual prediction (Ryu et al., 2023; Wang et al., 2025). The “holes” in our minds, once a metaphor, are becoming a measurable, predictive, and fundamental feature of cognition.


H2: End Matter

H3: Assumptions

Cognition can be embedded in a continuous state manifold; neural dynamics approximate smooth flows; persistent homology captures invariant task structure.

H3: Limits

Ignores metric curvature effects; temporal coupling may blur filtrations; homology classes may conflate different dynamic regimes.

H3: Testable Predictions

Tasks with recurrent subgoals yield high H1​ persistence. Hierarchical tasks show nested H2​. Learning corresponds to stabilization (birth–death convergence) in the persistence diagram.


H2: References

  • Argiris, P. A., Lipman, L., Ez-zizi, A., & Krishnan, A. (2023). Simple topological task-based functional connectivity features predict longitudinal behavioral change. ScienceDirect.
  • Bhattacharya, S., et al. (2025). Persistent Homology for MCI Classification: A Comparative Analysis between Graph and Vietoris-Rips Filtrations. Frontiers.
  • Billings, J., et al. (2021). Simplicial and Topological Descriptions of Human Brain Dynamics. Brain Dynamics Lab.
  • Caputi, C. G., et al. (2021). Promises and Pitfalls of Topological Data Analysis for Brain Networks. ScienceDirect.
  • Catanzaro, J. P., et al. (2023). Topological Data Analysis Captures Task-Driven fMRI Structural Changes. PMC.
  • Chung, M. K., et al. (2023). Unified Topological Inference for Brain Networks in Temporal… ScienceDirect.
  • Duong-Tran, P. A., et al. (2024). Homological Landscape of Human Brain Functional Sub-Networks. MDPI.
  • Feasibility of Topological Data Analysis for Event-related fMRI. (2019). MIT Press Direct.
  • Lee, H., et al. (2014). Integrated Multimodal Network Approach to PET and MRI Based on Multidimensional Persistent Homology. arXiv.
  • Lee, H., et al. (2018). Harmonic Holes as the Sub-modules of Brain Network and Network Dissimilarity. arXiv.
  • Nguyen, V. Q., et al. (2024). Volume-Optimal Persistence Homological Scaffolds of Hemodynamic Networks Covary with MEG Theta-Alpha Aperiodic Dynamics. arXiv.
  • Rieck, B., et al. (2020). Uncovering the Topology of Time-Varying fMRI Data Using Persistent Homology. NeurIPS Proceedings.
  • Rieck, B., et al. (2023). Dynamic Topological Data Analysis of Functional Human Brain Networks. arXiv.
  • Ryu, K., et al. (2023). Persistent Homology-based Functional Connectivity and its Association with Cognitive Ability: Life-span Study. PMC/ResearchGate.
  • Santoro, A., et al. (2024). Higher-order Connectomics of Human Brain Function Reveals Local Topological Signatures of Task Decoding. Nature.
  • Topological Time-Frequency Analysis of Functional Brain Networks. (2025). arXiv.
  • Wang, B., et al. (2025). Topological Signatures of Brain Dynamics: Persistent Homology Reveals Individuality and Brain–Behavior Relationships. Frontiers.
