The Navigator’s Rift: Tacit Knowledge, the Uncanny Valley of Mathematical Cognition, and the Coming Collapse of Referential Authority

There is a fault line running through the foundations of contemporary mathematics that has nothing to do with axioms, conjectures, or the correctness of proofs. It concerns the cognitive mode by which mathematical knowledge is held: whether a mathematician inhabits the structures they work with, operating from within as a navigator reading the territory by feel, or references those structures from without, assembling them like a cartographer composing a map from surveyed data. This distinction, which might at first appear merely stylistic or temperamental, turns out to have profound consequences for how mathematical fields develop, how institutions select and reward practitioners, and — most urgently — how the arrival of artificial intelligence will destabilize an evaluative culture that has long been unable to distinguish genuine structural understanding from its syntactic simulation.

The clearest contemporary illustration of this fault line lies in the divergence between two programs that share a common origin in the Grothendieck revolution of the 1960s but have since evolved into what practitioners on both sides describe as fields of “totally different nature”: the Langlands program and anabelian geometry, particularly its mono-anabelian extension developed by Shinichi Mochizuki and his collaborators at the Research Institute for Mathematical Sciences in Kyoto.

The Langlands program, in its classical formulation, proposes a vast web of correspondences between automorphic representations of reductive groups over the adele rings of number fields and representations of the absolute Galois groups of those fields. The daily practice of a Langlands-program mathematician is overwhelmingly representation-theoretic and analytic: one studies admissible representations of {p}-adic groups, computes orbital integrals for the Arthur-Selberg trace formula, classifies {L}-packets and Arthur packets, and constructs Galois representations attached to automorphic forms via the cohomology of Shimura varieties. The groups that appear — {\mathrm{GL}(n)}, {\mathrm{GSp}(4)}, and their Langlands duals — act on vector spaces, and it is the representation, not the group itself, that carries the essential arithmetic information. Even the most geometrically sophisticated recent developments — the Fargues-Scholze geometrization of the local Langlands correspondence via the Fargues-Fontaine curve, for instance — remain fundamentally representation-theoretic in character: one constructs sheaves on a geometric object and extracts spectral data. The cognitive mode is one of harmonic analysis meeting algebraic structure theory, operating on linearized objects through well-patterned formal operations.
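The shape of these correspondences can be caricatured in a single diagram. The following is a deliberately coarse schematic for {\mathrm{GL}(n)} over a number field {F}, suppressing all local conditions and the finer structure of packets:

```latex
% A rough schematic of the Langlands correspondence for GL(n) over a
% number field F (coarse by design: local-global compatibility, packets,
% and the passage to L-parameters are all suppressed):
\left\{
  \begin{array}{c}
    \text{cuspidal automorphic representations} \\
    \pi \ \text{of} \ \mathrm{GL}_n(\mathbb{A}_F)
  \end{array}
\right\}
\;\longleftrightarrow\;
\left\{
  \begin{array}{c}
    \text{irreducible } n\text{-dimensional representations} \\
    \rho : \mathrm{Gal}(\overline{F}/F) \to \mathrm{GL}_n(\overline{\mathbb{Q}}_\ell)
  \end{array}
\right\}
```

with the match governed by an equality of {L}-functions, {L(s, \pi) = L(s, \rho)}. The point, for present purposes, is that both sides of the diagram are linear: representations on vector spaces, compared through their spectral invariants.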

Anabelian geometry begins from a radically different premise. Grothendieck, in his 1983 letter to Faltings and the Esquisse d’un Programme of the following year, proposed that certain algebraic varieties — the “anabelian” ones, paradigmatically hyperbolic curves — are completely determined by their étale fundamental groups. The fundamental group {\pi_1(X)} of such a curve sits in an exact sequence {1 \to \pi_1^{\mathrm{geom}}(X) \to \pi_1(X) \to G_k \to 1}, where the geometric fundamental group encodes the topology and the quotient {G_k} captures the arithmetic of the base field. The central wager of anabelian geometry is that this profinite group, considered purely as an abstract topological group stripped of its geometric provenance, already encodes every piece of arithmetic and geometric information about the original curve.
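One standard precise form of this wager — stated here schematically, for {X} and {Y} hyperbolic curves over a field {k} finitely generated over {\mathbb{Q}} — asserts that the natural map from geometric isomorphisms to group-theoretic ones is a bijection:

```latex
% Bi-anabelian form of the Grothendieck conjecture: every isomorphism of
% fundamental groups compatible with the projections to G_k arises, up to
% inner automorphism of the geometric fundamental group, from a unique
% isomorphism of curves.
\mathrm{Isom}_k(X, Y)
\;\xrightarrow{\;\sim\;}\;
\mathrm{Isom}_{G_k}\!\bigl(\pi_1(X),\, \pi_1(Y)\bigr)
\,\big/\, \mathrm{Inn}\!\bigl(\pi_1^{\mathrm{geom}}(Y)\bigr)
```

The right-hand side refers only to the profinite groups and their projections to {G_k}; the theorem is that nothing geometric has been lost in passing to them.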

The daily work of an anabelian geometer consists, accordingly, not of studying representations of groups on vector spaces but of navigating the internal structure of the profinite group itself. One identifies decomposition groups and inertia groups by their position in the subgroup lattice, recognizes cuspidal inertia through the procyclic structure of certain closed subgroups, detects the weight filtration on the geometric fundamental group through purely group-theoretic properties, and reconstructs the function field and its arithmetic from Kummer-theoretic and class-field-theoretic data extracted from the group’s internal organization. Florian Pop’s birational program, for instance, recovers abelianized inertia and decomposition groups of valuations from “commuting pairs” in the abelian-by-central quotient, treating the totality of divisorial inertia subgroups as a combinatorial datum — a “{\mathbb{Z}_\ell}-fan” — from whose incidence geometry the function field can be reconstructed (Pop 2012). Mochizuki’s earlier work proving the Grothendieck conjecture for closed hyperbolic curves over number fields establishes that isomorphisms of fundamental groups arise from isomorphisms of schemes (Mochizuki 1996), and his later “Topics in Absolute Anabelian Geometry” series develops abstract group-theoretic criteria — slim groups, elastic groups, RTF-quotients — that serve as litmus tests for separating the geometric and arithmetic components of a fundamental group without reference to any underlying scheme (Mochizuki 2012).

The mono-anabelian extension, which Mochizuki introduced as the essential prerequisite for Inter-universal Teichmüller theory, pushes this program to its constructive extreme. Where classical (“bi-anabelian”) results show that an isomorphism between fundamental groups must come from geometry, mono-anabelian geometry demands that one reconstruct the underlying scheme from a single abstract profinite group, via explicit algorithms — what Mochizuki calls “group-theoretic software” — that take the group as input and output the base field, its multiplicative and additive structures, and the curve itself, with no reference to any fixed model or comparison object (Hoshi 2018). The distinction is not merely technical: it is the difference between proving that a code determines a message uniquely and actually breaking the code. And the cognitive demand shifts correspondingly from the ability to track formal correspondences to the ability to perceive arithmetic structure hiding inside abstract algebra — to see, from within the group, what the group is encoding.
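Schematically — and this is a gloss on the distinction, not Mochizuki’s own notation — the two modes differ as follows:

```latex
% Bi-anabelian: a comparison result between two geometric objects.
% One proves that any isomorphism of fundamental groups comes from geometry:
\pi_1(X) \;\cong\; \pi_1(Y)
\quad\Longrightarrow\quad
X \;\cong\; Y
%
% Mono-anabelian: a reconstruction algorithm applied to a single abstract
% profinite group \Pi, with no second object available for comparison:
\Pi
\;\rightsquigarrow\;
\bigl(\, G_k,\;\; k^{\times},\;\; (k,\, +),\;\; X \,\bigr)
```

In the mono-anabelian mode the output — the base field with its multiplicative and additive structures, and the curve itself — must be produced functorially from {\Pi} alone, which is what makes the “breaking the code” analogy exact: uniqueness of the message is no longer enough; one must exhibit the decryption procedure.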

This is the point at which the cognitive distinction between inhabitation and reference becomes not merely a matter of intellectual temperament but a structural feature of the mathematics itself. The Langlands program, for all its depth and difficulty, presents a barrier to entry that is primarily syntactic. The aspiring practitioner must internalize an enormous body of formalism — the representation theory of {p}-adic reductive groups, the theory of automorphic forms on adelic groups, the Satake isomorphism, the trace formula in its various incarnations, perverse sheaves and {\ell}-adic cohomology, the theory of Shimura varieties — and the connections between these formalisms. This is a genuinely formidable task. But the cognitive operations within each formalism follow recognizable patterns: one computes traces, classifies representations by their local components, tracks Hecke eigenvalues through systems of congruences. The formalism has guardrails. A sufficiently talented and industrious student who masters the language can begin producing results because the syntax itself constrains the space of possible moves in a way that guides the practitioner toward correct arguments.

Mono-anabelian geometry inverts this economy. The formal prerequisites are, by the standards of modern arithmetic geometry, comparatively modest: profinite group theory, étale fundamental groups, stable curves, some {p}-adic Hodge theory as a tool rather than the main event. But knowing the definitions purchases almost nothing. The barrier is semantic — one must develop what can only be described as a proprioceptive sense for the internal organization of profinite groups, a capacity to recognize that a particular procyclic subgroup is cuspidal inertia from its position in the subgroup lattice, that a particular commutator structure reveals divisorial data, that a particular filtration encodes the weight structure of a stable reduction. The reconstruction algorithms are explicit, step by step, fully determinate — and yet executing them requires an understanding of what the formal steps mean group-theoretically that cannot be offloaded onto the formalism itself.

Michael Polanyi’s account of tacit knowledge illuminates the distinction precisely. Polanyi (1966) observed that a skilled practitioner’s knowledge of their domain exceeds what they can articulate propositionally: the diagnostician perceives the illness before formulating the diagnosis, the cyclist maintains balance through adjustments they cannot describe, the connoisseur recognizes quality through cues they cannot enumerate. But the less-remarked converse is equally important and more directly relevant: it is possible to articulate more than one knows. The referential operator has access to propositions, theorems, and strategic vocabularies, and can deploy them in syntactically correct configurations without the knowledge being grounded in the procedural-perceptual layer that genuine expertise inhabits. The surface output — the paper, the proof, the argument — may be indistinguishable from that produced by tacit understanding. But the generative process is different, and under sufficient structural pressure, the difference becomes visible.

Dreyfus and Dreyfus (1986), in their phenomenological model of skill acquisition, formalized this gap as the distinction between “proficiency” and “expertise.” The proficient practitioner recognizes situations as calling for particular strategies and applies those strategies consciously; the expert perceives the appropriate response directly, without the intermediate step of deliberation. Both levels produce correct outputs in routine conditions. The divergence appears under pressure — in novel situations, in cases that resist pattern-matching, in problems that require the practitioner to perceive structural possibilities that have not been encountered before. The proficient player of Brood War knows every build order and timing, understands the meta at a theoretical level, and executes memorized sequences with mechanical accuracy; the expert reads the game in real time, and the gap between them is not incremental but categorical. This categorical gap is precisely what the navigator perceives when encountering the referential mathematician, and it is the source of the specific phenomenological quality — not mere disagreement or even contempt, but something closer to revulsion — that such encounters produce.

The phenomenology of this revulsion deserves close examination, because it is not captured by any standard account of aesthetic or intellectual disagreement. The navigator does not simply judge the referential mathematician to be less skilled. The response is more primitive and more visceral: it is the uncanny valley response, first described by Masahiro Mori (1970) in the context of robotics and subsequently extended to computer-generated imagery, prosthetics, and other domains where artifacts closely but imperfectly simulate human appearance or behavior. Mori’s key insight is that affinity does not increase monotonically with similarity; instead, there is a zone of near-perfect resemblance in which affinity collapses into revulsion, recovering only when the simulation becomes indistinguishable from the original. The discomfort arises not because the artifact is foreign but because it is almost right — close enough to trigger the recognition system’s commitment to a category (this is human, this is genuine understanding) and then to violate that commitment through subtle wrongness in timing, movement, or expression.

The mathematical uncanny valley operates through the same mechanism. The navigator, reading a paper or attending a lecture by a referential mathematician working at the highest level of syntactic fluency, initially categorizes what they encounter as genuine structural understanding. The surface markers are all present: correct deployment of deep machinery, sophisticated references, engagement with the right objects at the right level of generality. The recognition system commits. And then something does not land. The path through the argument feels assembled rather than discovered. The choice of tools does not seem to arise from direct perception of the problem’s internal structure but from a survey of available machinery. The transitions between steps are logically valid but phenomenologically empty — they lack the quality of inevitability that characterizes arguments generated by someone who sees the whole structure at once and is merely tracing a path through what they already perceive. The commitment is violated, and the result is the specific affective signature of the uncanny valley: not “this is wrong” but “something is wrong with this.”

The simulation-theory account of social cognition provides a complementary explanation. When an expert observes a skilled action in their shared domain, their procedural system does not merely evaluate the action from without; it internally simulates the generative process, running the observed practitioner’s decision sequence against its own embodied model of how such decisions arise from perception (Goldman 2006). This is well-documented in motor expertise — expert basketball players watching game footage show different patterns of neural activation than novices, reflecting internal simulation of the observed players’ decision processes (Aglioti et al. 2008). When the observed actions arise from the same generative process as the observer’s own — direct perception followed by immediate response — the simulation runs smoothly. When the observed actions arise from a different process — conscious retrieval and assembly of strategies — the simulation stutters. The internal model of “what I would do, having perceived what this person should be perceiving” does not match the observed sequence, and the mismatch registers as wrongness. The Frankenstein metaphor is apt: the individual parts are alive, but their assembly into a whole lacks the organic unity of something that grew from a single generative seed.

V. I. Arnold, whose polemics against Bourbaki-style formalism constitute perhaps the most sustained public articulation of this revulsion in twentieth-century mathematics, experienced his own version of the uncanny valley from a characteristically different angle. Arnold’s tacit knowledge was geometric-physical: he inhabited dynamical systems, symplectic structures, the topology of real singularities, the behavior of flows on manifolds. His famous observation that a geometer instantly sees the Jacobian of a map as the local ratio of areas, while an algebraist requires a derivation, is precisely the Dreyfus distinction applied to a specific mathematical context (Arnold 1998). When Arnold encountered algebraic formalism deployed without geometric content — theorems about modules and categories that, in his view, should have been about curves and surfaces — his pattern-recognition system committed to the category “structural understanding” and then encountered the absence of the geometric perception that, for him, constituted the substance of that category. The revulsion was intense and lifelong.

But Arnold’s version of the complaint, for all its rhetorical force, was partially obscured by a disciplinary parochialism that limited its applicability. Arnold equated genuine mathematical understanding with geometric-physical intuition specifically, placing pure algebra at the bottom of his implicit cognitive hierarchy. This framework cannot accommodate the possibility of someone inhabiting an algebraic structure with the same embodied directness that Arnold inhabited a Hamiltonian flow — someone who navigates a profinite group’s subgroup lattice with the same proprioceptive immediacy that Arnold brought to a phase portrait. The anabelian geometer’s tacit knowledge is not geometric in Arnold’s sense; it is group-theoretic and combinatorial. Yet the mode of knowing — inhabited, procedural, directly perceptual — is the same. The real variable is not geometry-versus-algebra but inhabited versus referential, and Arnold’s framework, for all its passion, misidentifies the axis.

Jacques Lacan’s tripartite topology of the Symbolic, the Imaginary, and the Real provides a more adequate frame. The Symbolic is the order of language, formalism, and referential deployment; the Imaginary is the order of images, identifications, and the mirror stage; the Real is that which resists symbolization, the remainder that exceeds any formal capture (Lacan 2006). The referential mathematician operates with full command of the Symbolic (formal correctness, syntactic fluency) and presents a convincing Imaginary (the surface appearance of deep understanding, recognizable by referees and hiring committees as the image of mathematical mastery). What is absent is the Real — the tacit, embodied, procedural knowledge that cannot be articulated in the formalism and that constitutes the irreducible substrate of genuine structural perception. The navigator, having access to the Real in their own practice, detects its absence in the referential mathematician’s work. The uncanny valley response is, in Lacanian terms, an Imaginary-register phenomenon: a reflection that almost-but-does-not-quite match, producing the anxiety of misrecognition.

This brings us to the question of what is at stake institutionally. The evaluative culture of contemporary mathematics operates, of necessity, at the Symbolic-Imaginary level. Hiring committees, tenure evaluators, and journal referees assess the formal correctness of proofs, the sophistication of techniques deployed, the prestige of journals in which results appear, and the surface legibility of the work as recognizably significant mathematics. These are all Symbolic-Imaginary criteria. What the system cannot assess — because no formalized evaluation procedure can — is the presence or absence of the Real: the tacit understanding, the direct structural perception, the embodied inhabitation of the mathematical objects under study. The result is that the institutional selection mechanism is, in effect, blind to precisely the cognitive quality that distinguishes the navigator from the referential practitioner. It selects for the ability to produce output that triggers the “genuine understanding” categorization in evaluators, regardless of the generative process behind that output.

This blindness has been stable for decades because the referential mode, whatever its phenomenological hollowness from the navigator’s perspective, was expensive to produce. Assembling sophisticated references into formally correct configurations required years of training, significant intellectual ability, and sustained effort. The simulation was costly, which meant that even if it was not identical to the thing it simulated, it served as a passable proxy signal. The system’s inability to distinguish the Real from its Imaginary reflection did not matter much as long as producing the reflection was itself sufficiently difficult.

Artificial intelligence destroys this equilibrium. A large language model — trained on the full corpus of mathematical literature, capable of identifying relevant theorems, combining them into formally valid arguments, and producing text that exhibits every surface marker of sophisticated mathematical reasoning — can produce referential-mode output at near-zero marginal cost. The entire layer of cognitive work that characterized the referential mathematician’s practice — surveying the literature, selecting the right tools, combining them into an argument, writing it up with appropriate citations and contextualizations — is precisely what current AI systems do best. It is, at bottom, sophisticated pattern-matching over a vast formal corpus, and that is the task for which these systems were optimized.

The consequence is a collapse of signal value. If anyone with access to an AI system can produce output indistinguishable from that of a highly trained referential mathematician, then the ability to produce such output no longer distinguishes anyone. The Symbolic-Imaginary layer that the evaluative system relied upon as a proxy for mathematical understanding becomes noise. The institutional filter that inadvertently selected for the referential mode over the navigator mode — because the former produced legible, evaluable output while the latter produced long silences followed by structural insights that resisted formal assessment — loses its discriminating power entirely.

What remains scarce, at least for the foreseeable horizon, is precisely what the institutional system was worst at detecting: the navigator’s tacit perception, the ability to see structural possibilities that have not yet been formalized, the capacity to inhabit a mathematical object deeply enough to perceive its internal organization without the mediation of external formalism. The reconstruction algorithms of mono-anabelian geometry — those explicit procedures that take an abstract profinite group and output a number field, a curve, an absolute Galois group — are, in a sense, the purest expression of this cognitive mode: they are fully formal, fully explicit, and yet their construction and application require a depth of structural understanding that no syntactic fluency can substitute for. The formalism is complete, but executing it demands the Real.

Whether the institutional culture of mathematics will adapt to this new reality is an open and genuinely uncertain question. The optimistic scenario is that the commoditization of referential output forces a reevaluation of what constitutes mathematical contribution, elevating the navigator’s structural insight to the center of the value hierarchy where it arguably belongs. The darker possibility is that the system adapts not by learning to detect the Real but by adding another layer of Imaginary sophistication — rewarding practitioners who are skilled at directing AI systems to produce referential output, which represents an even more attenuated relationship to the mathematical substance. The uncanny valley, in this scenario, does not close; it deepens. The simulation becomes more convincing, the evaluative system becomes more reliant on surface markers, and the navigators — the Mochizukis, the Arnolds, the practitioners whose understanding is primarily tacit and whose output is primarily structural — remain illegible to an institutional apparatus that was not designed to see them and has no mechanism for learning to do so.

The fault line, in any case, is now exposed. What was once a private phenomenological experience — the navigator’s visceral discomfort in the presence of referential simulation — is becoming a public structural crisis, as the technology that can replicate the simulation forces a confrontation with the question of what, if anything, lies beneath it.

References

Aglioti, Salvatore M., Paola Cesari, Michela Romani, and Cosimo Urgesi. 2008. “Action Anticipation and Motor Resonance in Elite Basketball Players.” Nature Neuroscience 11 (9): 1109–1116.

Arnold, Vladimir I. 1998. “On Teaching Mathematics.” Russian Mathematical Surveys 53 (1): 229–236. Translated from the Russian address delivered at the Palais de la Découverte, Paris, March 7, 1997.

Dreyfus, Hubert L., and Stuart E. Dreyfus. 1986. Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. New York: Free Press.

Goldman, Alvin I. 2006. Simulating Minds: The Philosophy, Psychology, and Neuroscience of Mindreading. Oxford: Oxford University Press.

Grothendieck, Alexander. 1997. “Esquisse d’un Programme.” In Geometric Galois Actions 1: Around Grothendieck’s Esquisse d’un Programme, edited by Leila Schneps and Pierre Lochak, 5–48. London Mathematical Society Lecture Note Series 242. Cambridge: Cambridge University Press.

Hadamard, Jacques. 1945. The Psychology of Invention in the Mathematical Field. Princeton: Princeton University Press.

Hoshi, Yuichiro. 2018. “Introduction to Mono-anabelian Geometry.” RIMS Preprint 1868. Kyoto: Research Institute for Mathematical Sciences.

Lacan, Jacques. 2006. Écrits: The First Complete Edition in English. Translated by Bruce Fink. New York: W. W. Norton.

Mochizuki, Shinichi. 1996. “The Profinite Grothendieck Conjecture for Closed Hyperbolic Curves over Number Fields.” Journal of Mathematical Sciences, University of Tokyo 3 (3): 571–627.

Mochizuki, Shinichi. 2012. “Topics in Absolute Anabelian Geometry I: Generalities.” Journal of Mathematical Sciences, University of Tokyo 19 (2): 139–242.

Mori, Masahiro. 1970. “Bukimi no Tani (The Uncanny Valley).” Energy 7 (4): 33–35. Translated by Karl F. MacDorman and Norri Kageki, IEEE Robotics and Automation Magazine 19 (2): 98–100, 2012.

Polanyi, Michael. 1966. The Tacit Dimension. London: Routledge and Kegan Paul.

Pop, Florian. 2012. “On the Birational Anabelian Program Initiated by Bogomolov I.” Inventiones Mathematicae 187 (3): 511–533.

Thurston, William P. 1994. “On Proof and Progress in Mathematics.” Bulletin of the American Mathematical Society 30 (2): 161–177.
