The early days of January 2026 have presented the mathematical community with a paradox that demands a rigorous epistemological accounting. On one hand, the technocratic press has breathlessly celebrated the resolution of Erdős Problem #728 by an artificial intelligence, framing it as the arrival of synthetic reasoning. On the other, we observe a conspicuous, almost deafening silence from the frontiers of nonlinear partial differential equations (PDE) and hard analysis, coupled with the machine’s documented failure to crack the “bespoke” combinatorics of the most recent International Mathematical Olympiad (IMO). This divergence is not a function of computational difficulty, nor is it a random artifact of training data. Rather, it exposes a fundamental topological distinction in the landscape of mathematical truth, a distinction between the automation of retrieval and the necessity of architecture.
To understand why the machine conquered the Erdős conjecture while stalling on the Navier-Stokes regularity or the subtle tiling invariants of the IMO, one must anatomize the solution to Problem #728. The conjecture, which concerns the integrability of factorial ratios subject to logarithmic constraints, was not solved through the generation of a novel mechanism. Instead, the solution relied on a high-dimensional act of lemma arbitrage. The machine effectively identified that the problem was a latent corollary of the probabilistic machinery developed by Carl Pomerance in the mid-1990s. The “jump” in logic, which appeared alien to human observers, specifically the move from fractional part distributions to arithmetic divisibility without intermediate carry counting, was merely a vector collapse. The machine recognized that the distribution of fractional parts described in the problem fell within the “safe harbor” established by Pomerance’s prior theorems on the prime factors of binomial coefficients.
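The text does not reproduce the problem's exact statement, so the following is only a sketch of the classical bridge the passage alludes to, not the published proof. By Legendre's formula, the p-adic valuation of the central binomial coefficient is a sum of terms ⌊2n/pᵏ⌋ − 2⌊n/pᵏ⌋, and each term equals 1 precisely when the fractional part of n/pᵏ is at least 1/2; this is the sense in which "fractional part distributions" and "arithmetic divisibility" are the same object. A minimal sketch (all function names are mine):

```python
from math import comb

def vp_factorial(n, p):
    """p-adic valuation of n! via Legendre's formula: sum of floor(n/p^k)."""
    s, q = 0, p
    while q <= n:
        s += n // q
        q *= p
    return s

def vp_central_binomial(n, p):
    """v_p(C(2n, n)) = v_p((2n)!) - 2 * v_p(n!)."""
    return vp_factorial(2 * n, p) - 2 * vp_factorial(n, p)

def vp_via_fractional_parts(n, p):
    """Same valuation, read off the fractional parts: the k-th Legendre
    term floor(2n/p^k) - 2*floor(n/p^k) is 1 iff {n/p^k} >= 1/2."""
    s, q = 0, p
    while q <= 2 * n:
        if 2 * (n % q) >= q:  # integer test for {n/p^k} >= 1/2
            s += 1
        q *= p
    return s

def vp_direct(n, p):
    """Ground truth: factor p out of C(2n, n) directly."""
    m, s = comb(2 * n, n), 0
    while m % p == 0:
        m //= p
        s += 1
    return s
```

All three computations agree, which is exactly why a retrieval system can translate a divisibility condition into a statement about equidistribution of fractional parts without any intermediate carry counting.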
In this light, the Erdős solution serves as a harsh critique of the problem’s intrinsic depth. A cynical but mathematically precise heuristic suggests that a competent nonlinear analyst, perhaps even one suffering from neurological trauma sufficient to erase their knowledge of refined arithmetic, could have solved this problem by viewing it through the lens of phase space concentration. To an analyst, the valuation is simply a mass measure, and the divisibility condition is a decay estimate. The problem’s “logarithmic slack” renders it a “soft analysis” exercise masquerading as a rigid arithmetic puzzle. The artificial intelligence did not act as a mathematician in the generative sense; it acted as a Super-Librarian, performing a contextual retrieval operation that bridged an “orphan” problem to its forgotten parent theorem. It demonstrated not the creation of a new concept, but the exploitation of an inefficiency in the human citation graph.
Contrast this with the silence in nonlinear PDE. There are no press releases announcing AI proofs in the Annals of Mathematics regarding global well-posedness for supercritical wave equations because the strategy of library retrieval fails in the face of “Bespoke Craft.” In hard analysis, there is no universal “Theorem X” waiting in the library to be applied. The difficulty of a dispersive equation lies not in bridging standard lemmas, but in the ex nihilo construction of a functional invariant specific to that equation’s geometry and nonlinearity. Consider the “Interaction Morawetz” estimate, a pivotal breakthrough in the study of the nonlinear Schrödinger equation. The proof required the invention of a bilinear weight function designed to force the decay of a specific error term arising from a prior “almost-conservation” law. This weight could not be interpolated from existing literature because it had never existed. Its construction required a physical intuition about particle scattering translated into a precise mathematical form, where a deviation in a single exponent would destroy the coercivity of the functional.
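For concreteness, here is an illustrative form of the estimate rather than a quotation of the cited paper: for the defocusing cubic NLS on $\mathbb{R}^3$, pairing a virial weight comparable to $|x-y|$ against two copies of the mass density yields, roughly, the a priori spacetime bound

```latex
\int_{\mathbb{R}} \int_{\mathbb{R}^3} |u(t,x)|^4 \, dx \, dt
  \;\lesssim\;
  \|u\|_{L^\infty_t L^2_x}^{2} \,
  \|u\|_{L^\infty_t \dot{H}^{1/2}_x}^{2}.
```

The coercivity of the left-hand side, the fact that it controls a positive power of the solution everywhere in spacetime, is exactly what a wrong exponent in the weight would destroy.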
This is the “Bespoke Moat.” The search space for such invariants is not the smooth, statistical manifold of number theory, but an infinite-dimensional, jagged landscape where “almost correct” is structurally identical to “false.” The failure of the machine on the “Problem 6” combinatorics task operates on the same principle. While the machine could solve geometry problems by searching the stable space of synthetic axioms, it failed the tiling problem because the solution required the invention of a bespoke invariant, a novel weighting scheme or topological path logic, that had no precedent in the training corpus. When specific combinatorics problems require the construction of a Lyapunov function in disguise, they cease to be “discrete math” susceptible to tree search and become “discrete analysis,” inheriting the same resistance to automation as the Navier-Stokes equations.
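The Olympiad problem itself is not reproduced here, so as a stand-in for what a "bespoke invariant" looks like at its very simplest, consider the classical mutilated-chessboard argument: weight each square ±1 by color, note that every 2×1 domino covers total weight zero, and observe that the board minus two opposite corners has nonzero net weight, so no tiling exists and no search is needed. A minimal sketch (the function name is mine):

```python
def board_weight(removed):
    """Net coloring weight of an 8x8 board with the given squares removed.

    Squares are weighted +1 if (row + col) is even, -1 otherwise, so any
    2x1 domino covers one square of each sign and contributes weight 0:
    a domino tiling can exist only if the net weight is 0.
    """
    total = 0
    for r in range(8):
        for c in range(8):
            if (r, c) not in removed:
                total += 1 if (r + c) % 2 == 0 else -1
    return total

# Opposite corners (0, 0) and (7, 7) both have even row + col, so removing
# them leaves net weight -2: no tiling, with no enumeration of cases.
```

The invariant is trivial once stated, but nothing in the statement of the tiling problem points to it; harder problems of this type demand weightings with no such off-the-shelf precedent, which is the passage's point.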
The algorithmic era is thus functioning as a valuation mechanism, an acid test that separates “Inflationary Mathematics” from “Structural Mathematics.” For decades, the incentive structures of the discipline have tolerated a certain species of epistemic rent-seeking: the production of literature that consists of applying powerful, existing tools to slightly permuted objects, the Probabilistic Method applied to Graph A, then to Graph B. The AI solution to Erdős #728 proves that this “grift” of corollary mining is now a zero-marginal-cost activity. If a neural network can instantly sweep the parameter space to connect existing theorems to open conjectures, the human labor dedicated to such tasks is rendered obsolete.
Paradoxically, this automation of the trivial serves as a powerful vindication for the “messy” fields that were previously sidelined for their lack of elegance. The silence of the machine in the face of the eighty-page estimates typical of geometric analysis proves that this “sludge” was not a failure of exposition, but a reflection of incompressible complexity. The necessity of hand-crafting equation-specific weights, of weaving together mutually dependent estimates into a fragile “house of cards,” is a feature of deep architecture, not a bug. The machine is stripping away the credit formerly given to the cataloger of corollaries and concentrating it entirely on the architect of concepts, the Pomerances, the Zeilbergers, the Klainermans, who forge the tools rather than merely using them. The artificial intelligence is not destroying mathematics; it is merely automating the mediocrity out of it, leaving the fortress of bespoke invention more secure than ever.
References
Colliander, James, Markus Keel, Gigliola Staffilani, Hideo Takaoka, and Terence Tao. 2008. “Global well-posedness and scattering for the energy-critical nonlinear Schrödinger equation in ℝ³.” Annals of Mathematics 167 (3): 767–865.
Klainerman, Sergiu. 2000. “PDE as a Unified Subject.” Visions in Mathematics, GAFA 2000 Special Volume, Part I: 279–315.
Pomerance, Carl. 1996. “On the set of prime factors of .” Proceedings of the American Mathematical Society 124 (1): 87–102.
Tao, Terence. 2026. “Machine Assisted Proofs and the Future of Number Theory.” What’s New (blog). January 8, 2026.
Tensorgami. 2025. “Bespoke Craft vs. Framework Power: Why Nonlinear PDE Endures.” Tensorgami (blog). September 28, 2025.
Zeilberger, Doron. 1993. “Closed Form (Pun Intended!).” Contemporary Mathematics 143: 579–607.