Learning Insights on Mathematics and Its Developmental History—Contemporary Complex Mathematics

2021-10-14 chapter

All history is contemporary history. When we review the history of mathematical development, our impressions inevitably originate in observations of, and reflections on, contemporary mathematics. Post-war mathematics has grown into an extremely vast and systematic science; no single person can understand all of it anymore, any more than one engineer can understand every subsystem of Europe's Large Hadron Collider, and novel, complex mathematical objects keep proliferating. Reviewing the history of mathematics, then, is a way to reflect on how to handle the new mathematical objects we encounter today: the many high-dimensional, nonlinear ones.

Mathematics is fundamentally a discipline that studies quantitative and spatial relationships, and its origins likely trace back to the invention of number. The ability to process quantities is not unique to humans; many mammals and birds can master the most basic numerical concepts. Humans, however, used symbolic notation to outsource the memory of large quantities to tools such as knotted cords. This was an important advance in mathematical history: humans began to treat some of their own cognitive abilities as tools (memory became symbols carved in stone). From this point, humanity embarked on the great journey of creating cognitive tools, and much of that journey is what we call the history of mathematics. Aristotle, for example, developed the logical tool of the syllogism, while Euclid, slightly later, developed the tool of "axioms plus proof," standardizing some of humanity's symbolic reasoning into definite operations on symbols (by analogy: if inventing numbers made numbers an object of study, inventing logical operations made reasoning itself an object of study, much as functions take numbers as their objects). Combined with Galileo's experimentally assisted deductive method, this laid the methodological foundation for modern science. And the path led ever further.
Cantor's set theory was already highly general, and the Bourbaki school's structuralism, together with the later rise of category theory, became more general still. But generalization has its limits. Gödel's two incompleteness theorems exposed the essential limitations of axiomatic systems built purely on logical deduction. Later, Chaitin introduced what is now called Chaitin (algorithmic) complexity, showing not only that a consistent axiomatic system has theorems that can be neither proved nor disproved (incompleteness), but that such theorems are extremely numerous. Under the influence of these thinkers, the generalist dream of a grand unification of mathematics fell into decline. Russell's paradox could still provoke a foundational crisis in the early twentieth century; Gödel's incompleteness results were a comparable climax, though no longer universally known; Cohen's forcing method sparked one more small climax in set theory, and then the subject receded from public view again. Can a fully general theory really be proposed that represents all mathematical content? In my personal view, incompleteness has already hinted at where all such efforts end. We can indeed enlarge a theory so that it proves every theorem of a previous, smaller theory, but doing so makes the theory bigger and generates new unprovable statements. This is probably a characteristic of the deductive method itself. Even the comparatively abstract category-theoretic work emerging in the twenty-first century draws part of its motivation from applications in frontier condensed-matter physics; abstract and general methods must ultimately embrace the real world. Looking back on this history of mathematical grand unificationism, we discover a kind of "pure mathematics" effort that is, at bottom, a monotheistic belief. Grand unified mathematics is like the God described by Christianity: lacking humanity, yet creator of the world and omnipotent. A mathematics that seeks to explain everything is also a kind of objective deity.
Pure mathematicians pursuing grand unified theory are rigorous and sincere believers, attempting to construct the miracles of the mathematical god out of symbols and to discover the ultimate mysteries of truth, goodness, beauty, and nature that this deity implies. One cannot call this belief mad; explorations in this direction have created enormous cultural wealth for humanity. It is just that the path has become increasingly difficult to sustain, and grand unificationism today has grown increasingly cautious. The most recent example I personally know of is the Langlands program, which connects number theory with representation theory and harmonic analysis, and even that dates to the 1960s. Unificationist mathematics has slowed its pace; the contemporary era is the graveyard of grand unificationism. Contemporary mathematics is nevertheless robust. Thousands of years of civilization and hundreds of years of modern mathematical history have given it an extremely rich accumulation of knowledge. In fact, grand unification is only one rather extreme viewpoint within mathematics; more common are fields that are relatively abstract but rooted in specific problems: group theory, topology, differential equations, mathematical statistics, operator theory and functional analysis, and so on. Yet contemporary mathematics also faces numerous difficulties. It confronts the trouble I hinted at in the beginning of this text: complex, nonlinear mathematical objects. If the problem facing grand unificationism is ever longer chains of logical deduction within abstract theory, then the various less abstract, problem-oriented fields face mathematical objects that are extremely large and complex (difficult to reduce). Why the word "large"? To call a problem large presupposes that it can be counted or measured, and that already places us in the context of computational mathematics.
The busy beaver function BB(n) measures the most work an n-state Turing machine can do before halting. For many mathematical propositions, one can design a machine that halts if and only if the proposition is false; knowing the corresponding busy beaver value would then decide the proposition by a bounded computation. A 744-state machine has reportedly been constructed whose halting is equivalent to the falsity of the Riemann hypothesis, so computing BB(744) would settle it. The computational load of such a process, however, is so terrifying that it could never be completed in a lifetime. I believe computationalism reveals the problem of "complexity." The predicament it implies is this: if we regard a proof as an accumulation of meaningful mathematical and logical symbols, then a computer can, in principle, exhaustively search for the logically sound and conclusive symbol strings that constitute proofs. The fact is that for many basic mathematical propositions it is extremely difficult to find such symbol strings at all, whether by clever human effort or by brute-force machine search. Moreover, given what Chaitin's complexity implies, that a theorem whose information entropy (complexity) exceeds that of the axiomatic system cannot be proved or disproved within that system, one might even suspect that some theorems admit no proof at all. Having come this far, I want to discuss some problems very close to real industry that are not so purely mathematical, such as the properties of solutions to the Navier–Stokes equations, related to turbulence in fluid mechanics, and the time-complexity questions of optimal planning involved in the P versus NP problem in operations research. The former can be viewed as the study of solution properties of dynamical systems in infinite-dimensional spaces; research on the latter has shown that every NP problem can be reduced to an NP-complete problem.
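To make the busy beaver idea concrete, here is a minimal sketch (assuming nothing beyond the standard definition): a tiny Turing machine simulator running the well-known 2-state, 2-symbol busy beaver champion, which halts after 6 steps with 4 ones on the tape. BB-style values come from running candidate machines like this one to completion, which is exactly why the computational load explodes for larger state counts.

```python
# Minimal Turing machine simulator illustrating the busy beaver idea.
# A machine is a dict {(state, symbol): (write, move, next_state)};
# move is +1 (right) or -1 (left), and "H" is the halt state.

def run(machine, max_steps=10_000):
    """Run a machine on a blank tape.
    Returns (steps, ones_written) if it halts within max_steps, else None."""
    tape = {}              # sparse tape; unwritten cells read as 0
    head, state = 0, "A"
    for step in range(1, max_steps + 1):
        write, move, state = machine[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        if state == "H":
            return step, sum(tape.values())
    return None            # did not halt within the step budget

# The 2-state, 2-symbol busy beaver champion:
BB2_CHAMPION = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "H"),
}

steps, ones = run(BB2_CHAMPION)
print(steps, ones)  # 6 4: halts after 6 steps with 4 ones
```

Deciding a proposition this way means enumerating every n-state machine and running each until it halts or exceeds BB(n) steps; the simulator is trivial, but the bound BB(n) grows faster than any computable function.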
A typical NP-complete problem is Boolean satisfiability (SAT): if one could solve SAT with a polynomial-time algorithm, one could solve every NP problem in polynomial time. SAT can in fact be restated as a logic circuit problem: given a large logic-gate circuit, can one find, in polynomial time, an input that produces a given output, when the space of possible inputs grows exponentially with the number of input bits? One of our two problems is an infinite-dimensional nonlinear dynamical system; the other is input-output inversion for very large circuits. Both involve searching for answers in tiny regions of vast problem spaces. Facing these large, complex mathematical objects, how will contemporary mathematics proceed? One's answer to this question about the future of mathematics reflects one's overall view of its history. I believe, first, that mathematics achieved its most brilliant magnificence in the modern era. The modern history of mathematics is a heroic history: geniuses fought to create the basic fields and the spiritual outlook of the mathematical world, reaching a peak in the first half of the twentieth century before gradually declining. That modern history essentially ended with the end of the Cold War, and the contemporary history of mathematics probably began in the 2020s. Omnipresent social media, commercialized academia, engineering-oriented hiring evaluations, and the academic journal system have thoroughly transformed the working and living environments of mathematical researchers. Complex systems and complexity will quietly and continuously steer the direction of mathematical research. The contemporary history of mathematics has quietly begun in these years. Mathematics, after all, is defined and written in countable, static symbols.
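The exponential search behind SAT's difficulty can be sketched in a few lines (a toy illustration, not a practical solver; the DIMACS-style clause encoding is just one common convention): a brute-force solver that tries all 2**n assignments of a CNF formula.

```python
from itertools import product

def brute_force_sat(n_vars, clauses):
    """Try all 2**n_vars assignments of a CNF formula.

    Clauses use DIMACS-style literals: k means variable k is true,
    -k means variable k is false. Returns a satisfying assignment
    as a dict {var: bool}, or None if the formula is unsatisfiable."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        # A clause is satisfied if any of its literals is true;
        # the formula is satisfied if every clause is.
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None  # exhausted all 2**n_vars candidates

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
formula = [[1, -2], [2, 3], [-1, -3]]
model = brute_force_sat(3, formula)
print(model)  # {1: False, 2: False, 3: True}
```

With n variables the loop inspects up to 2**n assignments; this is precisely the exponential search space that a hypothetical polynomial-time SAT algorithm would have to shortcut, which is why SAT sits at the center of the P versus NP problem.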
I do not believe that symbolic, logical methods can defeat complex mathematical objects that flirt with uncountability, infinite dimensions, and phase transitions. Perhaps mathematical methods that construct mathematical objects without symbols will someday be developed? For now, that remains unimaginable. Having written this far, I have not introduced the complex mathematical objects I have been explicitly and implicitly circling, and in fact I do not intend to. Such an introduction would merely borrow terms from certain mathematical fields to give a characterization that would still be unclear; after all, if it were clear, it would no longer constitute a problem. Answers often deceive and obscure, while problems can inspire reflection. Contemporary mathematics is extremely vast, and the important problems studied in its various fields are vast as well. All this vastness necessarily causes bewilderment, and then confusion. Everyone concerned with the future of mathematics who finds themselves in this confusion, myself included: are we not all confused and chaotic at heart? Then perhaps the most likely answer to how contemporary mathematics will proceed in the future is: "I am in complete disarray; I don't know."