On Infinite-Order Robustness and Subject Transferability: A Re-discussion of Ideas, A Brief Program for Multi-Agent Biomimetics, and the Effectiveness of Medicine

2021-06-29 chapter

I recently reconstructed a concept: robustness (resilience), a term originally describing a system's ability to recover normal performance when receiving unexpected inputs. "Unexpected" actually points to what lies "beyond the horizon" of the model designer's vision. The model designer anticipates some possible exceptional situations and designs specific behaviors for the model to handle these unexpected inputs. However, when this design is completed, the model exhibits "so-called resistance" to inputs that would otherwise cause "disorder" - all of this is the model designer's own manipulation, and the "unexpected" has always existed. The model designer defines an acceptable input set A, and an emergency contingency set B for exceptional situations, but there is also set C that has never been conceived of and cannot even be defined or perceived - this is the limitation of the horizon itself. The model has no horizon of its own but exists entirely through borrowing the model designer's horizon. This is "non-transferability," which I also call "medium-independence," just like a symbolic model that is merely assembled from symbols - it doesn't care whether particles, cells, or humans serve as its object set, nor does it possess the subjectivity of any of the above existences.
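
A minimal sketch of the three sets, assuming a toy model (the function and the input types below are illustrative, not from any real system): set A is what the model accepts, set B is what the designer anticipated as exceptional, and set C is everything the designer never conceived of.

```python
# A minimal sketch of the designer's horizon (illustrative only):
# set A is what the model accepts, set B is what it anticipates as exceptional,
# and set C - inputs the designer never conceived of - fails however the medium dictates.
def model(x):
    if isinstance(x, int) and x >= 0:        # set A: acceptable inputs
        return x * 2
    if isinstance(x, int):                   # set B: anticipated exceptions, handled by design
        raise ValueError("negative input rejected by contingency rule")
    # Set C is not represented anywhere: a string, a list, or None arrives
    # from "beyond the horizon" and behaves in whatever way the medium allows.
    return x * 2                             # e.g. "ab" -> "abab", None -> TypeError

print(model(3))          # A: 6
try:
    model(-1)            # B: designed rejection
except ValueError as e:
    print(e)
print(model("ab"))       # C: looks "robust" only by accident of the medium
```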

If we consider robustness as a dynamic process, then each time the model designer "perceives" the "current non-robust state" against their "robust expectations," the designer "monotonically" injects more of their own "subjectivity" into the model. We might view designers on different worldlines as different layered models. For example, suppose there is a symbol set A (corresponding to the objects manipulated by different theories) and the language B it generates (the grammar characteristic of those theories), which describes some models Ci (specific model cases); all of these are used to describe designer D(i), while all of them are constructed by designer D(i-1). These different theoretical worlds are stacked into a sequence of layers (with a beginning but no end), and the driving force for moving between worlds is the completeness expansion that improves robustness - that is, the expansion that must occur whenever the models of a world encounter "illegal input / incomprehensible input." Connecting the various paths of completeness expansion yields a network.

A simple insight is that any node on this network has at least one "source" and at least one "target." Another insight, not necessarily correct, is that this network can represent any formally unambiguous (interpretable & unambiguous), finitely long (constructible in reality) formal model. That is, for any model one can find illegal inputs against it, and for any model one can find ways to destroy its unambiguous logic. For any point on the robustness network, fixing a base point allows comparison of the "relative strength" of models' robustness: if one model can handle N more exceptional situations than another (relative to that base point), it is considered N orders higher than the other. A model that can account for infinitely many more exceptions than other models is said to possess "infinite-order robustness." It should be noted that destroying a model can mean either making it throw out bizarre outputs, or imagining a Turing machine that writes the model and then interfering with that Turing machine's behavior - essentially a form of "second-order destruction."
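
A minimal sketch of this robustness network, purely illustrative (the class, node names, and exception counts below are my own assumptions, not part of the argument): models are nodes, completeness expansions are directed edges, and the relative order of a model is the number of exceptional situations accumulated along a path from a chosen base point.

```python
# Nodes are models, directed edges are completeness expansions, and each edge
# carries the number of exceptional situations the expansion newly handles.
from collections import defaultdict

class RobustnessNetwork:
    def __init__(self):
        self.edges = defaultdict(list)          # model -> [(successor, exceptions_added)]

    def expand(self, model, successor, exceptions_added):
        """Record a completeness expansion from `model` to `successor`."""
        self.edges[model].append((successor, exceptions_added))

    def order(self, base, model, _seen=None):
        """Exceptions handled by `model` relative to the base point, along one path."""
        if model == base:
            return 0
        _seen = _seen or set()
        for src, nexts in self.edges.items():
            for succ, added in nexts:
                if succ == model and src not in _seen:
                    sub = self.order(base, src, _seen | {src})
                    if sub is not None:
                        return sub + added
        return None                              # not reachable from the base point

net = RobustnessNetwork()
net.expand("world_0", "world_1", exceptions_added=3)
net.expand("world_1", "world_2", exceptions_added=2)
# world_2 is 5 orders above world_0; a model whose relative order grows without
# bound against every base point would have "infinite-order robustness".
print(net.order("world_0", "world_2"))           # -> 5
```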

This concept has guiding significance. I can offer a "strange" computational method: for a formal model, suppose the character set describing it is A. Now randomly swap two characters in the model's description, or add a character to a sentence - how does the model's behavior change with respect to the predetermined input-output set? Obviously many programs will crash, but the key point is how the crashes differ under different conditions. One insight is that, assuming theoretical worlds are constructed level by level - A constructs B, B constructs C - the crashes caused by destroying descriptor B should on average be fewer than those caused by destroying descriptor A. Another insight is that very short descriptions have relatively low robustness, while highly redundant, "unwieldy" model structures can obviously absorb such shocks and can even live inside "low-intensity ambiguity destruction." Robustness can be considered a concept dual to entropy: entropy creates surprise & destruction, while robustness creates immunity & resilience. From this we can see that the insights above are also corollaries of information-entropy theory.
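
To make the "strange" computational method concrete, here is a minimal sketch under toy assumptions (the sample program, the character pool, and the crash/wrong/intact classification are all illustrative): mutate the description by swapping two characters or inserting one, then check its behavior against the predetermined input-output set.

```python
# Character-level destruction of a model's description, observed against a fixed I/O set.
import random

SOURCE = "def f(x):\n    return x * 2 + 1\n"       # the model's description
IO_SET = [(0, 1), (3, 7), (10, 21)]                 # predetermined input-output pairs

def mutate(text, rng):
    chars = list(text)
    if rng.random() < 0.5:                          # swap two characters
        i, j = rng.randrange(len(chars)), rng.randrange(len(chars))
        chars[i], chars[j] = chars[j], chars[i]
    else:                                           # insert a random character
        chars.insert(rng.randrange(len(chars)), rng.choice("abc()+*10 "))
    return "".join(chars)

def behaviour(text):
    """Classify the mutated description as 'crash', 'wrong', or 'intact'."""
    env = {}
    try:
        exec(text, env)                             # the description may no longer parse
        ok = all(env["f"](x) == y for x, y in IO_SET)
        return "intact" if ok else "wrong"
    except Exception:
        return "crash"

rng = random.Random(0)
results = [behaviour(mutate(SOURCE, rng)) for _ in range(1000)]
print({k: results.count(k) for k in ("crash", "wrong", "intact")})
```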

The reason I introduce robustness at such length is that I want to put forward a viewpoint in advance: cyborgs must be complete (possess infinite-order robustness). This means that for any components (broadly speaking, any constituents) used to compose a cyborg, no destruction of them can cause descriptive errors in the cyborg's "mother world." Poetically speaking: can this mother world be considered some god who dominates everything? She must rather be the world itself that each subject gradually discovers on the path of understanding the world, her connotation so vast and boundless that everyone can discover themselves within her. A simple corollary of the cyborg-completeness viewpoint is: any destruction of the elements of the subject describing the cyborg will not bring ambiguity or program errors, but will only be rationalized as behavior internal to the world. For existing cyborg entities such as humans, this obviously holds true. The viewpoint can also be expressed as "the world has no bugs, humans have no bugs, and AI must also have no bugs." The world will not produce a banknote in a vacuum, find itself unable to explain the banknote's particle behavior, and crash. Therefore, biomimetics is necessary.

As mentioned above, none of humanity's unambiguous, finitely long models possesses infinite-order robustness. The only descriptive model that enables cyborgs to have infinite-order robustness is one with "multi-agent biomimetic" properties, because such a model can inherit the biomimetic nature universally possessed by the various existences in the world. How this implementation proceeds is discussed further below.

Regarding this implementation, I will revisit an old topic, subject transferability, and pour some new wine into old bottles. "Living" complex systems are obviously medium-dependent: constrained to certain substrates and possessing certain continuous structures. A vivid metaphor: a human is a large collection of cells and also a larger collection of particles. When a human observes a piece of solid matter, the observation is simultaneously a renormalization-group computation over numerous solid particles and the transmission of "protein-neural" photoelectric stimulation. Symbolic forms, by contrast, are medium-independent and can be established on any substrate. The "medium-dependence" of symbolic forms is actually the "medium-dependence" of the model designer: symbolic forms are the designer's extended cognition, machines of the designer's operational "thoughts," identical in this respect to physical machines. Therefore the "garbage in, garbage out" dependence of neural networks on data, and the strong dependence of simulation models on experts' prior rules, are both understandable.

However, a question remains: why are large-scale neural networks particularly effective on many simulation problems? My view is this: a simulation model has multiple ways of being expressed, and neural networks, unlike the structurally rigorous reasoning school, essentially use a computational graph to replace the simulation process, where the numerical differences between computational nodes, the computational directions, and the update functions simulate "reasoning structures." In terms of maximal expressive capability, reasoning programs built from functionally clear, step-by-step reasoning structures satisfy "all reasoning programs = all neural networks = all nonlinear functions," and very likely "all nonlinear functions = all Turing machines." In terms of concrete expressive capability, neural networks excel in flexibility, using numbers to simulate structures, where flexible and subtle numerical changes can represent extremely rich reasoning structures. You believe ANNs excel at biomimetics, which I support, but overall I think ANNs adopt only a "partially multi-agent biomimetic" approach. If ANNs are to simulate cyborgs, some major overhauls may still be necessary. For instance, ANNs' strong dependence on data indicates that they do not yet possess cyborg nature, since data are themselves medium-independent symbols, and ANNs still serve as "machines of thought" for "cyborg in & cyborg out."
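
As a small illustration of "using numbers to simulate structures" (my own sketch, assuming only NumPy; the weights are hand-picked rather than learned), here is a fixed two-layer computational graph whose numbers encode the discrete rule XOR, i.e., a numeric graph standing in for a step-by-step reasoning structure.

```python
# A fixed computational graph whose weights encode a discrete reasoning rule (XOR).
import numpy as np

step = lambda z: (z > 0).astype(float)              # hard threshold unit

W1 = np.array([[ 1.0,  1.0],                        # hidden unit 1 fires on (x1 OR x2)
               [-1.0, -1.0]])                       # hidden unit 2 fires on NOT(x1 AND x2)
b1 = np.array([-0.5, 1.5])
W2 = np.array([1.0, 1.0])                           # output fires only when both fire (AND)
b2 = -1.5

def reason(x):
    h = step(W1 @ x + b1)
    return step(W2 @ h + b2)

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, "->", reason(np.array(x, dtype=float)))   # reproduces XOR
```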

The previously discussed "humans have no bugs" holds at a specific observational level. From the logic of the entire world, humans have no bugs, but at the organizational level of cells, humans do have some bugs. Switching base points (actually switching the set of means used for interpretation and model construction) brings structurally different bugs. I want to name this phenomenon: why is medicine effective? / why is mechanics effective? / why are machines effective? - and to further discuss certain natural structures of effective theories, which also points to the possibility of communicating with cyborgs. Structurally different bugs are a kind of "theoretical phase-transition crack," and they spontaneously generate "abnormal intervention zones." Figuratively: the human body, an extremely large and complex system with an enormous number of cells, can have many of its diseases treated by drugs made of inorganic molecules, and a surgical knife can actually remove certain necrotic tissue and heal a person - this is itself a miracle. How do "abnormal zones" emerge? How are they generated? Do they have fixed structures? These are extremely important questions. Briefly: how do medium-independent forms emerge and distribute themselves within medium-dependent complex systems? Our communication with cyborg entities may depend on effective exploration of this question, since we obviously find it difficult to communicate "collectively" and "directly" with "society" or with "cells," yet we can communicate with "humans."

This letter ends here. Finally, let me restate the program for multi-agent biomimetics: adopt a multi-agent, biomimetics-based approach (which can be combined with certain high-level languages/algorithms) to construct implementable cyborg entity models and interact with them effectively.

I would like to hear you discuss your discoveries about the I Ching. What exactly is the I Ching?

Having read your views on the I Ching and recalled some of your earlier musings, it seems you may not yet have reached the essence of "metaphor" in subjective model design.

Symbolic computation requires input, and this relies on the medium-independence of symbols: certain symbols must likewise be introduced as input. The I Ching requires no input, which may point to the medium-dependence of the I Ching. Objects with "medium-dependence" are "non-formal" and very likely "alive." This inevitably raises a question: is the I Ching alive? In my view the answer should be "yes." As you said, the I Ching's images are "non-derivative" (the images themselves are subjects). If medium-independent symbolic forms ("derivative" objects, actually extensions of the model designer's subjectivity) are used to "utilize & understand" it, things become dangerous and chaotic. The model designer cannot "instrumentalize-symbolize" the "images" to make them derivable and thus "symbolically understandable." Images are "overlaps" of numerous symbols. One understanding, from the horizon of infinite-order robustness, is this: the formulae with clear intentions that symbol designers manipulate are destroyed in a "disorderly" way, and the symbols then aggregate according to certain peculiar interaction rules. For example, if we stipulate that adjacent micro-transistors in a large circuit will "actively" interact & overlap into new self-organizations, then the formally strict and clear symbol tables and formal computations that the model designer planned to add will be dissolved, becoming chaotic and meaningless. Is this how images appear to designers?
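
A toy rendering of this thought experiment (entirely illustrative: the merge rule and the planned symbol string are arbitrary choices): start from a formally planned symbol string, let adjacent cells "actively" overlap into merged symbols, and watch the designer's layout dissolve.

```python
# Adjacent cells "actively" overlap into new symbols, dissolving the planned layout.
import random

def overlap(a, b):
    """Merge two symbols into a new 'self-organized' symbol (arbitrary rule)."""
    return chr((ord(a) + ord(b)) % 26 + ord("a"))

def step(cells, rng):
    i = rng.randrange(len(cells) - 1)                 # pick two adjacent cells
    merged = overlap(cells[i], cells[i + 1])
    return cells[:i] + [merged, merged] + cells[i + 2:]

rng = random.Random(1)
cells = list("ifthenelse")                            # the designer's planned symbol string
print("planned  :", "".join(cells))
for _ in range(20):
    cells = step(cells, rng)
print("dissolved:", "".join(cells))
```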

I can give a not-so-distant example: the Tierra silicon-based life model. Like most model designers, the author extended some of his prior concepts into the model, designing time-slice preemption to simulate limited resources, designing the harvester (the reaper) to simulate the life and death of species, designing limited memory length to simulate limited space... These made Tierra more "symbolically interpretable," but the most important design, the one introduced to imitate genetic variation, was: "randomly flip one bit of the program (i.e., the species) during each reproduction." Concretely: a self-replicating program is a species, each self-replication is a reproduction, and "self-replication" itself consists of a series of assembly code (i.e., machine language). The brilliance of this design lies in introducing incomprehensible and chaotic "flipping." The model is expressed in machine language, then in high-level languages, and further in additional environments, and yet it allows "destruction" of the machine language itself. Can this be understood as artificially introducing some incomplete "image"?
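
A toy sketch in the spirit of the description above, not Tierra's actual implementation (the soup size, ancestor genome, and scheduling are stand-ins): each "species" is a byte string, each reproduction copies it while flipping one random bit, and a reaper removes the oldest program when memory is full.

```python
# Tierra-flavoured toy: self-copying genomes, one bit flip per reproduction, a reaper.
import random

SOUP_CAPACITY = 64                                    # limited memory
ANCESTOR = bytes([0b10101010] * 8)                    # stand-in for the ancestor's code

def reproduce(genome, rng):
    """Copy the genome, flipping one random bit of the copy."""
    child = bytearray(genome)
    i = rng.randrange(len(child) * 8)
    child[i // 8] ^= 1 << (i % 8)
    return bytes(child)

def run(steps=200, seed=0):
    rng = random.Random(seed)
    soup = [ANCESTOR]                                 # oldest first
    for _ in range(steps):
        parent = rng.choice(soup)                     # time-slicer stand-in: random scheduling
        soup.append(reproduce(parent, rng))
        if len(soup) > SOUP_CAPACITY:
            soup.pop(0)                               # the reaper removes the oldest program
    return soup

soup = run()
print(len(set(soup)), "distinct genomes out of", len(soup))
```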

I look forward to further investigation of the I Ching. How should true biomimetics be designed? The answer to this question may become increasingly clear...