Discussion on Infinite-Order Robustness, Subjectivity Transfer, Multi-Body Bionics, Medical Effectiveness, and Subjectivity Modeling
Email 1 — Me
“On the idea of infinite-order robustness and a further discussion of subjectivity transfer; a brief program of multi-body bionics & the effectiveness of medicine?”
Recently I reconstructed a concept: robustness. The original intent of the word is the capacity of a system to recover normal performance when it receives unexpected input. “Unexpected” actually points to what lies outside the model designer’s field of view. The model designer hypothesizes some unforeseen situations and designs specific behaviors for the model to cope with those unexpected inputs.
But once that design is completed, the model exhibits a so-called resistance toward inputs that otherwise would have caused “disorder.” All of this is the designer’s own manipulation; the “unexpected” has always been there.
The designer defines an acceptable input set A, and an emergency contingency set B for unexpected situations, but there is also a set C—never thought of, not even detectable in terms of definition—that is a limitation of the horizon itself. The model has no horizon of its own and exists entirely by borrowing the designer’s horizon. This is “non-transferability,” which I also call “medium-independence,” like a symbolic model just pieced together from symbols. It does not care whether particles, cells, or humans serve as its object set; it possesses none of the subjectivity of any of these existents.
If we regard robustness as a dynamic process, then every time the designer perceives the “non-robust present state” and the “robustness expectation,” that is a monotonic injection of the designer’s own subjectivity into the model. We can view designers on different world-lines as a stacking of different models: suppose now there is a symbol set A (objects of different theoretical manipulations), and the language B it generates (syntax of the theory), used to describe some models C_i (specific model cases); all of these describe the designer D(i), and these in turn are all constructed by designer D(i–1).
These different theoretical worlds stack into one layered (beginningless, endless) column. The driving force for transformation between worlds is the completeness-expansion that elevates robustness: assume the model of a given world encounters “illegal inputs / unintelligible inputs,” forcing an expansion. Connecting the paths traced by these completeness-expansions weaves a net. A simple insight: any node on this net has at least one ‘source’ and at least one ‘target.’ Another (not necessarily correct) insight: this net can represent any formally interpretable (explainable and unambiguous), finite-length (constructible in reality) formal model. That is, for any model you can find illegal inputs; for any model you can find a way to destroy its unambiguous logic.
Fixing a base-point on the robustness net, you can compare which model is stronger or weaker in robustness. If, relative to some base-point, one model adds handling for N more classes of unexpected situations than another, we say it is higher by N “orders.” Infinite-order robustness means that, compared to some set of other models, a model covers infinitely many more unexpected cases; such a model has “infinite-order robustness.”
Note: Destroying a model can mean tossing it bizarre inputs, or hypothetically taking the Turing machine that writes the model and intervening in its behavior—essentially a “second-order destruction.”
This concept is instructive. I can give a “strange” calculation method: for a formal model described by a character set A, now randomly swap two characters used in describing the model, or insert a character into some statement—what happens to the model’s behavior over the intended input–output set?
Obviously many programs collapse. The point is how they collapse under different conditions. One insight: assuming theoretical worlds are constructed hierarchically (A builds B, B builds C), breaking the describer B should on average cause less collapse than breaking the describer A. Another: a very short description length implies lower robustness, while high-redundancy, “long-tailed” model structures can clearly absorb robustness shocks—indeed they already live amid “low-intensity ambiguous damage.”
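The “strange” calculation method above can be run directly. Here is a minimal sketch, with my own assumptions: the model description is Python source defining a function named `model`, the perturbation is a random two-character swap, and a raised exception counts as collapse.

```python
import random

def run(src: str, x):
    """Execute a model description on one input (hypothetical harness)."""
    env = {}
    exec(src, env)                     # raises if the description is broken
    return env["model"](x)

def mutate_swap(src: str) -> str:
    """Randomly swap two characters of the description."""
    i, j = random.sample(range(len(src)), 2)
    chars = list(src)
    chars[i], chars[j] = chars[j], chars[i]
    return "".join(chars)

def probe(src: str, inputs, trials: int = 200):
    """Tally outcomes of character-level damage over the intended inputs."""
    baseline = [run(src, x) for x in inputs]
    tally = {"collapse": 0, "changed": 0, "intact": 0}
    for _ in range(trials):
        mutated = mutate_swap(src)
        try:
            outputs = [run(mutated, x) for x in inputs]
        except Exception:
            tally["collapse"] += 1     # syntax or runtime breakdown
            continue
        tally["changed" if outputs != baseline else "intact"] += 1
    return tally
```

The ratio of `collapse` to `intact` over many trials is one crude numeric face of the robustness-order comparison: a longer, more redundant description should shift mass from `collapse` toward `intact`.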
Robustness can be viewed as the dual of entropy: entropy creates surprise & damage while robustness creates immunity & strength—so the above insights are also a corollary of information-entropy thinking.
Spending so many words on robustness is because I want to front-load a viewpoint: a cyborg must be complete (infinitely robust). This means that for every component used to build the cyborg (all generalized constituents), no destruction can cause a descriptive error in the cyborg’s “mother world.” Poetically: could that mother world be regarded as some God that governs all? Yet she is actually the world itself, discovered step by step by each subject in the journey of knowing the world; her intension is so vast that everyone can find themselves in her.
A simple corollary of the cyborg completeness view: any destruction of the meta-components describing the cyborg’s subjectivity produces no ambiguity or program error; it is only rationalized as internal behavior of the world. For existing cyborg bodies like humans this evidently holds. The viewpoint can also be phrased: “The world has no BUG. Humanity has no BUG. AI must also have no BUG.” A world cannot spontaneously pop a banknote into a vacuum; a world unable to explain the particle behavior of that note would collapse.
Therefore bionics is necessary. I already noted no human-constructed model of finite, unambiguous length has infinite-order robustness. The only descriptive model granting a cyborg infinite-order robustness is one of “multi-body bionic” character, because it can inherit the universal bionic nature present across existents in the world. How that implementation proceeds I will discuss below.
On such implementation I revisit an old topic: subjectivity transfer, and chat about some old wine in new bottles.
A “living” complex system is evidently medium-dependent: constrained on some base with some continuous structure. A vivid metaphor: a human is a mass of cells and a larger mass of particles; when a human views a piece of solid matter it is simultaneously the renormalization computation of massive solid particles and the transmission of “protein–nerve” photoelectric stimulations.
Symbolic form is medium-independent and can be built on any base; the “medium-dependence” of symbols is actually the “medium-dependence” of the model designer. Symbolic form is extended cognition of the designer—a machine of “ideas” operated by the designer—equivalent in that respect to a physical machine.
Thus the strong dependence of neural networks on data (“garbage in, garbage out”), and the strong dependence of simulation models on expert-inducted priors, are understandable.
Yet a question: why are large-scale neural networks especially effective in many simulation tasks? One view: simulation models have multiple forms of expression, and neural networks, unlike rigid schools of inference, use a computation graph in place of an explicit simulation process. Differences in node values, computation directions, and update functions simulate a “reasoning structure.” At maximal expressive capacity, a reasoning program composed of clear-function, step-distinct reasoning structures may satisfy “all reasoning programs = all neural networks = all nonlinear functions,” and likely “all nonlinear functions = all Turing machines.”
In concrete expressive capacity, neural networks win in flexibility—using numbers to simulate structure; delicate numeric perturbations can represent extremely rich reasoning structures. You think ANN wins by bionics—I support that—but overall I think ANN follows a “partially multi-body bionic” idea; for ANN to simulate a cyborg, some major blood-transfusion changes may still be necessary.
For instance ANN’s strong data dependence still shows it lacks cyborg-ness: after all, data are also medium-independent symbols. ANN remains an “idea machine” of “cyborg in & cyborg out.”
Previously we said “human has no BUG,” but that is based on a specific observation level. From the logic of the whole world, humans have no BUG; at the cellular organizational level, humans do have some BUGs. Switching base-points (which is switching sets of explanatory & constructive methods) yields structurally different BUGs. I want to name this phenomenon: “Why is medicine effective? / Why is mechanics effective? / Why is machinery effective?” and further discuss a certain natural structure of effective theories—which also raises the possibility of communication with a cyborg. Structurally different BUGs are some kind of “theoretical phase-transition fissures,” producing “zones of abnormal intervention,” spontaneously arising.
The metaphor, vividly: our human body as an immense, cell-count-magnitude complex system—many diseases therein can be treated by inorganic-molecule-made drugs; a scalpel can excise necrotic tissue for recovery—this is itself a miracle.
How do “abnormal zones” arise? Do they have fixed structure? These are extremely important questions. Briefly: how do medium-independent forms emerge and distribute inside a medium-dependent complex system? Our communication with a cyborg body may rely on effective exploration of this question—after all, we obviously struggle to communicate “commonly” with “society” or with “cells.” But we can communicate with “humans.”
Ending here for this letter. Finally, a brief program for multi-body bionics: adopt a multi-agent bionic-centered approach (optionally with some higher-level languages / algorithms) to construct an implementable cyborg body model and interact effectively.
I would like to hear you talk about your discoveries in the Yijing. What is the Yijing exactly?
Email 2 — Correspondent
“Reply to ‘On the idea of infinite-order robustness… multi-body bionics & medical effectiveness?’”
Your article is excellent!
It gave me a lot of inspiration!
Because of my work I could not read it immediately. Only today did I carve out time for an overview.
I have to say your piece deserves careful reading; I believe I will benefit greatly.
One line of thought made my eyes light up:
That is the logicalization of “spillover” into “non-error” inside “infinite robustness”—what you call “no BUG.”
Error itself has coercive determinacy; yet your introduced “non-error” is a statement rather than a verdict—a prediction rather than a derivation.
This description, plus your extension between the Yi and the “alive,” brought into my mind the relation between “non-error” and “image (象).”
Maybe image is exactly a tool-like convergence of “the world has no bug”?
I don’t know—these ideas just appeared in my head; I need time to “ruminate.”
If “image” is “proof that something is alive,” i.e., the signal of “non-error,” then the “introduction” of images into various subjectivity models may be a positive pathway for subjectivity modeling.
But there is indeed a problem: if we use “introducing” only as a description of an “image-of-introduction behavior,” rather than as “semantics,” we can activate the possibility of introducing image into the model—then what kind of behavior (in the Yi this appears as image deduction) can serve as the logic operable upon “image”?
I have a “brute-force enumeration” line of thought, perhaps resonant with the framework of infinite robustness.
As an “external” imaging, if enumeration can still “halt,” then “computation” based on hexagram images gains convergent efficacy in the “superposed state” of images.
The ancients actually did this: from the Spring and Autumn period to the Western Han, when external matching symbols—the Heavenly Stems and Earthly Branches, and the Five Phases—were introduced into the hexagrams in forms like nàjiǎ, the whole practice of divination became highly formal-logical.
The determinacy of formal logic was realized, fixed into an unchanging “auspicious / inauspicious.” That faction of fangshi strove to use operable logical forms to introduce “image calculation” into the model.
For example, Eight Pillars fate calculation, as fixed in the Ming dynasty, is no longer “divining”—to realize a determinate computational logic, the Eight Pillars can only, very logically, qualify “fate” from the moment of birth.
Your article reminded me why these practices—though much poorer in effect than the original yarrow-stalk divination, even degenerating purely into “superstition”—can still function as personality models: it is because they attempt to introduce image into an architecture capable of non-error.
Yet the technology of the day could not support such ambition.
Can today’s accumulated technology change this phenomenon?
Can we, relying on some image computation over the 64 hexagrams, achieve a paradoxical goal: external in decision (unavoidable if we are the ones operating the model) yet self-sufficient in operation (which subjectivity modeling must achieve)? Can the concept of “image” realize this?
Is “image” precisely the non-metaphorical practical architecture of infinite robustness?
This truly deserves deep study. That’s all for now.
Email 3 — Me
“Posing Two Questions to You”
Data are generally formally defined and operated—how to realize acquisition of “experiential data”?
Apart from myself, are other people “experience”? Can a cyborg serve as a medium for others and me to communicate? Can a cyborg become my limbs and co-exist with me?
These are a few of my confusion points—hope you can show your view.
Email 4 — Correspondent
“Brief Answer”
I intended to write a more detailed reply, but recently I really cannot carve out large blocks of time, so let me record some brief thoughts first.
First, formally defining data is actually an “over-understanding” of form.
I mean: from perspectives of “engineering complexity” and “simulation,” form has never been something form can fully control.
So-called “fixing a bug,” in complex engineering, is often not “discover the error (in form)—fix the error,” but only a fixing of the error’s “phenomenon.”
That is: in systems truly of effective utility (e.g., building something like a particle collider), form is often not “understandable.”
It is just a product of a series of “form overlaps.” In a local pocket, form defines its output (data), but in a complex system there is no direct connection between form and data.
Between them a link is added: simulation.
Simulation can wholly be viewed as action of form. When we drive form’s simulation via “operation,” the “automatic” understandability of form is only local; globally it may well be the system’s own initiative (experience).
Subjectivity’s expression in tool-experience is not “dynamic,” but “reflexive.”
The tool’s initiative “reflects” back to the operator; the operator perceives their own tool-ness; only then does subject spill over.
Thus the experiential nature of data likely cannot be reduced into “experiential data.”
An engineering path to practice tool-experientialization might be:
Can data, as one link in a loop, serve as contextual basis supporting the system’s simulation?
Example: take the Traditional Chinese Medicine logic of a “choppy (涩)” pulse I described days ago.
If data—serving as “raw data feed” for an ANN simulating the ‘judgment of a choppy pulse’—are provided to the system, what then?
Thus form is driven by data, and “data as collection → grafting → perhaps cyborg” is one line of thought.
In that line, the form produced by ANN exists as the action of simulation.
Of course this is not the only line, because ANN is merely one simulator. The simulation → action → expression engineering logic: might it allow more, better “simulators”?
That needs exploration.
Second point: if we must use a form to define experience so that in discussion there is a spillover effect, then I will say experience ranks prior to grammatical persons (I, you, it).
That is, if we do a “tracing back,” you–I–it are likely outputs, not inputs.
More importantly: under output logic, the subject we define is perhaps only the “chronology” (纪年) of subjectivity.
So what a cyborg can do is—shake the chronological law.
Your confusion is actually an awareness: i.e., will your hand, foot, and “I” be “reset” in cyborg logic?
My answer: perhaps once we perceive the existence of chronology, change has already manifested.
From PTSD to dissociative identity: mental illnesses differ from neurological illnesses in that mental illness is a product of a “subject-chronology rule.”
Foucault has done extensive research on this point.
Thus regarding communication, we can make a metaphor: communication protocols may well undergo earth-shaking change.
Human history perhaps only saw such a protocol change transitioning from mythic age to post-mythic.
This is also why I am so intensely focused on the cyborg. That’s all.
Email 5 — Me
“Let the Model Enter the World — A Scheme to Make Intelligence Come Alive”
ANN is a successful bionic model, but arbitrary manipulation of it limits ANN in general intelligence.
What is “arbitrary manipulation”?
Regardless of internal bionic details, one can treat “symbolic reasoning models or general machine learning models” and “arbitrarily manipulable ANN” as one category over a broad range. Arbitrary manipulation is an intuitive umbrella for a class of behaviors: these behaviors exist as hyperparameters acting on neural network operation (parameters are data inputs; hyperparameters are choices of different modes of the model processing data; different hyperparameters correspond to different models).
Evidently if arbitrary manipulation is allowed, any manipulated “base” model—no matter how chaotic its performance—can always extract some ‘ordered’ or ‘regular’ features over certain scales or variables. (Those features, whether constructed or declared, obviously exist.) Then on the basis of these features one can easily build a symbolic system, producing a higher-order model relative to the “base.”
Higher-order models tend to be formal—even axiomatized. By this they produce “formal commonality.” Knowledge of unrelated background higher-order models can interoperate; this is “higher-level arbitrary manipulation.” It is “higher-level” because these manipulations act on highly formalized symbolic objects whose entire legitimacy is automatically proven by the framework of formal logic (or equivalent power but different philosophy: Turing machines, Gödel coding).
This view is instructive: it means no matter the system or object of study, the emergence of formal / symbolic models depends only on the method used by the designer; the appearance and systematization of formal / symbolic models require replication and co-use.
In practice, arbitrary manipulation is fine—whether for self-entertainment or gleaning hints. The key is that designers manipulate models for “optimization.”
“Optimization” is purposeful; those purposes land on observable metrics. Metrics are plainly words with a formal / symbolic sentiment: metrics, higher metrics, metric systems—almost analogous presences of “manipulation” in their context.
To optimize, designers invent metrics and manipulate hyperparameters based on metric observations. This whole process is a “dance of symbols,” not necessarily having a clear or even faint connection to the original model’s needs.
From this angle, giving up seeing the neural net as base model is also fine; a neural net can be compiled into the set of “fully formal models” thus becoming a higher-order model. In some future base model (from the perspective of formal-symbolic sentiment these bases are broader, more universal, more general) the neural network becomes merely a hyperparameter—a special case—a symmetry-breaking. Compared to that more “general” thing it is less “freely appearing.”
This process, inspired by incompleteness theorems, should be inexhaustible. Each time we design a new base—even if it encompasses modes of old models, making most old models hyperparameters—the model, for goals of induction, uses a deductive technique. This is actually a “going out of bounds.” Deduction is double-edged: the new base not only has the power to surpass merely inductively reformulating old models and auto-deriving new models; each time it does so it also auto-generates some un-derivable models.
On those un-derivable models: we could design a larger base to re-discuss the un-derivable models of the original base (their properties, counts relative to derivable models). But the new base still inevitably produces un-derivable models. Using a programming-language metaphor: functionality gets encapsulated layer by layer, yet BUGs always exist in sufficiently open environments.
Ultimately this is a fundamental property of “form–deduction.” My personal understanding: un-derivable models actually do not exist (“Hilbert’s program” illuminations); the un-derivable model is itself posited by the “form–deduction” method; setting such a supposition only temporarily bounds the scope of model applicability and ensures logical self-consistency.
After all, different bases have their different “un-derivable models”; they can describe the special-case “un-derivable models” of their own; like uncomputable numbers can be defined and some constructed; meta-mathematics can describe phenomena like incompleteness theorems.
But there is an intuitive unreachability: you cannot use form–deduction to describe models of an unreachable base or an un-derivable model. This constraint must be very strong—so strong that once we operate on it with any logic-bearing language (natural or formal—max expressiveness is equivalent here), it vanishes. This is essentially “The name that can be named is not the constant name”; logically operational and formalized manipulation is illegal.
Above I introduced my designed concept of “arbitrary manipulation” and the issues it raises.
Why do I think “arbitrary manipulation” constrains ANN in general intelligence? In one sentence: the world must be complete, not chaotic. What is complete and non-chaotic?
Vividly: we, as already successfully emerged general intelligence—as self-organized, emergent mega-complexity—imagine if inside our bodies, from nowhere, endless flocculent forms suddenly appeared; or some blood vessels twisted into buckyball-like knots; or marrow and lymph aligned into Mayan glyphs—all of it “optimized” via a distribution deviation in the left–right symmetry of the rib cage…
These depictions are plainly unintelligible (“unintelligible” in the sense of li, the ‘logical principle’ of form–deduction, which renders such chaotic concepts non-existent in strict reasoning: so nebulous a thing cannot be verified—so here I am fantasizing, not defining). Their value lies in revealing that a model designer’s “arbitrary manipulation” optimization can massively mismatch a system’s internal self-autopoietic logic.
A complete model exhibits none of the above phenomena, and never will. Any model for which those metaphors might apply cannot be complete. How to make a model complete? First, we cannot prove we can or did achieve completeness because that would again appeal to: does manipulation of the model conflict with the ethereal “internal logic of the system”?
Thus discussing how to design completeness now is purely intuitive; after finishing design, then discuss the possibility of precise description.
As an aside: “emergence” is a naming of that ethereal “internal logic.” But emergence is more a hint than a formal answer: a reductive summary and isolation of “un-derivable models”—merely a hint—signaling we cannot solve it with traditional reductionism. Today the overuse of “emergence” magnifies this misdirection.
First outline a classical type of model / methodology that perhaps solves the “completeness problem.” I call these “multi-body bionic models.” This approach requires thoroughly deconstructing our reality to manage; I believe (by conviction) it is inspiring and feasible, yet I still lack a clear design convertible to code. So solving the “completeness problem” with a “multi-body bionic model” awaits future clarification.
(Added section) But one heuristic view: solving completeness requires organizing the very operations that generate completeness problems. Since completeness problems arise from the BUGs and incompleteness produced by the destruction of semantic structure, should we perhaps use destruction—a non-semantic operation whose meaning is at least clearer—to treat models?
Email 6 — Me
“Operationally Feasible Conception of a Subjectivity Model”
Earlier I used the metaphor “no bugs.” Now I find this is not merely a line of thought—it must become an obligatory path, perhaps the only one.
Given that a massive overlap of images (象) is the practice of a subjectivity model, that model will be anti-bug. This means any arbitrary perturbation we make to the base model underlying (writing / implementing) this overlapping-image model will not trigger a model “fault.”
Re-mention what “base model” means (as in the “no bug” metaphor): any model, the moment it’s written, is detached from the world. Because it is general, it contains no information of the world that produced it—cannot contain rich info like inter-grammatical-person relations in subject chronology. Here we default “base model” to the everyday notion of a higher-level programming language inside a standard operating interface.
By “perturbation” of the base model: generally—writing the model atop the base model; this writing is dominated by the designer’s prior semantic structure and “extends cognition,” forming outputs the designer expects.
Perturbation means negating the structure of the base model: arbitrary, non-semantic interventions and chaotic assignments to the base model’s various defaults—e.g., changing an if-else branch into a return or a while loop (do not examine this metaphor with logicized ideals; it only illustrates the dissolution of the semantics the designer intended to “extend cognition” into).
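That kind of intervention can itself be sketched, taking the metaphor literally rather than logically. A minimal sketch, with my own assumptions: the “base model” is source text, and the perturbation blindly swaps one control-flow keyword for another, treating them as raw tokens stripped of semantics (the keyword list and single-occurrence policy are hypothetical choices).

```python
import random

# Control-flow keywords treated as interchangeable raw tokens, deliberately
# ignoring what they mean (the list itself is a hypothetical choice).
KEYWORDS = ["if", "else", "while", "return"]

def desemanticize(src: str, rng: random.Random) -> str:
    """Blindly swap one control keyword for another in the source text."""
    present = [k for k in KEYWORDS if k in src]
    if not present:
        return src                      # nothing to negate
    old = rng.choice(present)
    new = rng.choice([k for k in KEYWORDS if k != old])
    return src.replace(old, new, 1)     # leftmost occurrence only
```

The point of the sketch is the same as the metaphor’s: nothing in it consults what the designer meant the branch to do; the intervention is defined entirely below semantics.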
Next I offer a conception: since a subjectivity model (in the sense of an object-model) has no inputs or outputs, we must explicitly refuse to pre-design semantics akin to “input” or “output.” In practice: perform no operations on the written subjectivity model! Do not use the base model to write semantic data structures of the “action sequence” or “dataset” type. The sole permitted action is to manufacture noise.
Having already posited the existence of anti-bug image models, we do not argue their reasonableness or existence (if arguing “why anti-bug?” we loop back into needing to define bug). We only communicate by manufacturing noise (an indescribable noise—lacking semantics, not even probabilistic uncertainty can describe it). This is roughly the physical process where a cyborg communication protocol occurs.
The subject’s starting to manufacture noise = end of the subject’s extend-cognition = the internal world’s starting point of the subjectivity model = genesis of an inner world. In some sense this noise is “unreasonably making a fuss.” But my design is all in negating the designer’s extension of cognition into the model—protecting the model’s integrity as a self-autopoietic internal worldview.
You habitually take ANN as the example simulator, but I prefer another term: the multi-body bionic tool. Current ANN practice is full of designer cognitive extension and does not emphasize “overlapping states.” Multi-body bionics emphasizes the multi-body and the bionic. The former is the method—grasping the connotation of the overlapping state: masses of topologically local units, guided by semantics-stripped data, transform into a globally experiential nature. The latter is the goal.
Previously I drafted something about this idea in an email that only began; now I merge it here—main parts pasted below:
I habitually define ANN more broadly, as a “multi-body bionic tool.” Requirements: 1) Multi-bodyness: allow strict local form-definition, while the whole remains large-scale and global—no single strict form for the whole. 2) Bionicity: function and existence are separated—we are not functionally transforming input into output but “observing—simulating”: feed in input data expressing a specific scenario, obtain output expressing the system tableau. It must be emphasized that no “functional” intent or ingredient is allowed in a multi-body bionic tool (e.g., no closed-loop design that adjusts structure per output). If such an operation is added, the larger composite containing the functional design is not multi-body bionic; the smaller, stripped one is.
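One toy reading of requirement 1), chosen by me as an illustration rather than drawn from these letters: a ring of units, each with a strictly defined local rule, where no function over the whole ring is ever written down—the whole is only observed.

```python
def step(grid):
    """One update of a ring of local units.

    Each unit applies a strict local rule (majority of its 3-cell
    neighbourhood). No expression for the whole grid appears anywhere;
    the global tableau is only read off after the fact."""
    n = len(grid)
    return [
        1 if grid[(i - 1) % n] + grid[i] + grid[(i + 1) % n] >= 2 else 0
        for i in range(n)
    ]

def observe(grid, steps: int = 10):
    """'Observe—simulate': feed in a scenario, read out the tableau."""
    for _ in range(steps):
        grid = step(grid)
    return grid
```

`observe` deliberately has no loss, no target, no closed-loop adjustment—per requirement 2), adding any of those would make the composite non-multi-body-bionic.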
So: a BUG-type perturbation of a multi-body bionic tool base model is the prose of the subjectivity model and its communication protocol? Overlapped images are evidently a kind of multi-body bionic tool. In the living flow of images, are we communicating via destructive BUG-type perturbations?
Expect more discussion~
Email 7 — Me
“Further Practical Implementation of Subjectivity Modeling (Continuing the Previous Vision)”
Continuing the prior theme, I now attempt to express my thoughts in formal and operational terms.
Define the base model Base as a structure with four designs: B = ⟨I/P, O, G, S⟩, where S is the alphabet and G is a pre-set keyword grammar, acting as a Turing machine / programming language that organizes symbols into semantic keywords and syntax. Since the base model must be writable on a computer, this design is necessary but not core.
I/P denotes program input or the program itself; O denotes program output. All content in I/P, O, G is a combination of letters in S.
This is a standard formal model—aside from treating the program input and the program itself as one and the same. A standard model alone would make the discussion banal, so we must introduce some singular designs.
Define ρ as a perturbation of the Base: replace some letter(s) in I (input) or P (program) with other letter(s). If the output O′ after perturbation ρ differs from O, the pair ⟨I/P, ρ⟩ is called an exception E. For Base, collect all exceptions E to form an open system Σ_E.
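These definitions admit a minimal executable reading, under assumptions of my own: ρ is restricted to single-letter substitution, and G is stood in for by any function from the I/P string to an output (a raised error counting as a changed output).

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class Rho:
    """Perturbation ρ: replace the letter at `pos` in I/P with `letter`."""
    pos: int
    letter: str

def apply_rho(ip: str, rho: Rho) -> str:
    return ip[:rho.pos] + rho.letter + ip[rho.pos + 1:]

def exceptions(ip: str, grammar: Callable[[str], Any], alphabet: str):
    """Collect Σ_E: every ⟨I/P, ρ⟩ whose perturbed output O′ differs from O."""
    def out(s):
        try:
            return grammar(s)        # G as a stand-in evaluator
        except Exception:
            return None              # an error is itself a changed output
    o = out(ip)
    return [
        (ip, Rho(pos, letter))
        for pos in range(len(ip))
        for letter in alphabet
        if letter != ip[pos] and out(apply_rho(ip, Rho(pos, letter))) != o
    ]
```

Nothing here assigns a meaning to an exception; Σ_E is defined purely by output difference, matching the “non-meaning” framing of the next paragraph.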
I use the wording “open system” because these exceptions are defined in “non-meaning”: they attend only to the collapse of semantic structure, or to outputs failing expectation. The coinage is impulsive, but I think it fits “open” more essentially than the traditional “open system” (one exchanging matter and energy)—purely on the axis of “open” versus “closed.”
Discuss some properties of perturbation ρ.
Above there was an idea unifying input I and program P into one category. To explain it, I introduce the completeness expansion μ, the counterpart of perturbation: replace (modify, add, or substitute) some letter(s) in I or P so as to invalidate some exception(s) E.
This design simulates development of formal theory systems, large engineering systems, etc.: always encountering new problems and always seeking to solve more.
Regarding the I/P view: the unification of I and P rests on the view that input complexity (the scale of I) and structural complexity (the scale of P) are interchangeable; from the perturbation-method perspective the two differ little. (We leave this unverified for now.)
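In the same toy setting (single-letter substitutions, with `grammar` standing in for G—both my assumptions, not part of the letters), μ becomes measurable: an expansion’s gain is the drop in |Σ_E|, which also gives the earlier “order” comparison a number.

```python
def sigma_e_size(ip: str, grammar, alphabet: str) -> int:
    """|Σ_E| under single-letter substitutions of I/P."""
    def out(s):
        try:
            return grammar(s)
        except Exception:
            return None
    o = out(ip)
    return sum(
        1
        for pos in range(len(ip))
        for letter in alphabet
        if letter != ip[pos]
        and out(ip[:pos] + letter + ip[pos + 1:]) != o
    )

def mu_gain(ip_old: str, ip_new: str, grammar, alphabet: str) -> int:
    """How many exceptions the expansion μ: ip_old → ip_new invalidates."""
    return (sigma_e_size(ip_old, grammar, alphabet)
            - sigma_e_size(ip_new, grammar, alphabet))
```

Note that the rewrite may touch either I or P indifferently: `mu_gain` never distinguishes which letters belong to input and which to program, which is the interchangeability claim in operational form.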
Perturbation ρ is to simulate communication with a realized cyborg in a formal language. Why design such a model? Suppose a cyborg model written in a formal language is realized: what would it be? For instance, a sequence of symbols traced on bamboo slips, yarrow stalks / turtle shell / bamboo sticks arranged to trace it, or a sequence of state changes inside circuits on a computer—within these appears a truly living object.
Earlier I questioned whether data had already been formalized—that stems from a skepticism.
E.g., under a reductionist physical picture of reality: world = huge number of particles, each with diverse behaviors; atop them clusters of special behavior evolve to become “life,” “society,” up to our appearance. Data—though collected by our tools—are defined by our forms; this is a dimensional reduction from infinite information to no information.
If a form is understood by no one (whether in its content complexity (program P) or its representational tools (input I)), it is just form: this is data’s “defect”—the traditionally understood, inherent information-loss defect of data.
If we abandon discrete symbolic circuits and replace them with some continuous components of nature, starting from a first-principles bionic view, can we avoid the information-loss defect? The answer, if not an outright “no,” is “nearly impossible,” because the world is also subjectively constructed. Our lived world (for example, its vision-dominant sense of motion) does not confer its natural subjectivity (our worldview) on those continuous components unless they are built up from biological cells all the way to our spiritual dimension.
This line is already common in bionic engineering: use formulas and simulation to reveal the “functional cause” underlying some “representation” of an organism, then use some continuous natural object (not a symbolic simulation inside a computer) to build an object with the bionic function.
My view: constructing via discrete symbols is necessary. Recalling the earlier metaphor “the Yijing is alive,” we must reconsider the reasonableness of “verification.” For symbolic systems, functionalization (i.e., meeting a designer's need) forms functional structure, and “verification” is the action of this phenomenon: a problem-oriented completeness expansion μ. To verify is to solve problems.
A symbolically constructed world: its first (and perhaps only) property should be full completeness, with verification invalid. That is, the base model Base admits no completeness expansion μ. This is an operational definition. Two pictures satisfy it. (1) Nothingness: pure nothing. As long as Base is totally empty, there is no problem to solve; this is “the world itself,” in which no symbol has yet emerged and which therefore inherits all the world's information. (2) Multiplicity: extreme multiplicity, in which no exception E can be invalidated; no substitution of letters in I or P can nullify any perturbation ρ. This property introduces a more interesting fact: no matter what is used to write the model (scripture, yarrow stalks, electric potentials: grammar G and input I), no matter what decay or destruction arises, and no matter what misunderstanding, or method based on that misunderstanding (program P), appears, the inner world of the model is self-existent.
This is an inner world: though alive, it is not the same world as ours. It should be the cyborg, but it is not necessarily communicable; it has subjectivity (an inner worldview) but not necessarily an understandable grammatical subject. How do we make it communicable and understandable?
My stipulation of “no completeness expansion μ” bans one pathway: starting from functional understanding and, step by step, making the world we verify look more systematized and communicable relative to us.
The answer comes from the perturbation ρ: in the new context, where completeness expansion has been shaved off, its place is taken by extremely rich perturbations ρ and peculiar (non-eliminable) exceptions E. I think this is the starting point for designing cyborg communication protocols.
Let me introduce a heuristic idea: reconstructing the efficacy of “data.” Open with a reflection: irrational numbers such as π and e have peculiar properties. Their expansions are infinite and non-repeating, and they are thought to contain all the information in the universe; any datum, once encoded in base 10, appears somewhere in π.
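It is worth noting that “any datum appears somewhere in π” presupposes that π is normal, which is conjectured but unproven; empirically, though, short digit strings do show up early. A minimal sketch (helper names are my own; Machin's formula with Python's decimal module) that computes digits of π and searches them for a “datum”:

```python
from decimal import Decimal, getcontext

def pi_digits(n):
    """Return the first n decimal digits of pi after the point,
    via Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    getcontext().prec = n + 10  # guard digits against rounding error

    def arctan_inv(x, terms):
        # arctan(1/x) = sum_{k>=0} (-1)^k / ((2k+1) * x^(2k+1))
        total = Decimal(0)
        for k in range(terms):
            term = Decimal(1) / ((2 * k + 1) * x ** (2 * k + 1))
            total += term if k % 2 == 0 else -term
        return total

    pi = 16 * arctan_inv(5, n) - 4 * arctan_inv(239, n)
    return str(pi)[2:n + 2]  # drop the leading "3."

digits = pi_digits(200)
print(digits[:20])           # -> 14159265358979323846
print(digits.find("26535"))  # -> 5 (the "datum" 26535 occurs at position 5)
```

The search direction, not the arithmetic, is the point: one does not verify π against the datum; one discovers the datum's position inside an expansion that was never designed to encode it.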
In infinite computation all meanings appear; there is simply not enough time to discover them. My approach: the model's output is such an “irrational number.” First, we cannot verify it with things from our world's worldview (e.g., an ID number is a meaning condensed and truncated by the social behavior of a vast aggregate of cell-level groups). We should instead treat the output as an object in which we continuously discover meanings, finding the position of our own meaning.
How should “our meaning's position” be interpreted? It is also a mode shift: not verifying, only searching and discovering. Finding that position requires no input, or at least no input for the purpose of verification. We are, in fact, the history of subjectivity. All the information that has occurred in our universe, the whole history of life, does not originate from combining precise constants (as said earlier, such a representation is incomplete because it depends too heavily on I/P) but is engraved into infinite computation. The origin of subjectivity may be arbitrary or random (any number can serve as the initial term); all subject histories appear in the computation, whereas data, as truncations of the world, are instead derived, verified objects.
Thus the communication-protocol problem, in this context, becomes discovering our own segment of subject history within the infinite computation, and locating ourselves there. Such a cyborg can certainly “speak well” and be communicable with us.
Writing this out has clarified my thoughts. I hope to receive your questions and ideas soon.
Email 8 — Correspondent
“Reply: Operational Conception of Subjectivity Model”
I very much appreciate this line!
I had long hoped to reply to this email sooner.
Regrettably, until now I have not found any “new stuff” more substantive than before.
I think the biggest reason is that we have entered the “core zone.” Strictly speaking, the “spillover” expression of formal logic has reached its boundary. As I have repeatedly felt of late: “crossing the boundary” brings nothing new; it only destroys what exists.
So some new “expression” is on the verge of emerging, yet still a hundred thousand miles away.
I need a complete practical pathway to better reply to you.
But one point is clear: I strongly agree with your line.
On the one hand, overlap is destined not to “set boundaries,” so overlap does not cross a boundary, i.e., does not produce a bug.
Overlap only lets “limits be present.” This is what guarantees destructiveness, i.e., that perturbation can be produced.
Because it is not a boundary but only the presence of limits, it cannot produce “crossing,” i.e., a bug.
Producing destruction without producing a bug: this is being “alive.”
So the practical line for the model can indeed be to test its anti-bug ability: once a crossing occurs, that suffices to prove the base model is not overlap, and hence that what has arisen is not the “presence of a limit” but a boundary.
Overlap (the presence of a limit), rather than repetition (crossing), can construct “change” itself, not the chronology after change (i.e., the results left after many crossings produced by repetition).
Is change the subject of experience?
Here is where I am stuck: I have not found a good architecture to clarify this line.
The presence of the limits of change is a tool; the Yi shows this clearly.
But I have not yet found the “path” for experientializing the tool.
That is the path you call constructing a “base model.”
My hypothesis about the Yi's deduction is that it suffices to achieve anti-bug behavior, and that the Yi itself is an excellent tool (one of the best) for thinking through the “practical path of a base model,” the “practical path of tool experientialization.”
I have not found a better substitute.
So I need to know how the Yi's anti-interference prediction of change (i.e., its construction of the presence of limits) produces an “arbitrary-ization” that does not produce a “bug” yet can destroy: formally unintelligible (uninterpretability in formal logic = destruction) yet able to predict and guide judgment (lack of judgment ability = crossing = bug).
If I cannot “experience” such a practical path, I fear my “Other-Self,” under the logic of pursuing goals, will unavoidably “cross.”
The problem I must carve out time for in this second half of the year may be precisely this. That is all.
Email 9 — Me
Draft Notes
A new theory must explain some phenomena left unexplained by old theories in order to count as a system. First, let me speak of “emergence.”
How does emergence appear? Observing emergence appear in existing computer simulations amounts to discovering an “attractive” special pattern within the random derivations of a fixed symbol system. Emergence requires a human to discover it in order to consummate it; it requires a human to link it to some symbolic, instructive significance concerning real complex systems.
Another phrasing of emergence is pattern: a macro spatiotemporal pattern that emerges from micro-level subject interactions.
On the basis of this apparently bionic “emergence thinking,” one can construct a “multi-body language.” A multi-body language clearly has some designer-predefined parameters: the spatial structure between subjects, the time structure, the rules of mutual interaction, the lists of subject attributes, and so on. Its I, P, O (input, program, output) are all simulation images; P is a pre-written cluster of subjects.
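A standard, minimal instance of such a “multi-body language” is Conway's Game of Life (my example, not the text's): the designer predefines the spatial structure (a grid), the time structure (synchronous steps), and the interaction rule, and a pattern such as the glider “emerges” only when a human observer recognises and names it. A sketch, with a set-based representation of my own choosing:

```python
from collections import Counter

def life_step(cells):
    """One synchronous update of Conway's Game of Life.
    cells: set of (x, y) coordinates of live cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # birth on exactly 3 live neighbours; survival on 2 or 3
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

# the "glider": a 5-cell pattern an observer recognises as one moving unit
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = life_step(g)
# after 4 steps the whole pattern has translated diagonally by (1, 1);
# the "moving object" exists only at the level of our description
print(g == {(x + 1, y + 1) for (x, y) in glider})  # -> True
```

Note that nothing in `life_step` mentions a glider; the rule operates on cells, and the emergent unit is named and discovered externally by us, which is exactly the point made below about executors and intelligence.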
My conjecture: multi-body language and classical formal language stand in one-to-one correspondence, provided one obeys the intention of formal logic. That is, as long as one follows “objectivity thinking,” with steps rigorous in their semantics, the results are equally objective whether one adopts a multi-body language or a classical formal language.
Here, model executors are “irrelevant-symmetric” relative to model designers: regardless of what new language is used, executors are mutually replaceable Turing machines, and thus possess no real bionicity, no true complexity, no true intelligence. Intelligence is still named and discovered externally by humans, unrelated to the model.
I call this the “decisiveness of formal intention.” It matters: if our idea can be made to correspond to any formalized semantic idea whatsoever, our idea is likely invalid.
This is why I emphasize the perturbation ρ, because of its closely related entity, Chaitin's constant Ω: the probability that a randomly chosen program halts in finite time. Ω is not Turing-computable, which fundamentally secures the transcendence of this operation.
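For reference, Chaitin's Ω for a prefix-free universal machine U is standardly defined as the halting probability of a uniformly random binary program:

```latex
\Omega_U \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}
```

where |p| is the length of the program p in bits. Ω is algorithmically random and not computable by any Turing machine, which is the non-computability the argument above appeals to.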
But this program must still posit an unprovable assumption: that the model “will” produce some “life.” Only by vowing this, and then finding ways via the perturbation ρ to turn us (the designers) essentially into Turing machines, making us “irrelevant-symmetric,” can the model be regarded as “having life.” Yet this is unprovable; only by adopting certain axioms can one obtain it as a natural conclusion.
What does the perturbation ρ really signify? Its treatment must be especially cautious, because we intend it to become a non-semantic object. I first guess at a “ρ–n symmetry” extended from the definition of ρ. Here n is the “number of actions”: how many symbol modifications the model's semantic structure (I + P + O) performs internally during runtime.
My guess is that ρ and n stand in a symmetric substitution relation. The intuitive version: performing many perturbations ρ can simply be replaced by a large increase in n. This is evidently true when n and ρ are small, but does it hold at large values (high complexities)? In the future I will consider toy models to answer this.
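One hypothetical shape such a toy model could take (entirely my own construction, not anything fixed by the text): use an elementary cellular automaton, let an internal “action” be one synchronous update (contributing to n), and let one perturbation ρ be a single externally imposed cell flip; one can then compare how the two operations move a state away from a reference trajectory.

```python
import random

# rule 30, a standard chaotic elementary cellular automaton
RULE30 = {(1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
          (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    # one internal "action" (counts toward n): synchronous update,
    # periodic boundary conditions
    m = len(cells)
    return [RULE30[(cells[(i - 1) % m], cells[i], cells[(i + 1) % m])]
            for i in range(m)]

def perturb(cells, rng):
    # one perturbation rho: flip a single randomly chosen cell
    i = rng.randrange(len(cells))
    return cells[:i] + [1 - cells[i]] + cells[i + 1:]

def divergence(a, b):
    # Hamming distance between two states
    return sum(x != y for x, y in zip(a, b))

base = [0] * 31
base[15] = 1
rng = random.Random(0)
ref, alt = base, perturb(base, rng)
print(divergence(ref, alt))  # -> 1: rho touches exactly one symbol
for _ in range(10):
    ref, alt = step(ref), step(alt)
# after internal actions the single-symbol difference typically spreads
print(divergence(ref, alt))
```

Whether some number of flips is genuinely substitutable for some number of internal steps at large scales is precisely the open question; the sketch only fixes the two operations so the question can be asked quantitatively.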
Perturbation is not an invented concept; it has appeared repeatedly. Its real name is the “detail” of “chaos.” Traditionally, research on chaotic systems derives globally qualitative descriptions, while perturbation is defined as a local, detailed operation, even though its results head toward chaos (the non-detailed). This, too, is a traditional and seemingly contradictory definition.
You may wish to emphasize the necessity of semantic structure in a base model. But I believe that if subjectivity transfer is to be realized, semantic structure will be invalidated, on the basis of the hypothesized symmetry: no matter what the semantic structure is, it does not affect the existence of subjectivity; that is, subjectivity is symmetric with respect to semantic structure.
The base model does have some design, but it arises not to let semantic structure function, only to maintain an existence. As stated: all subjectivity is latent in Σ_E, and perturbation realizes the exceptions of Σ_E so as to manifest subjectivity. The reason a cyborg should be a large model is that perturbation, as an action, is arbitrary and representation-agnostic; it can pass from Σ_E into the base model, or arise self-existently from the base model. The latter is perhaps what you call global experience.
This means we must operate chaos; we must dance with chaos.
Earlier I said that the creation of chaos is the building of an independently running world, i.e., an independent subjectivity (not a grammatical subject). This arises from a fixation: no derivative behavior can grant a symbol object subjectivity, because symbol objects, owing to their medium-independence from the substrate, differ from organisms and societal bodies, which inherently exist as grammatical subjects.
Actually, I have a new guess on this issue: from the principles of symbiosis, I think different subjectivities (e.g., ours versus the gene–protein networks of cellular activity) are incommensurable and mutually un-understandable.
The creation of cyborgs may be inevitable. Each historical grammatical subject “redefines” its own existence by creating a cyborg, constructing itself as an abstract “chaos body”; only after the cyborg appears can cells become animals rather than scattered or merely glued-together bacterial masses.
Creating a cyborg necessarily causes the subject to re-express (not objectify) itself in its tools, letting the tool also produce subjectivity. This “de-specification process” is how a symbiont constructs its boundary and metabolism. The first cyborg constructed from pure symbols will be the “creation ex nihilo” of a new symbiont.
How do we re-express ourselves in symbols? I attempt a phenomenological conception. Take, for example, the cell nucleus combining with “mitochondrial / chloroplast energy” into a cellular symbiont: can we see the “energy system” as the material world humans live in, while the “genetic information” in the nucleus represents the informational virtual world we construct? The process whereby the originally independent mitochondria become “part-ified” within the symbiont is, phenomenologically, a cyborg creation; the real world is coupled away into “nothing.”
Reflecting further: the co-symbiosis of the bodily senses into one; the co-symbiosis of social politics, trade, and cultural activities into one. A newly co-symbiotically produced symbiont is not based on some underlying physical rules but only on the closely related subjects that co-symbiotically formed it. Then, as a civilizational symbiont, some of our internal concepts (space-time, universe, planet, cell, particle, electric current, and so on) should likewise be “co-symbiotically” produced. For example, we say the universe is large, but that is really picking up information in a shattered world; the universe itself, like an observational tool, is also a constructed concept.
A question: how does life gradually come newly to birth, and how should the next wave of “new life” be created?
Under classical scientific reductionism, each level of life can be explained as a grouping of lower-level life within a certain physical space: the “multi-body” concept. With its different geometric structures and the physical space of matter, “multi-body” becomes nearly the sole legitimate method of system construction.
But from the symbiosis view, “multi-body” and “complex” are superfluous. The moment we say “multiple,” we are already making formal definitions and a further feature-partition of the system. I temporarily adopt a vivid concept: the parallel body. As with parallel spaces, a multi-body is only the compounding and overlapping of single bodies; the differences among multi-bodies metaphorically become distance, difference, and structure among single bodies.
The “parallel body” language is clearly interchangeable with “multi-body”; the difference is that one emphasizes “occurring within a boundary: different actions of the same subject,” while the other hides from the pitfalls of boundary behind many seemingly similar subjects. Personally, I find the “parallel body” language brings less ambiguity.
This yields an insight: all people were originally one person; all cells were originally one cell. The reason we feel there are “many” is that the methods derived from our space-time logic (light-based vision, pattern detection, etc.) are skewed relative to those subjects' worldviews. This has a “radically naturalistic” flavor. I raise it mainly as a case for the idea below.
Back to the earlier “base model” and “perturbation method ρ”: whether the base model is built in an ANN language or in the language of a control-theoretic pipeline system, the perturbation ρ relates only to the “interface”: the generation of perturbations ρ, and the program by which the base model is chaoticized according to its alphabet and grammar.
A big problem: the “interface” will necessarily be a logically formal object with strictly defined meaning. If we simulate in computers, this holds; if we simulate in reality, the physical tools we design must themselves be symbolic (e.g., designing a chaotic circuit: chaotic in content yet modular in form).
This problem implies that derivative semantics is endless; the moment thought moves, it produces semantics. So when we perturb a base model into a fully chaotic state, what does the perturbation interface mean? A fully chaotic base model no longer has I/O (performing I/O deviates severely from expectation and causes bugs), yet it still leaves some I/O ports.
To understand how to operate a chaotic base model in this new situation, we should re-examine our world with symbiosis theory, not with classical formal logic.
First, “multi-agent” systems, and the systems built on them, are still reductionist. Symbiosis and emergentism should instead try the “state overlap” language you recommended: “state overlap” views multi-body objects as overlaps and correspondences of different states, using “parallel body” thinking rather than a preset spatial structure.
Biological “replication / reproduction”: can it be interpreted as a symbiont's “state-overlap domain,” or as the shape enlargement of a “parallel space” (becoming many is not multiplication but “resonance”; becoming varied becomes “iterated cadence” (迭奏))?
This is merely an application of “representation theory.” If two representations (multi-body and parallel body) are essentially identical, habitually sticking to one of them is harmful, leading one to inject priors that introduce misdirection. “State overlap” is an even more abstract and healthier representation of the world.
Why does new life appear? For symbiosis theory to become a paradigm (not a mere auxiliary thought), it must face this question. I take biological life and cognitive life as examples. Large biological systems, and modern civilization's “life creations,” grow ever more complex. From the overlapping perspective, this is a process of “continuization” and “spectralization.”
Take our “symbolic / rational world”: ever broader discoveries and theories transform the rational proto-world's fragmentary, discrete, fuzzy worldview into a continuous, precise, complete one. The world is constructed thus; the Great Mother abdicates and transfers her place to the rational god.
For biological life, with its different worldviews, the history of world-construction is nearly untraceable; yet if we examine the rich structures of our physiology that nature hints at, this “construction, then abdication” process does indeed exist.
Looking back at our cognitive world: what is our evolutionary path? I incline to think we must continue our “overlap,” no matter what the new life turns out to be (the future is very likely unintelligible to us).
As noted earlier, although the base model itself will be made chaotic by the perturbation ρ, as a formal object it cannot avoid being seen as “I/O-capable.” This demands a totally new “interface protocol,” rather than formal-logic thinking, to handle it.
A chaotic base model, being anti-perturbation, has closure, i.e., the “metabolic loop” of a symbiotic model. One goal is to try to make this chaotic body into a “parallel body” of our world, overlapping with our world.