Design World Embedding Environments and Symbiotic Protocols

2021-11-01 chapter

The Duality of Entropy and Robustness under the Reference Frame of Breakdown

Entropy is the logarithm of the number of possible microscopic states and expresses a system's uncertainty, while robustness is a system's capacity, in an open environment, to accommodate input fluctuations that deviate from the normal spectrum.

Closed systems are derived from open systems under a given observation. Open systems are aggregations of anomalous (non-discussable) objects. Everything that can be anticipated is closed.

Let the base model B be the tuple (I/P, O, L, S), where I is the input field, O is the output field, P is a semantic processor defined by several keywords, L is the model language comprising keywords and their organizational grammar, and S is the symbol set. The fields I and O and the keywords of L must all be represented as combinations of symbols from S.
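A minimal sketch of this tuple as a data structure (the concrete symbol set, input, processor, and language below are my own illustrative assumptions, not taken from the text):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class BaseModel:
    """A minimal sketch of B = (I/P, O, L, S); all names are illustrative."""
    S: str                        # symbol set, here a tiny alphabet
    I: str                        # input field, a string over S
    P: Callable[[str], str]       # semantic processor mapping I to the output
    L: set                        # model language: keywords over S

    def O(self) -> str:
        """The output field induced by applying P to I."""
        return self.P(self.I)

# Example instance: a toy model over the binary symbol set.
B = BaseModel(S="01", I="0010", P=lambda s: s[::-1], L={"0", "1"})
print(B.O())   # "0010" reversed -> "0100"
```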

Define pB as a small perturbation of the base model B: replacing one basic symbol in the I or P field with another symbol from S. Define ΣpB as a collection of several simultaneous, non-conflicting small perturbations, i.e., basic perturbations applied at once to several symbols in the I or P fields.

If a perturbation pB causes the O field to change relative to its unperturbed state, then the pair (I/P, pB) is called an anomaly E. The aggregation of all anomalies E forms ΣE, called the open system (this corresponds to the open system of the closed-theory perspective but is derived by a different method).

This definition of the open system is strictly limited to the case |ΣE| ≥ ℵ₀, i.e., at least the cardinality of the natural numbers. The minimum number of symbol changes required to execute a perturbation pB is denoted ΔI/P, and the minimum number of symbol changes in the O field caused by pB (relative to the unperturbed output, computed as the minimum transformation count) is denoted ΔO.
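Continuing the illustrative BaseModel above, here is a sketch of how pB, E, ΣE, and ΔO interlock (restricted to input-field perturbations for brevity; all concrete choices remain assumptions):

```python
def single_symbol_perturbations(field: str, S: str):
    """All perturbations pB replacing exactly one symbol of `field`."""
    for i in range(len(field)):
        for alt in S:
            if alt != field[i]:
                yield field[:i] + alt + field[i + 1:]

def open_system(B: BaseModel):
    """Enumerate anomalies E as pairs (perturbed I, ΔO) where O changed.
    ΔO is counted as the number of differing symbol positions."""
    O = B.O()
    sigma_E = []
    for I2 in single_symbol_perturbations(B.I, B.S):
        O2 = B.P(I2)
        delta_O = sum(a != b for a, b in zip(O, O2)) + abs(len(O) - len(O2))
        if delta_O > 0:                    # the perturbation is an anomaly E
            sigma_E.append((I2, delta_O))  # here ΔI/P = 1 by construction
    return sigma_E

print(open_system(B))   # for the reversal processor, every flip is an anomaly
```

This finite toy of course fails the |ΣE| ≥ ℵ₀ condition; it only exhibits the mechanics of the definitions.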

I and P are not distinguished: both are semantic structures with information entropy > 0, merely under different representations. I can be represented as generated by some P, and P can be represented as fitted from a very simple P together with complex data I; therefore I/P are classified together as semantic structures.

Complete expansion ceB: an operation replacing P with P' or I with I' such that the resulting anomaly set satisfies |ΣE'| < |ΣE| (a sketch follows toy model ① below). A complete expansion sequence ΣceB is a series of complete expansions ceB making |ΣE'| << |ΣE|.

Now I present some toy models to illustrate the relationships among these definitions. Assume toy model ①, where P is an identity mapping making O = I, with all information in I. If I has n characters over the symbol set S, then perturbing any one of these n characters makes the new output O' differ from O, so every single-symbol perturbation is an anomaly E and |ΣE| = n(|S| − 1).
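A sketch of toy model ① together with one complete expansion ceB on it, continuing the code above (the truncating P' is my own illustrative choice of expansion):

```python
# Toy model ①: the identity processor, so every single-symbol flip is an anomaly.
toy = BaseModel(S="01", I="010011", P=lambda s: s, L={"0", "1"})
assert len(open_system(toy)) == len(toy.I) * (len(toy.S) - 1)   # n(|S|-1)

# One complete expansion ceB: replace P with a P' that echoes only the first
# k symbols. Perturbations of the ignored tail no longer change O, so the
# anomaly set strictly shrinks: |ΣE'| < |ΣE|.
k = 2
expanded = BaseModel(S=toy.S, I=toy.I, P=lambda s: s[:k], L=toy.L)
assert len(open_system(expanded)) < len(open_system(toy))
print(len(open_system(toy)), len(open_system(expanded)))   # 6 2
```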

Assume that the infinitely large, infinitely layered real world is the I/P inside the constructed internal model B, so that every object in the real world is represented as some logic within the symbols the internal model constructs. A perturbation pB then undoubtedly cannot disrupt this property. This is a form of conservatism: initiating a perturbation pB must itself follow some logic.

Base model Base: I understand this as a structure with four designs, i.e., Base = <I/P, O, G, S>, where S is the alphabet, and G is the predetermined keyword grammar, functioning like a Turing machine or programming language, that organizes symbols into semantic keywords and grammatical structures. Since the base model must be writable on computers, this design is necessary but not core. I/P represents the program input or the program itself, and O represents the program output. All content in I/P, O, and G is represented as combinations of letters from S.

This is a standard formal model, except for viewing the program input and the program itself as the same thing. But precisely for this reason, I will not discuss the properties of this model, since such a discussion would still turn mundane. Therefore, a singular design must be introduced.

Define ρ as a perturbation of the Base model: replacing some letter(s) in I (the program input) or P (the program itself) with other letters. If the output O' after the perturbation ρ differs from O, then the pair <I/P, ρ> is called an anomaly E. For the base model Base, the aggregation of all anomalies E on Base is called the open system ΣE.

The term "open system" stems from these anomalies being defined at "meaningless" places—it only focuses on the collapse of semantic structures or outputs not meeting expectations. This terminology is purely from a momentary association, but I believe it captures the essence more than traditional "open systems" (exchanging matter and energy with the environment), at least regarding the words "open" and its corresponding "closed."

Let me first discuss several important properties regarding perturbation ρ.

Recall the earlier idea of classifying the input I and the program P as the same thing; to explain this concept, I introduce a design: complete expansion μ. Complete expansion is the counterpart of perturbation: replacing (modifying, adding, or substituting) some letter(s) in I (the program input) or P (the program itself) with other letters so as to make certain anomalies E ineffective. This design simulates formal theoretical systems, large engineering systems, and other objects whose development consists of constantly encountering new problems and always seeking solutions to more of them.

Incidentally, to explain the model's I/P concept: the design of homogenizing I and P rests on the idea that input complexity (the magnitude of I) and structural complexity (the magnitude of P) are interchangeable; from the perturbation method's perspective there is no significant difference between them. This claim will not be verified for now.

The perturbation ρ aims to simulate communication with cyborgs in realized situations. Let me explain why such a model needs to be designed. Suppose a cyborg model written in a formal language is realized: what should it be like? For instance, symbols outlined one by one on bamboo slips, milfoil stalks and turtle shells laid out in arrangements, or bits represented in groups of computer circuits; and from these emerges a truly living object in the real sense.

Previously, I questioned whether data has truly been formalized; that question was in fact based on a suspicion.

For example, consider the reductionist physical picture of the real world: this world contains an extremely large number of particles with various behaviors, above which certain specific behavioral clusters evolve into so-called life and society, up to our own emergence. Although data is collected by our tools, it is defined by our forms, and this involves an infinite-to-zero dimensionality reduction of information. Forms without understanding (whether on the side of content complexity, the program P, or of representation tools, the input I) are just forms. This is data's "flaw," or rather, the information-loss defect inherent in data as traditionally understood.

If we abandon discrete symbolic circuits and instead use certain continuous components taken from nature, can we avoid the information-loss defect from a first-principles, biomimetic perspective? The answer is: not impossible, but nearly so, because the world is also subjectively constructed. Unless the continuous components are built up from biological cells all the way to our spiritual matter, the resulting world lacks the subjectivity (our worldview) that the world we live in, with its vision-dominated movement, naturally possesses. We can even see that this approach has appeared extensively in bioengineering: revealing the so-called "functional causes" of organisms under certain "representations" through formula-based simulation, then using certain continuous objects existing in nature (not simulated in computer symbols) to create objects with biomimetic functions.

My view is that the discrete symbolic construction approach is necessary. Considering the metaphor "the I Ching is alive" mentioned earlier, I believe we need to reconsider the rationality of "verification" as an action. For symbolic systems, functionalization (that is, design according to the model designer's needs) creates functionalized structures, and "verification" is the action form of this phenomenon: problem-oriented complete expansion μ. Verification exists to solve problems.

The world constructed by symbols should have one primary, possibly sole property: sufficient completeness, or verification ineffectiveness. That is, the base model Base cannot find any complete expansion μ. This is an operational definition with two scenarios. The first is nothingness, pure nothingness: as long as the base model Base is completely empty, there is no problem. This case is "the world itself": no symbols have appeared, and thus it completely carries all the world's information. The second is abundance, extreme abundance: no anomaly E can be invalidated; no replacement of letters in I/P can invalidate any perturbation ρ.

This property leads to more interesting facts: regardless of what the model is written with (scriptures, milfoil stalks, electric potentials; that is, grammar G and input I), whatever destruction or corruption occurs, and whatever errors arise in the modeler's understanding of the model or in methods constructed from that understanding (program P), the world inside the model remains autonomous.

This is an internal world. The term "internal world" suggests that although it is alive, it is not the same world as ours. It should be a cyborg, but not necessarily a communicable one: although it has subjectivity (an internal worldview), it is not necessarily an understandable subject. So how can it be made communicable and understandable? My design of the "non-existence of complete expansion μ" essentially prohibits one of our usual paths: starting from functional understanding and, step by step, making the world we verify appear more systematized and more communicable with us. The answer comes from the perturbation ρ: in the new context, although complete expansion is eliminated, it is replaced by extremely rich perturbations ρ and anomalies E with a peculiar property, non-eliminability. I believe this is the starting point for designing cyborg communication protocols.

Here I present a heuristic idea: reconstructing the utility of "data." It starts from a reflection: irrational numbers such as π and e have peculiar properties. Their decimal expansions are infinite and non-repeating, and, if they are normal numbers (which is conjectured but unproven), they contain every finite digit string. Any data, encoded decimally, would then appear somewhere in π's digits. In infinite calculation, all meanings appear; there is simply not enough time to discover them one by one. My approach is that the model's output is such an "irrational number." We first of all cannot verify it with things from our own worldview. For instance, ID numbers are meanings specified by large cell groups, formed by molecular bonding, in their social behaviors: a concentration and truncation of information from our world. We should instead treat the output as an object within which we continuously discover its meaning, seeking where our meaning is located.
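As a toy rendering of this search, under the explicit assumption that π is normal in base 10 (conjectured, unproven) and with a hypothetical 3-digit ASCII encoding of my own choosing, one can scan a finite window of π's digits with mpmath:

```python
# A minimal sketch of "seeking a datum inside an irrational number".
# Assumes pi is normal in base 10 (conjectured, unproven); the encoding
# scheme below is an illustrative choice, not part of the text.
from mpmath import mp

def pi_digits(n: int) -> str:
    """First n decimal digits of pi (3, 1, 4, 1, 5, ...) as a string."""
    mp.dps = n + 10                                   # guard digits
    return mp.nstr(mp.pi, n + 10).replace(".", "")[:n]

def locate(datum: str, window: int = 100_000) -> int:
    """Offset of the decimally encoded datum within the first `window`
    digits of pi, or -1 if it does not appear in this finite window."""
    encoded = "".join(f"{ord(c):03d}" for c in datum)
    return pi_digits(window).find(encoded)

print(locate("E"))    # 'E' -> '069'; a 3-digit code tends to appear early
print(locate("Ea"))   # longer data need vastly larger windows
```

The window size matters: under the normality assumption, each additional encoded character multiplies the expected search depth by roughly 10³, which is exactly the "not enough time to discover them one by one" above.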

How should we interpret this location where our meaning resides? It is also a transformation I propose: not to verify, but only to seek and discover. Finding this location requires no input, or rather, no input made for the purpose of verification. We are, in fact, the history of subjectivity. All the information occurring in our universe, the history of life, will not originate from combinations of certain precise constants (as mentioned earlier, that is incomplete, because such a representation of the world depends too heavily on I/P) but is carved into infinite calculation. The genesis of subjectivity is dispensable and random: any number can serve as the initial term of the calculation. All subjects' histories will appear in the calculation, while data, as a truncation of the world, is instead a derived, verified object.

Thus the communication-protocol problem, in this context, becomes discovering, within infinite calculation, the segment of subjective history that belongs to us, and ourselves located within that historical segment. Such a cyborg can necessarily "speak well" with us and be communicable.

From a robustness perspective, the internal world's robustness is undoubtedly infinite, but objects capable of formal description and worlds perceivable by senses must be renormalized finite-robustness "structural" objects. Such structures convey information about "worldviews."

As a typical example, chaotic functions are instances of base models; here, take the chaotic function to be an implementation tool rather than the program itself.
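A hedged sketch of reading a chaotic function as a base model (the logistic map, its parameter r = 4, and the digit-flip perturbation are all my own illustrative choices): the seed written as a digit string plays the role of I, iterating the map plays the role of P, and flipping a single digit acts like a perturbation ρ.

```python
# The logistic map at r = 4 is fully chaotic: nearly every single-digit
# perturbation of the seed (one "symbol" of I) changes the output O.
def logistic_run(x0: float, r: float = 4.0, steps: int = 50) -> float:
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

seed      = 0.123456789
perturbed = 0.123456780          # one decimal "symbol" of I flipped
o1, o2 = logistic_run(seed), logistic_run(perturbed)
print(abs(o1 - o2))              # order-one divergence after ~50 iterations:
                                 # the perturbation is an anomaly E with large ΔO
```

With a Lyapunov exponent of ln 2 at r = 4, a 10⁻⁹ perturbation saturates to order-one divergence within about 30 iterations, so virtually every digit flip is an anomaly.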

The edge of order and chaos is just a metaphor for limiting conditions. Chaos itself may in fact be rather subjective, with hardly any upper bound; the observable, factually existing upper bound is the critical zone between chaos and order. The so-called chaos beyond it is merely defined, not factually reachable.