How to view complexity science?
Complexity science emerged and developed by distilling "novel" phenomena and "complex" commonalities from many specific fields. It took shape mainly in the 1970s and 1980s, growing out of the old "three theories" (systems theory, information theory, and control theory) and the new "three theories" (dissipative structure theory, hypercycle theory, catastrophe theory, and synergetics). Overall, the discipline is still in a primitive stage, and contrary to what the asker suggested, this is not merely a case of "people here not yet being particularly sure about it": its immaturity is well established and global.

Returning to the asker's question, the first point is that "the sum of individuals does not equal the whole, and that is what complexity science studies". This sentence points to what I consider the single most important characteristic of complex-system problems: emergence, or the open-endedness of content. The statement that the sum of individuals does not equal the whole describes the complexity of multi-agent systems very well: certain macroscopic patterns arise from the interactions of a group of agents, and the "whole" that emerges exhibits properties that distinguish it from the individuals, rather than being a "simple linear sum" of them.

The notion of a "simple linear sum" is worth pondering. What is a "simple linear sum"? My personal understanding is that the term carries a certain historical context and refers to two methods that were dominant in the past and remain prevalent today: reductionist analysis and mechanistic modelling. The former holds that a higher-level system can be represented by the basic units of a lower-level system and their interactions, and that this representation is complete: as long as the lower-level units are broad enough and their interactions are described comprehensively enough, every phenomenon of the higher-level system can be fully captured. The latter is the process of organizing low-level units by hand, assembling them into components and eventually into a system according to some predetermined sequential program (I believe the mechanical assembly line is the best real-world counterpart of the abstract word "linear"). Anyone with a clear picture of the history of modern science and industry can see how widely these two methods have been applied. The two approaches go hand in hand: mechanistic modelling requires a well-developed reductionism that describes lower-level matter accurately, while reductionism demonstrates its enormous power precisely by being put to full use through mechanistic modelling.

From today's vantage point, we already know that complexity science emerged as a negation of this traditional way of working, precisely because certain inherent problems, sharing "complexity" as a common trait, kept surfacing. So what are these problems? Let me give what I consider the most representative and timely example: synthetic biology. Synthetic biology has advanced rapidly in the new century, especially in the past decade. Starting from the most basic biological parts, it builds up layer upon layer of higher-level biological systems to perform engineered biological functions. A typical method is to design "gene circuits", for instance using promoters, enhancers, repressors, and so on to regulate gene expression, or chaining several such links together to build more sophisticated biological switches.
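To make the "gene circuit" idea concrete, here is a minimal sketch in the spirit of the classic Gardner-Collins mutual-repressor toggle switch. It is not taken from any work cited here; the dimensionless Hill-type kinetics, the parameter values, and the simple Euler integration are all illustrative assumptions. The point is that, on paper, the designed circuit behaves as a clean, predictable switch.

```python
# Minimal sketch of a two-gene "toggle switch": two repressors that
# mutually inhibit each other's expression, integrated with forward Euler.
# All parameter values (alpha, n, dt, steps) are made up for illustration.

def toggle_switch(u0, v0, alpha=10.0, n=2.0, dt=0.01, steps=5000):
    """Mutual repressor: du/dt = alpha/(1+v^n) - u, and symmetrically for v."""
    u, v = u0, v0
    for _ in range(steps):
        du = alpha / (1.0 + v**n) - u  # gene U, repressed by protein V
        dv = alpha / (1.0 + u**n) - v  # gene V, repressed by protein U
        u, v = u + dt * du, v + dt * dv
    return u, v

# The same circuit settles into different stable states depending on
# which side starts slightly ahead: a designed, predictable "switch".
print(toggle_switch(1.0, 0.0))  # U wins: roughly (high, low)
print(toggle_switch(0.0, 1.0))  # V wins: roughly (low, high)
```

On paper, two equations suffice and the designer stays in full control; the trouble described next begins when this tidy model is dropped into an actual cell.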
However, synthetic biology has also clearly run into the problem of "complexity" in recent years; see https://zhuanlan.zhihu.com/p/22453582 (Guo Haotian: [Viewpoint] From Tradition to Construction: A Brief Discussion on the Functions and Research Paradigms of Synthetic Biology). Seen through the lens of reductionism, even the organisms of everyday life are terrifyingly large systems, with an enormous number of units and degrees of freedom at both the cellular and the molecular level. When a designed gene circuit is actually tested, many other reaction processes may occur, such as erroneous expression within the designed circuit, or erroneous expression induced by the environment.

Let me mention just one aspect of my understanding of this "complexity problem": the very distinction between internal and environmental factors breaks down in complex systems. To distinguish internal from environmental factors, the system generally needs clear "boundaries". But the distinction drawn above is made by the model designer out of design needs, in order to separate the "circuit" from its environment; after all, what the designer can actually change and adjust is the "circuit". Not every designed gene circuit, however, has boundaries at the level of gene expression that clearly separate it from its environment. This unreasonable division introduces some of the complexity, but only a part of it.

There are countless complex problems like this one, and almost any modern basic natural or social science you can think of either debates such problems seriously or is deeply troubled by them. One of the main causes of this predicament, in my view, is that many models adopt the same scheme: construct some basic units with predetermined basic behaviours, deduce from them, predict and observe phenomena, and then evaluate the model's performance. This modelling framework is unmistakably reductionist, and in my view it is inevitable that reductionist thinking runs into complexity problems. Reductionism can certainly construct a "complex" world with many bodies and many degrees of freedom, and this does provide a basic framework for the real world. It may look "complete" or even "perfect", but once such a world starts to move, it "breaks away" from the designer's control.

One kind of breaking away is the chaotic phenomenon that can be called the "butterfly effect": over long-term evolution, the local state of the system deviates from the originally predicted state with very high probability (and most of the time the deviation is significant, if we measure it against the designer's problem-solving needs and evaluation criteria); a tiny numerical sketch of this follows below. Another kind of phenomenon, with a higher degree of detachment, can be called the "emergence mode", and it is almost completely out of the designer's control. To put it figuratively, this is not the difference between "1" and "2", but the difference between "1" and "&6niai # hf": completely unexpected and difficult even to evaluate. This is the truly difficult aspect of complex systems: emergence. Sudden, emergent, contrary to expectation.
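Here is the promised sketch of the first, milder kind of "breaking away". It uses the logistic map as a generic stand-in for a chaotic system; the map, the parameter r = 4, and the initial gap of 1e-10 are illustrative choices, not tied to any system discussed above.

```python
# Tiny illustration of the "butterfly effect": two logistic-map
# trajectories in the chaotic regime (r = 4), starting 1e-10 apart.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10  # nearly identical initial states
for step in range(60):
    x, y = logistic(x), logistic(y)
    if step % 10 == 9:
        print(f"step {step + 1:2d}: |x - y| = {abs(x - y):.3e}")

# The gap grows roughly exponentially and saturates at order 1 within a
# few dozen steps: the model is fully specified and deterministic, yet
# long-run prediction of the local state escapes the designer's control.
```

Note what this sketch cannot show: the "emergence mode". Chaos still produces numbers of the expected kind, only the wrong ones; emergence produces behaviour of a kind the designer never wrote down at all.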
This kind of problem is the core of what I consider complexity problems, and also the opponent that I believe complexity science must face: only by understanding and handling such problems well can complexity science truly "mature". Concretely, for an engineering problem, taking synthetic biology as the example again, complexity science should help us understand, handle, design, and intervene in giant, complex biological systems (note: not merely many-body, but giant; to give a modest figure, say tens of billions of units (● ˇ∀ˇ●)). Of course, that may be a challenge for "a lifetime"; even if the goal is lowered to helping design "large-scale bacterial community engineering", it will still not be simple. At the theoretical level, we need a full understanding of the basic definition of a complex system and of its various properties (for instance, no longer leaning on the sweeping and ill-fitting term "nonlinear"), and we need abstract theoretical methods and procedures of our own for handling complexity problems, rather than a simple pile of tools borrowed from other specific disciplines. Hawking said, "The 21st century will be the century of complexity science." I hope so; but before that, "the 21st century is already the century of complexity problems."