Can general artificial intelligence be achieved? If so, how?

2021-08-03

Q: Can general artificial intelligence be achieved? If so, how?

A: I regard understanding and implementing general intelligence as a subset of the problem of understanding large, complex systems and their emergence. Every existing object that exhibits intelligent behavior is a large, complex system. Consciousness, for example, is (at least from a reductionist standpoint) a large complex system composed of an enormous number of diverse neurons and various chemical substances. The reductionist route to building such a system has indeed run into obstacles: a knowledge base built from symbolic atoms assembled according to a syntax cannot make an artificial intelligence particularly robust or general.

The view under this question that "general artificial intelligence" is too far away, or that "the basic direction is wrong", still strikes me as too pessimistic. In my opinion, the exploration of scientific theories is not cumulative but directional, and the time spent finding a direction is a necessary cost. Whether it ends in success or setback, one should focus on the thinking itself rather than be driven by fame and profit, and be content with that. Believing that scientific theory is developing contrary to expectations and that science has taken a wrong path may just be a form of modern performance art.

Let me lay out my own immature idea. The author of "Algorithms Are Not Enough" once discussed why existing deep-learning algorithms are not general enough: although model designers let the AI spontaneously and randomly optimize over a set of network hyperparameters, the designers still choose the problem a priori and partly determine its representation. When humans select a problem and its representation, they are in effect compressing the world into a few simple semantic symbol strings, all of which belong to human thought and are defined arbitrarily. If humans build a model that knows only symbol strings, manipulating the "input" strings and handing a result back to the world, it still takes a human to understand the meaning of the "output" and act on it, or a machine that symbolizes human thought to do so. In effect, the AI always lives in a world without cause and without effect.

An AI is different from us, who really are large, complex systems. We are not programs running on a typical Turing machine or von Neumann architecture, yet we are most likely Turing-realizable. A small issue worth addressing here is the defect of mnemonic symbols: put simply, one word can carry several interpretations, and the laziness of human thought tends to conflate some of them. I do not think we fit the typical Turing machine, but I do think we are Turing-realizable. The reason is that a Turing machine in the usual sense has a clear algorithmic meaning, for example a Turing machine that turns X into X+1. The catch is that, on our usual understanding, using a Turing machine to achieve general intelligence requires clear steps, a transformation from X to F(X) at every step. And that is exactly where the problem lies: Turing-realizability in the traditional sense shows that a Turing machine can construct any algorithmic model, but that result only matters in situations where the semantics are clear. No algorithm designer would think of transforming "X" into something meaningless like "&^GJH*"; everything has to be guided by the model designer's assumptions.
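As a concrete anchor for "a Turing machine that turns X into X+1", here is a minimal sketch, written in Python rather than as a formal state table, and entirely my own illustrative construction: a machine whose single, semantically clear job is binary increment.

```python
# Illustrative "X -> X+1" machine: the tape holds X in binary, the head starts
# at the least significant bit, and the machine carries to the left until the
# carry is absorbed. Names and encoding are my own choices, not anything
# prescribed by the text.

def increment(tape: str) -> str:
    cells = list(tape)
    pos = len(cells) - 1              # head on the least significant bit
    while pos >= 0 and cells[pos] == "1":
        cells[pos] = "0"              # 1 plus carry becomes 0, keep carrying
        pos -= 1
    if pos >= 0:
        cells[pos] = "1"              # absorb the carry here
    else:
        cells.insert(0, "1")          # ran off the tape: extend it
    return "".join(cells)

assert increment("1011") == "1100"    # 11 -> 12
assert increment("111") == "1000"     # 7 -> 8
```

Every step here means exactly what the designer decided it means; the machine itself has no stake in that meaning, which is precisely the sense in which the semantics are supplied from outside.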
Personally, I believe this is the key. Extend the example to other attempts at general artificial intelligence, such as the "causal inference + knowledge base" approach. Causality and its derivatives are mnemonic symbols; they inspire people to think that our worldview and the AI's worldview may not be the same, and that by designing a toolkit to bridge that difference we can make the AI general. The problem is that the concept of "cause and effect", together with its accompanying tool libraries and common-sense libraries, is merely a clearer way humans have invented to explain the large, complex systems that pervade the world. The AI is just running our commands.

To make up for the deficiency of mnemonic symbols, reflect on this example: suppose the same "causality + neural network" model designed by a human designer were handed to a human clerk to compute line by line. Isn't the essence the same? Replacing the AI executor with a human yields the same result, which is a kind of "symmetry": substitute any executor with any object that has universal processing ability and the result is unchanged. Intelligence, then, is not a property of the executor of a universal command; it belongs to the model designer. The core of model design is establishing the problem and its representation, on top of the ability to randomly optimize a neural network; once those are fixed, finding the answer is at least not the hard part. (A toy code sketch of this executor symmetry appears after this passage.)

Let's look at representation again from the same angle. Take a loosely specified, free-form program: the AI must adhere to a set of action plans precisely and execute them strictly, whereas a human may either execute the program strictly or "express" it freely. A strict human and a strict AI are "symmetrical" and behave alike, so "strict" is the adjective that actually divides human nature from AI nature. Roughly speaking, whoever is not so strict is human, whoever is strict is AI, and a human body that executes a program "strictly" is also an AI.

Returning to the original question: whether general artificial intelligence can be implemented can be read as the question of whether a human can be constructed from symbols, and the equivalent symbol for that question is "whether it can be represented by symbols". My view is that "representable by symbols" and "a program with strictly clear semantics" coincide in the usual sense, but we must discuss the implementation of general artificial intelligence under the condition that these two are asymmetric, that is, they are not the same thing and must be treated as two independent dimensions. Circling back to the beginning, I believe the implementation problem of general artificial intelligence resembles the question of whether a large, complex system can be represented with symbols, which is a more meaningful and more difficult problem. Consider the large complex systems of the human body, the economy, and society together, and the world looks like an almost infinite-dimensional complex system. Even looking at an image involves a great many renormalization-like transformations, neural stimuli, enzymatic signaling reactions, and so on, while the very ability to look at images rests on the technical and cultural foundations, and the commercial marketing and display practices, that support the behavior.
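Here is the promised toy illustration of executor symmetry, entirely my own construction: a fixed instruction list (the designer's representation of the problem) is handed to two very different executors, a "machine" and a simulated "clerk" who works step by step with scratch notes. Because the representation fully determines the outcome, the two agree, and nothing in the result reveals which executor produced it.

```python
# The designer fixes the problem and its representation: a small instruction list.
PROGRAM = [("add", 3), ("mul", 2), ("add", 1)]

def machine_executor(x, program):
    """A mechanical interpreter: table lookup, no commentary."""
    ops = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
    for op, arg in program:
        x = ops[op](x, arg)
    return x

def clerk_executor(x, program):
    """A 'human clerk' working the same steps by hand, keeping scratch notes.
    The internals differ; the mandate is identical."""
    notes = []
    for op, arg in program:
        if op == "add":
            x = x + arg
        elif op == "mul":
            x = x * arg
        notes.append(f"after {op} {arg}: {x}")
    return x

# Swapping executors changes nothing about the answer.
assert machine_executor(5, PROGRAM) == clerk_executor(5, PROGRAM) == 17
```

Whatever intelligence is on display here went into choosing PROGRAM and its encoding, not into running it.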
Since the question asks how to implement general artificial intelligence, let me also sketch an implementation path through complex systems. The example above is a way of pulling the problem apart; what I actually want to say is that disciplinary division is ill suited to dealing with large, complex systems. Even a very everyday behavior involves "countless fundamental factors". My ironic remark here is that such complexity ought not to have happened, humans ought not to have emerged, and the universe is no exception, so why did these things happen? The countless analytical facets are the result of the gradual refinement of disciplinary divisions, not their cause.

One example is the Hall for Workshop of Metasynthetic Engineering proposed by Qian Xuesen in 1991, a method for dealing with large, complex systems from countless analytical perspectives. It calls for assembling a wide range of experts, scientific argumentation, and a set of reasonable procedures to settle on a course of action. It can be regarded as the engineering-world "conjugate" sibling of early expert systems and of today's "causal tools + common-sense libraries", and as the ultimate form of that direction. Since the problem involves countless analytical facets, this method really does supply most of them one by one. I still have reservations about this kind of method, though: it is not a purely symbolic representation theory, because a substantial part of the representation is carried by humans. A truly effective, purely symbolic theory should be built from first principles; large-scale engineering methods like this are only a stopgap until such a theory matures.

Next, I am going to let myself go a little~ Building a large, complex system cannot start from modeling individual analytical facets and their relationships (that simply retreads the old path above). In my view, every large complex system is merely a cross-section of the world, and almost all of them are "critical-region phenomena" arising from the interaction of certain large-scale structures. So the move to make is to model the entire world and then search the infinite world for meaningful cross-sections. The existence of such a world should be independent of other worlds. For instance, if I now write down a pile of symbols or some meaningful transformations, symbols alone can neither prove nor falsify whether they are the world. Or rather, it is meaningless to say so: proving that it is the world is also meaningless; it may be true, it may be false, or both.

You may think I am talking nonsense, and in fact that is exactly what I am doing. Subjectively I want to talk nonsense, because the interesting things live in the nonsense. The reason for talking nonsense is that not talking nonsense is meaningless and talking nonsense is also meaningless; the two are symmetric with respect to meaninglessness. Whatever transformation is designed to construct such a world, that world is free and independent of us model designers. Intelligence lies in that world, and we are symmetric with it. In this way, through a transformation of symmetry, we have re-understood the question of how general intelligence comes to possess intelligence.

"Nonsense" can be constructed, and very simply. For example, if you have written a model in C, how do you talk nonsense? Perform one action: replace the programming keywords or variable names used in the model with some other symbols.
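Here is a minimal, runnable version of that action. The text speaks of a C program; to keep the sketch self-contained I apply the same idea to a tiny Python snippet instead (that substitution of language, and every name below, is my own assumption for illustration): randomly rename keyword and identifier tokens, then check whether the mutilated program still parses, still runs, and still gives the right answer.

```python
import io
import random
import string
import tokenize

# The original, meaningful "model": a one-line function and one call.
SOURCE = "def f(x):\n    return x + 1\nresult = f(41)\n"

def scramble(src, rng):
    """Replace each name/keyword token (with probability 1/2) by a random symbol."""
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(src).readline):
        if tok.type == tokenize.NAME and rng.random() < 0.5:
            out.append((tok.type, "".join(rng.choices(string.ascii_lowercase, k=3))))
        else:
            out.append((tok.type, tok.string))
    return tokenize.untokenize(out)

def classify(src):
    """Does the nonsense still parse, run, and produce the expected result?"""
    env = {}
    try:
        code = compile(src, "<scrambled>", "exec")
    except SyntaxError:
        return "syntax error"
    try:
        exec(code, env)
    except Exception:
        return "runtime error"
    return "correct" if env.get("result") == 42 else "runs, wrong or missing output"

rng = random.Random(0)
counts = {}
for _ in range(1000):
    outcome = classify(scramble(SOURCE, rng))
    counts[outcome] = counts.get(outcome, 0) + 1
print(counts)  # mostly syntax and runtime errors; a correct run is rare even here
```

In this toy the output survives only when the coin happens to spare every one of the handful of names; for a real model with thousands of symbols, that surviving fraction collapses toward nothing, which is the point of the next paragraph.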
Once such an action is taken, there is a high probability that the nonsense contains syntax errors or runtime errors, a low probability that it contains only logical errors, and a vanishingly small probability that its output is still correct. The probabilities involved are a matter for Chaitin's constant, which cannot be computed by a Turing machine but can still be approximated, and the relevant value is extremely low. The further issue is how to use this nonsense-construction method to build large, complex systems. On that I can only say it is just a beginning, with no complete plan of action. I still think it is meaningful, because stating it can break some deep-rooted habits.

Gödel once spoke of the prejudice of the times that the mind is equivalent to the brain (brain-computer equivalence), and suggested that artificial intelligence may exist yet be impossible to find and to prove (proof being a formalist method, and proof-based search being required). He held that the power of the (human) mind is infinite, while formalist methods have inherent limitations. As a follower of Gödel, I seem to see a metaphor here: the power of human beings is infinite, not because the symbolic models of thought sketched by human hands are so powerful and universal, but because humans are part of the world, and the world itself is infinite!

And that is the end of this chat.