LEO@BGERUG51.BITNET (08/26/88)
Date: Tue, 23 Aug 88 09:23 EDT
From: LEO%BGERUG51.BITNET@MITVMA.MIT.EDU
Subject: backward path and religions
To: ailist@ai.ai.MIT.EDU
X-VMS-To: IN%"ailist@ai.ai.mit.edu"

In Pattern Recognition, an intelligent system with a backward path in its
reasoning can be used to try to find the appearance of a certain known
pattern in an input signal.  The system will probably always see this
required pattern if it tries hard enough, even if it is not there.  On the
other hand, the backward path is a very useful tool in the recognition of
patterns in the presence of noise and defects.  After forward-backward
resonance, eliminating the noise and correcting the defects, the system
can recall the complete pattern.

When using such a system in a real-world environment, how and/or when can
we know that the pattern recognition is false?  How do human or animal
brains deal with this problem?  (This is almost a discussion of subjective
versus objective.)

Secondly, consider a self-learning, self-organizing neural network.
Furthermore, suppose this system is searching for answers to questions in
a field of which it has almost no knowledge.  In this case, the system
might ask for things that it can never find.  But, because of its
self-learning, self-organizing character, it will build answers, imaginary
ones, if it keeps asking long enough.  In my opinion, this is the essence
of religions and superstitions.  I presume that the number of layers, or
the 'distance' between sense perception and the abstract thinking level,
is too big.  Hence, when we have to deal with an extensive neural network,
like the human brain, that is working far beneath its capabilities, it
will be able to create imaginary 'objects' and speculations.

I think that we can also put this feature in another perspective.  Animals
with small brains are able to make a distinction between good and bad
circumstances.  A lot of animals with larger brains are able to make a
distinction within the good circumstances, and choose a leader: the best.
Humans can go further: they are able to create a leader or leaders
existing only in their thoughts.

If we were able to build large neural networks with these self-learning
and self-organizing features, what would be the influence of the structure
of such a system on these problems?  How can we avoid or use them?
Building models or making suppositions is a very important part of
intelligence, but how can we control an AI system in this, when we are
only able to control the dimensions of the system and the features of its
basic parts, the neurons?

I don't want to insult religious people, or to start a discussion about
religion or belief.  I would simply appreciate a reply from anybody with a
clearer view or some good ideas about these subjects...

----------------------------------------
L. Vercauteren
AI-section, Automatic Control Laboratory
State University of Ghent, Belgium
e-mail LEO@BGERUG51.BITNET
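The forward-backward "resonance" Vercauteren describes can be made concrete
with a Hopfield-style associative memory.  That choice is an assumption for
illustration only: the posting names no particular architecture, and the
NumPy sketch below is a minimal modern rendering, not anyone's actual
system.  It stores two patterns, repairs a defective probe, and then shows
the failure mode the posting asks about: a pure-noise probe also settles
onto an attractor, so the net "sees" a pattern that was never in the input.

    import numpy as np

    rng = np.random.default_rng(1)

    # Two stored +1/-1 patterns (orthogonal, so recall is clean).
    patterns = np.array([[ 1, -1,  1, -1,  1, -1,  1, -1],
                         [ 1,  1,  1,  1, -1, -1, -1, -1]])
    n = patterns.shape[1]

    # Hebbian storage: sum of outer products, with a zeroed diagonal.
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)

    def recall(probe, sweeps=5):
        """Settle the network; the settling is the 'resonance'."""
        s = probe.astype(float).copy()
        for _ in range(sweeps):
            for i in range(n):                 # asynchronous updates
                s[i] = 1.0 if W[i] @ s >= 0 else -1.0
        return s.astype(int)

    # A probe with two defective bits is repaired back to pattern 0 ...
    noisy = patterns[0].copy()
    noisy[[0, 1]] *= -1
    print(recall(noisy))                       # [ 1 -1  1 -1  1 -1  1 -1]

    # ... but a pure-noise probe ALSO settles on an attractor assembled
    # from the stored patterns: structure that was never in the input.
    print(recall(rng.choice([-1, 1], size=n)))

Asynchronous updates are used because a synchronous sweep on a net this
small can oscillate in a two-cycle instead of settling; the behaviour on
the noise probe is exactly the "sees the required pattern even if it is
not there" problem raised above.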
ok@quintus.UUCP (Richard A. O'Keefe) (08/30/88)
Path: quintus!ok
From: Richard A. O'Keefe <quintus!ok@Sun.COM>
Newsgroups: comp.ai.digest
Subject: Re: backward path and religions
Date: Fri, 26 Aug 88 06:20 EDT
References: <19880826025229.6.NICK@HOWARD-JOHNSONS.LCS.MIT.EDU>
Sender: quintus!news@Sun.COM
Reply-To: Richard A. O'Keefe <quintus!ok@Sun.COM>
Organization: Quintus Computer Systems, Inc.
Lines: 51

In article <19880826025229.6.NICK@HOWARD-JOHNSONS.LCS.MIT.EDU> LEO@BGERUG51.BITNET writes:
>Secondly, consider a self-learning, self-organizing neural network.
>Furthermore, suppose this system is searching for answers to questions in
>a field of which it has almost no knowledge.  In this case, the system
>might ask for things that it can never find.  But, because of its
>self-learning, self-organizing character, it will build answers, imaginary
>ones, if it keeps asking long enough.  In my opinion, this is the essence
>of religions and superstitions.  I presume that the number of layers, or
>the 'distance' between sense perception and the abstract thinking level,
>is too big.

I'm canny enough not to ask what a "self-learning" system is ...

"Building imaginary answers" sounds like hypothesis formation in general.
This is the essence of science!  Or rather, science = making up stories +
trying to knock down other people's stories.  Does anyone seriously
suppose that the number of layers between sense perceptions and
superstring theory is small?  A range of diseases was attributed to
"filterable viruses" -- "virus" just being a word meaning "poison, venom"
-- on what really amounted to a stubborn faith that the germ theory of
disease could be extended beyond the range of sense data, years before
viruses were "observed".  Popular beliefs about the origins of life are
based on a very long series of inferences (and what is more, as
Cairns-Smith points out, are quite incompatible with the known behaviour
of the chemicals in question).  There is a serious illusion in talking
about modern science: we read instruments at least as much through
theories as through our eyes, and mistake remote inferences like "5 volts
across these terminals" for sense data.

To be iconoclastic, I'd like to suggest that the main difference between
societies in which science dominates and ones in which superstition
dominates is that the former have a sufficient surplus that they can
AFFORD to check their hypotheses.  In society X, there are such large
surpluses that the society can afford to force thousands of farmers out of
business in the interests of fighting inflation.  Society X can afford a
lot of agricultural experiments.  In society Y, there are no surpluses, so
farmer Z continues to put offerings in the spirit-house, because if he
tested his belief (by not making offerings) and he was wrong, it would
mean disaster.  Society Y is not going to do much science.  To put it
bluntly, if the risk from examining a practice is greater than the risk
from continuing it, it is _RATIONAL_ not to examine it.

This is the kind of thing that ethological and anthropological studies
should be able to illuminate: when will an animal explore new territory as
opposed to staying in its home range (and how does the animal's
"knowledge" of the availability of food in the home range affect this)?
Is there a detectable relationship between the "rigidity" of a society and
its surpluses?

I don't think that neural nets as such have anything to do with the case.
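O'Keefe's farmer-Z argument is, at bottom, an expected-cost comparison,
and a few lines make it concrete.  Every number below is invented for
illustration; nothing in the posting fixes them.

    # Farmer Z's choice as a toy expected-cost calculation.
    p_belief_true = 0.05    # assumed chance the offerings actually matter
    cost_offering = 1.0     # assumed annual cost of keeping the practice up
    cost_disaster = 100.0   # assumed loss if he stops and turns out wrong

    cost_continue = cost_offering                  # pay a little, always
    cost_examine  = p_belief_true * cost_disaster  # rarely, lose everything

    # 1.0 < 5.0: not testing the belief is the rational policy.
    print(cost_continue, cost_examine)

A surplus changes the verdict by shrinking the effective cost_disaster (a
society that can absorb one failed harvest can afford the experiment),
which is precisely the science/superstition asymmetry the posting
proposes.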
pluto%beowulf@UCSD.EDU (Mark E. P. Plutowski) (08/30/88)
To: comp-ai-digest@ucsd.edu
Path: sdcsvax!beowulf!pluto
From: Mark E. P. Plutowski <pluto%beowulf@ucsd.edu>
Newsgroups: comp.ai.digest
Subject: Re: backward path and religions
Summary: Religion is to Science as Unconscious is to Conscious thought.
Keywords: neural nets, explanation facility, backward chaining.
Date: Sat, 27 Aug 88 21:11 EDT
References: <19880826025229.6.NICK@HOWARD-JOHNSONS.LCS.MIT.EDU>
Sender: nobody%sdcsvax@ucsd.edu
Reply-To: Mark E. P. Plutowski <pluto%beowulf@ucsd.edu>
Organization: EE/CS Dept. U.C. San Diego
Lines: 67

In a previous article, LEO@BGERUG51.BITNET writes:
>
>In Pattern Recognition, an intelligent system with a backward path...
>...can be used to try to find the appearance of a certain known
>pattern in an input signal...
>
>Secondly, consider a self-learning, self-organizing neural network.
>Furthermore, suppose this system is searching for answers to questions
>...[of] which it has almost no knowledge.
>...because of its self-learning, self-organizing character,
>it will build answers, imaginary ones, if it
>keeps asking long enough.  In my opinion, this is the essence of
>religions and superstitions.

A nice argument; I concur in spirit ;-}.  However, it begs a comment about
what it means to be an _imaginary answer_.  Not to kick off a long
discussion about what it means to be imaginary, let me present my point up
front.  Loosely stated: our answers come out of conscious thought,
otherwise they would be impossible to record or communicate.  But this
conscious thought is driven by unconscious motivations and holistic
formulations, which may or may not fit into the serial symbolic interface
required to communicate with the rest of the world.

{Given a neural network coupled to a symbolic interface, which is used to
explain the actions of the network: the neural net perceives the optimum,
and behaves in a way that exploits this perception.  The symbolic
interface tries to explain this behavior as best it can.  Sometimes its
capabilities are sufficient; sometimes, however, the network's behavior
falls into no neat semantic category, other than that it "got the desired
results," i.e., it perceived the optimum.}

From our unconscious thought, feelings, hunches, and intuition are
expressed consciously as "common sense," "mathematically interesting,"
"symmetrical," "elegant," and "beautiful."  These concepts may be "felt"
in a way that cannot be communicated to others in a rational fashion.
(Although such an individual may indeed be perceiving a profound truth,
since it is unscientific in nature, it is given a low certainty factor by
the rest of the population.)  This individual uses this perception to
motivate the discovery of provable truths which can be written in a form
communicable to the general population.  Then it becomes science.  Until
then, it remains only personal belief, an imagination of what is possible.

Aside: Einstein believed that imagination was the key to _his_ brand of
science, as opposed to the 99% perspiration, 1% inspiration mix which was
apparently the motivation of Edison's brand of science.

P.S. Thanks to the author of the posting I quoted above, for adeptly
bringing this argument back to AI.
----------------------------------------------------------------------
Mark Plutowski                         INTERNET: pluto%cs@ucsd.edu
Department of Computer Science, C-024            pluto@beowulf.ucsd.edu
University of California, San Diego    BITNET:   pluto@ucsd.bitnet
La Jolla, California 92093             UNIX:     {...}!sdcsvax!pluto
----------------------------------------------------------------------
"it was as small as the hope in a dead man's eyes."  (radio ad)
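Plutowski's braced scenario lends itself to a direct sketch.  Everything
below is invented for illustration: the scores stand in for the network's
"perception of the optimum," and the little rule list stands in for the
symbolic interface.

    # A numeric "network" picks the optimum; a symbolic layer tries to
    # explain the choice.  Scores and rules are invented for illustration.
    scores = {"hedge": 0.61, "probe": 0.58, "retreat": 0.12}
    best = max(scores, key=scores.get)     # exploit the perceived optimum

    # The symbolic interface: (predicate, explanation) rules.
    rules = [
        (lambda a: a == "retreat",      "avoided a known hazard"),
        (lambda a: scores[a] > 0.9,     "overwhelming evidence in favour"),
    ]

    explanation = next((text for pred, text in rules if pred(best)), None)
    if explanation is None:
        # No neat semantic category: the interface can only report success.
        explanation = 'no rule applies; it "got the desired results"'

    print(best, "->", explanation)

Here the chosen action falls under no rule, so the interface is reduced to
reporting that the net got the desired results -- the gap between holistic
perception and the serial symbolic channel that the posting describes.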