mahajan@fornax.UUCP (Sanjeev Mahajan) (06/04/90)
After reading the articles on AI by Searle and the Churchlands, I sent the following letter to Scientific American. I generally do not read this newsgroup, so if you have any comments (or if you think that the topic has been beaten to death and that I have nothing new to say, you can flame away to your heart's content), please e-mail. Anyway, here it goes:

The Editor,
Scientific American.

Sir,

The debate about Artificial Intelligence in the January issue of Scientific American has left me puzzled. If the discussion is supposed to be on a metaphysical level, then why does a journal which primarily deals with scientific issues and developments accept such a paper? However, if the discussion is supposed to be scientific, then I have to take issue with it. Questions such as 'Can machines think?' or 'Can syntax by itself be constitutive of semantics?' are not scientific questions, for the simple reason that 'thinking' and 'semantics' are not scientifically well-defined terms. Hence, in the domain of science, these questions are without any meaning.

When a physicist comes up with, say, a theory of gravitation, (s)he does not ask whether gravity really exists (whatever that may mean), that is, whether physical reality actually has a force of gravity somehow embedded in it (whatever that may mean). (S)he is happy if the theory matches, or is at least consistent with, all the observed 'facts'. Similarly, when a mathematician develops a theory of ordinals and, using this theory, defines the natural numbers to be a certain special kind of ordinal, (s)he does not ask whether these 'natural numbers' are the 'real' natural numbers. (S)he is happy if they satisfy the standard Peano axioms. So in this sense all scientists are, either implicitly or explicitly, behaviorists, whom Searle for some curious reason lambasts.

It is almost axiomatic that all scientific theories are in some sense models of physical reality (whatever that may mean). A scientist observes a wide array of seemingly disparate facts, tries to develop a theory that explains these facts, and moreover predicts, using the theory, certain other 'interesting' behavior of reality. If this behavior is actually observed, then the theory is, in a certain sense, successful, and the more its predictions match the observations, the better it is. Of course, the above scenario is a bit simplistic, as a single observation that is inconsistent with the theory may or may not be a disaster for it. A whole array of observations that either cannot be explained by the theory or are inconsistent with it has to accrue before there is a general feeling in the scientific community that the theory needs to be radically modified or superseded by a better one (there are other problems with the scenario I have presented, but I will not delve into them).

The point is that scientists do not debate whether their theories are actual reflections of reality. They are happy with theories that MODEL (and that is the keyword here) reality up to their satisfaction, given the current state of observational and experimental techniques. To the extent that human intelligence (whatever this may mean) can be formalized, I see no problem with its being 'modelled' by a computer, at least theoretically. This much should be clear to anyone who has given any thought to this whole issue. So where DOES the problem arise?
The problem arises only in those domains where our intuitive notion of intelligence cannot be, or has not been, formalized. But then the problem of whether 'machines can think' is a purely metaphysical question, in the same spirit as the question of whether physical reality behaves according to the physical theories (present or future). Physicists and mathematicians have come a long way in freeing themselves from metaphysical bondage: the notion of absolute time and space, that is, time and space existing independently of clocks and measuring rods, has been given up in physics, and mathematicians have realized the futility of the debate over the existence of real and potential infinities. It is high time Artificial Intelligence freed itself from metaphysical speculation and concentrated on developing good, testable theories of human intelligence, if it is to progress scientifically.