kenp@ntpdvp1.UUCP (Ken Presting) (07/12/90)
kohout@wam.umd.edu (Robert C. Kohout) writes:
> . . . For example, I once pointed out what I
> consider to be a serious flaw in Searle's CR logic. One of Searle's
> defenders, using the restatement of Searle's position published in
> "Scientific American", replied that Searle only uses the Chinese Room
> to establish the fact that (pardon me if I don't get this exactly)
> "syntax is neither a necessary or sufficient condition for semantics."
>
> I happen to agree with this statement. Because of this, I was told that I
> needn't worry about the Chinese Room at all - I'm beyond that point. To
> restate it in my own, quasi-logical terms, I was told that the truth of the
> conclusion justified the fallacy of the argument. Needless to say, this
> bothered me enormously, but it obviously never occurred to the pro-Searlite
> that there was anything wrong with such reasoning. I believe that this type

Bob, the fallacy here is to suppose that "you needn't worry about Searle's
argument" implies "Searle's argument is valid". Because you have misread
Searle's argument, you do not understand it. If for some reason it is
important to you to understand Searle, then you should take the time to
re-read his latest paper. If you do not wish to take the time, don't worry -
even after you come to understand the argument, it will not change your
opinion, since you already agree with the conclusion.

> of exchange is typical of the entire debate. On the one side, there are the
> rationalists - Computer types, fluent in mathematics and confident of the
> powers of reason. . . .
>
> . . . "If they would only give us a definition for
> intelligence, I could prove that it is possible to create an intelligent
> machine." Never mind that no one has ever adequately defined intelligence -
> this position is a truism.
> If it were possible to represent intelligence
> by a purely formal definition, then of course we could program a purely
> formal system to be 'intelligent' (this of course assumes that the problem
> with defining intelligence is the same problem implicit in defining any
> abstract concept - 'truth', 'beauty', 'meaning', etc., and that if we
> could truly define one we could define them all in a similar fashion.).
> . . .

Bob, you can't possibly mean this as you have stated it. I assume you are
familiar with unsolvable problems in computability theory. Surely the
Halting problem is "represented by a purely formal definition", yet it is
impossible to "program a purely formal system" to solve it.

> A recent poster referred to a proof of the fact that semantics cannot always
> be represented syntactically. I do not have the details here, and I hope
> that I have stated his position correctly. I do not question the proof or
> his interpretation of it, though I am not sure I follow either, but I
> would like to make the following point. The poster referred to the fact that
> semantic values, such as 'truth' cannot be represented syntactically.
> Somehow, these things are known, but there exists no formalism for their
> representation. . . .

You have very seriously misstated my position. You are apparently unfamiliar
with the work of Alfred Tarski on the definition of truth. For a very
accessible discussion, you may want to read "The Semantic Conception of
Truth," _Philosophy and Phenomenological Research_, 1944. A more formal
treatment is "The Concept of Truth in Formalized Languages," in _Logic,
Semantics, Metamathematics_ (New York: Oxford, 1956).

Roughly, here are the main points:

1) Any language which includes a predicate for "truth" and allows referring
expressions to denote sentences of the language is "semantically closed",
and every formal system stated in that language will be inconsistent,
because of the Liar paradox.
2) Truth, as it applies to the sentences of a particular language, can be
precisely defined, but only in a *second* language, called a "metalanguage".

> So is Searle right? Forgetting, and forgiving his Chinese Room fiasco, which
> as I have said is based upon inexcusably bad logic, he is hard to argue
> with, to a point. His views in regards the relationship between syntax and
> semantics seem correct to me, but I do not see that this implies that we
> will never create an intelligent machine. That, however, is simply my own
> opinion, and I can offer no well reasoned argument for it.
>
> Bob Kohout

Searle is *not* arguing that we will never create an intelligent machine.
He is arguing (I will try to put this into your terms) that no program can
be a sufficient condition for intelligence. If you believe:

1. Programs are just syntax (or just mechanical)
2. Syntax is not a sufficient condition for semantics
3. Semantics is a necessary condition for intelligence

then you have no choice but to accept Searle's conclusion. For my own part,
I deny (1). I think that programs have semantics as well as syntax.

Ken Presting ("Fluent in, oh, well, just forget it")
forbis@milton.u.washington.edu (Gary Forbis) (07/12/90)
In article <601@ntpdvp1.UUCP> kenp@ntpdvp1.UUCP (Ken Presting) writes:
>If you believe:
>
>1. Programs are just syntax (or just mechanical)
>2. Syntax is not a sufficient condition for semantics
>3. Semantics is a necessary condition for intelligence
>
>then you have no choice but to accept Searle's conclusion. For my own
>part, I deny (1). I think that programs have semantics as well as syntax.
>
>Ken Presting ("Fluent in, oh, well, just forget it")

I'm pretty sure that 1 can not only be denied but be supportably denied. I
know of few, if any, who can write programs in a language without knowing
the semantics associated with the syntax. Further, some languages may have
different semantics on different machines, such that a program may be
syntactically correct on both machines yet produce incorrect results on
one. One of the most common errors I have seen from people well versed in
COBOL on IBM machines who start working here (on a UNISYS machine) is the
attempt to use GO TO to exit an outer level of a set of nested performs.
This information cannot be gleaned from the syntax alone.

I find 2 questionable. I learn about computer languages by reading the
manuals. All of the information I gain from the manuals has been presented
syntactically. If semantics exist somewhere else they cannot be
communicated.

-----------------------------------

On another related subject:

I thought the position held by strong AI was: for all persons A and all
things B and all universal Turing machines D, if a person A can understand
a thing B then there exists a program C such that a universal Turing
machine D running program C understands the thing B. I don't think any
claims are made about machine D understanding independent of program C.
Even if there existed a program which, when running, would be said to
understand English, I doubt that any sales person would claim the computer
understood English without bundling the program with it.
Probably the most that would be claimed is "if you buy an 'English
understanding program' this machine can understand English." Likewise, I
doubt the sales person would claim the program understood English
independent of the machine. I think this amounts to a claim that either
all universal Turing machines have intrinsic semantics, or understanding
must occur within a context.

--gary forbis@milton.u.washington.edu
dredick@bbn.com (Barry Kort) (07/14/90)
In article <4977@milton.u.washington.edu> forbis@milton.u.washington.edu
(Gary Forbis) writes:

> If semantics exist somewhere else they cannot be communicated.

Semantics can be quite hard to communicate. Recall the scene in the Helen
Keller story where her teacher finger-spells "w-a-t-e-r" and then plunges
the girl's hand into a cold stream? Up until that time, finger-spelling
was just a meaningless game of symbol manipulation. Once Helen got the
idea that symbols stood for something, her education took off like a
rocket.

Barry Kort          bkort@bbn.com
Visiting Scientist  BBN Labs
kenp@ntpdvp1.UUCP (Ken Presting) (07/16/90)
In article <4977@milton.u.washington.edu>, forbis@milton.u.washington.edu
(Gary Forbis) writes:
> In article <601@ntpdvp1.UUCP> kenp@ntpdvp1.UUCP (Ken Presting) writes:
> >If you believe:
> >
> >1. Programs are just syntax (or just mechanical)
> >2. Syntax is not a sufficient condition for semantics
> >3. Semantics is a necessary condition for intelligence
> >
> >then you have no choice but to accept Searle's conclusion. For my own
> >part, I deny (1). I think that programs have semantics as well as syntax.
>
> I find 2 questionable. I learn about computer languages by reading the
> manuals. All of the information I gain from the manuals has been presented
> syntactically. If semantics exist somewhere else they cannot be
> communicated.

I'm glad you brought up the issue of learning a programming language,
because that is a good way to clarify the issues behind the Chinese Room.

(2) does NOT say that semantic information cannot be communicated by using
syntax - that is, by generating and transmitting strings of symbols. It
only says that semantics requires something in addition to syntax. In the
case of computer languages, the "extra information" is usually smuggled
in. When a programmer's reference gives a table of algebraic operators,
and says:

   +    addition
   *    multiplication
   /    division

is the manual simply identifying the symbols, or is it specifying the
meaning of the symbols? I think it's doing both.

The most important case (for present purposes) of semantic information in
programmer's references is in the character encoding scheme. *Letters*,
printed on the page or displayed on the screen, are *physical objects with
meaning*. IMO, when a program uses literal strings, those expressions
denote *linguistic entities*. The literals can't just denote shapes of
dots, because a variety of different shapes can represent any letter.
So the program is not just "pure syntax" - at least some of the
expressions in the program have the same meaning as ordinary English
expressions.

Ken Presting ("My programs are not meaningless - sorry about yours")
daryl@oravax.UUCP (Steven Daryl McCullough) (07/17/90)
In article <603@ntpdvp1.UUCP>, kenp@ntpdvp1.UUCP (Ken Presting) writes:
> So the program is not just "pure syntax" - at least some of the
> expressions in the program have the same meaning as ordinary English
> expressions.

Ken, when you first mentioned your objection to the premise that programs
are "pure syntax, without semantics", I assumed that you had in mind the
operational semantics of the programming language; that is, the way the
machine executes programs written in the language. Every programming
language that can be compiled or interpreted must have an operational
semantics, and so every program automatically has a semantic, as well as
a syntactic, component.

However, you seem to be saying something stronger here; that programs can
be given some kind of linguistic meaning, in the way that English
sentences can. Although this is undoubtedly true, this meaning is not, in
my opinion, inherent in the program, but is imposed on the program by the
user.

For example, take an expert system for medical diagnosis. When the expert
system prints out a message like "The patient shows indications of
suffering from Lyme disease" (or whatever), this string of characters can
be given its usual English interpretation. However, this interpretation
is not the only interpretation possible; for instance, the inputs and
outputs can be interpreted as integers, and the program can then be
interpreted as computing some tremendously complicated recursive
function. It is quite possible for two or more interpretations of a
program's output to be correct simultaneously.

Because the interpretation of a program is not unique, I believe that,
for anyone to demonstrate that they have an artificially intelligent
machine, it is not enough to give the machine, or the program---one must
also give the proper *interpretation* of the inputs and outputs. A
program may be intelligent according to some interpretations, and not
according to others.

Daryl McCullough
forbis@milton.u.washington.edu (Gary Forbis) (07/17/90)
In article <1597@oravax.UUCP> daryl@oravax.UUCP (Steven Daryl McCullough) writes:
>In article <603@ntpdvp1.UUCP>, kenp@ntpdvp1.UUCP (Ken Presting) writes:
>
>> So the program is not just "pure syntax" - at least some of the
>> expressions in the program have the same meaning as ordinary English
>> expressions.

I agree with Ken here so I will respond.

>Ken, when you first mentioned your objection to the premise that
>programs are "pure syntax, without semantics", I assumed that you had
>in mind the operational semantics of the programming language; that
>is, the way the machine executes programs written in the language.
>Every programming language that can be compiled or interpreted must
>have an operational semantics, and so every program automatically has
>a semantic, as well as a syntactic, component.

This is true but insufficient to describe what one means when one talks
about a program.

>However, you seem to be saying something stronger here; that programs
>can be given some kind of linguistic meaning, in the way that English
>sentences can. Although this is undoubtedly true, this meaning is not,
>in my opinion, inherent in the program, but is imposed on the program
>by the user.

I was prepared for this response. If when you refer to a program you
refer to the syntax then what you say is true. When I refer to a program
I refer to the intended semantics the syntax will generate. A program can
have a bug in it even though syntactically correct because we mean more
by "program" than we mean by "syntax." If you tell me what way English
sentences are given meaning then I might be able to tell you if programs
are given meaning in the same way.

>For example, take an expert system for medical diagnosis. When the
>expert system prints out a message like "The patient shows indications
>of suffering from Lyme disease" (or whatever), this string of
>characters can be given its usual English interpretation.
>However, this interpretation is not the only interpretation possible;
>for instance, the inputs and outputs can be interpreted as integers, and
>the program can then be interpreted as computing some
>tremendously complicated recursive function. It is quite possible for
>two or more interpretations of a program's output to be correct
>simultaneously.

I was about to reuse this text, substituting "expert" for program, and
dropping the word "system", but have chosen not to do so. I still have to
generate an equivalent number of new lines so postnews will let me send
this.

There is nothing saying any human utterance needs to be interpreted in
any particular fashion. Any argument about how computer output can be
interpreted can also be applied to humans. Any attempt to separate
interpretations from intended interpretations is doomed. I could
interpret the English sentence "The sky is blue." as "The cat is dead."
but the interpretation would be wrong. While it is true that "it is quite
possible for [there to be] two or more interpretations of a program's
output" [my insertion], only one will be correct and that one is the one
intended by the programmer.

>Because the interpretation of a program is not unique, I believe that,
>for anyone to demonstrate that they have an artificially intelligent
>machine, it is not enough to give the machine, or the program---one
>must also give the proper *interpretation* of the inputs and outputs.
>A program may be intelligent according to some interpretations, and
>not according to others.

The proper way to interpret output formatted in English is in English.
What does it mean to "give the proper *interpretation* of the inputs"
when to be deemed an artificial intelligence the system must handle
unpredictable inputs? English is a living language; interpretations
change over time.

>Daryl McCullough

--gary forbis@milton.u.washington.edu
daryl@oravax.UUCP (Steven Daryl McCullough) (07/18/90)
In article <5146@milton.u.washington.edu>, forbis@milton.u.washington.edu
(Gary Forbis) writes:
> In article <1597@oravax.UUCP> daryl@oravax.UUCP (Steven Daryl McCullough) writes:
> >Ken, when you first mentioned your objection to the premise that
> >programs are "pure syntax, without semantics", I assumed that you had
> >in mind the operational semantics of the programming language; that
> >is, the way the machine executes programs written in the language.
> >Every programming language that can be compiled or interpreted must
> >have an operational semantics, and so every program automatically has
> >a semantic, as well as a syntactic, component.
>
> This is true but insufficient to describe what one means when one talks
> about a program.

I'm not sure what you mean by this. For many (most? all?) purposes, the
operational semantics is all you need to know.

> >However, you seem to be saying something stronger here; that programs
> >can be given some kind of linguistic meaning, in the way that English
> >sentences can. Although this is undoubtedly true, this meaning is not,
> >in my opinion, inherent in the program, but is imposed on the program
> >by the user.
>
> I was prepared for this response. If when you refer to a program you
> refer to the syntax then what you say is true.

No, to me the important fact about a program is not its syntax, but its
operational semantics; how a machine running the program would behave.

> When I refer to a program I refer to the intended semantics the syntax
> will generate. A program can have a bug in it even though
> syntactically correct because we mean more by "program" than we mean
> by "syntax."

I don't think we're in any serious disagreement here. To say that a
program "has a bug" or "is intelligent", it is necessary to give an
interpretation, in addition to the operational semantics.
Our only disagreement is that you think that the programmer has the final
say on what the "real" semantics is, while I don't agree; to me, any
consistent interpretation is as correct as any other.

> There is nothing saying any human utterance needs to be interpreted in
> any particular fashion. Any argument about how computer output can be
> interpreted can also be applied to humans.

Agreed. The problem is there with human communication, as well.

> Any attempt to separate interpretations from
> intended interpretations is doomed. I could interpret the English
> sentence "The sky is blue." as "The cat is dead." but the interpretation
> would be wrong. While it is true that "it is quite possible for [there
> to be] two or more interpretations of a program's output" [my insertion],
> only one will be correct and that one is the one intended by the
> programmer.

Who made the programmer God? If I pay for a program, I can interpret it
any way I want to. It seems to me that any interpretation that can be
consistently maintained is "correct", and this includes both statements
in English and programs. The example of the medical expert system program
is to the point here: the expert system is *simultaneously* computing
some recursive function of integers, and is diagnosing disease. Perhaps
the programmer intended it to only diagnose disease, but so what?

> >Because the interpretation of a program is not unique, I believe that,
> >for anyone to demonstrate that they have an artificially intelligent
> >machine, it is not enough to give the machine, or the program---one
> >must also give the proper *interpretation* of the inputs and outputs.
> >A program may be intelligent according to some interpretations, and
> >not according to others.

> The proper way to interpret output formatted in English is in English.
> What does it mean to "give the proper *interpretation* of the inputs"
> when to be deemed an artificial intelligence the system must handle
> unpredictable inputs?
> English is a living language; interpretations change over time.

The inputs to a computer are not English, they are electrical signals. An
enormous amount of interpretation is needed to get English out of it---the
highs and lows of the electrical signal must be interpreted as bits, and
groups of eight bits interpreted as numbers from 0 to 255, which are
interpreted as characters.

Daryl McCullough
forbis@milton.u.washington.edu (Gary Forbis) (07/18/90)
In article <1598@oravax.UUCP> daryl@oravax.UUCP (Steven Daryl McCullough) writes:
>In article <5146@milton.u.washington.edu>, forbis@milton.u.washington.edu
>(Gary Forbis) writes:
>> In article <1597@oravax.UUCP> daryl@oravax.UUCP (Steven Daryl McCullough) writes:
>> >Every programming language that can be compiled or interpreted must
>> >have an operational semantics, and so every program automatically has
>> >a semantic, as well as a syntactic, component.
>>
>> This is true but insufficient to describe what one means when one talks
>> about a program.
>
>I'm not sure what you mean by this. For many (most? all?) purposes,
>the operational semantics is all you need to know.

It certainly isn't. I spend most of my time deciding how a particular
program should behave within a given context. I write the syntax such
that (to the best of my abilities) the operational semantics agree with
the behaviors I am modelling. One cannot know by the operational
semantics alone that the program has a bug. What does it mean for a
program to have a bug if the program is the syntax and the associated
operational semantics?

>No, to me the important fact about a program is not its syntax, but
>its operational semantics; how a machine running the program would
>behave.

To me the important fact about a program is how it is intended to behave.

>Our only
>disagreement is that you think that the programmer has the final say
>on what the "real" semantics is, while I don't agree; to me, any
>consistent interpretation is as correct as any other.

How can one have a consistent interpretation and a bug at the same time?
It is through the inconsistencies between interpretation and operational
semantics that we know bugs exist. I know when a program I did not write
has a bug in it because I know the programmer's intent.

>Who made the programmer God? If I pay for a program, I can interpret
>it any way I want to.
>It seems to me that any interpretation that can
>be consistently maintained is "correct", and this includes both
>statements in English and programs.

Don't ask me to fix any bugs in a program you insist on interpreting in a
way other than the programmer intends.

>The inputs to a computer are not English, they are electrical signals.

I don't know about you but I poke keys with English symbols on them and
the screen displays the same symbols. The input to my computer is English
symbols which the machine converts to electrical signals for processing.
I am pleased by the machine's consideration in converting these signals
back into English symbols when it is done with them.

>Daryl McCullough

--gary forbis@milton.u.washington.edu
daryl@oravax.UUCP (Steven Daryl McCullough) (07/18/90)
In article <5187@milton.u.washington.edu>, forbis@milton.u.washington.edu
(Gary Forbis) writes:
> One cannot know by the operational semantics alone that the
> program has a bug. What does it mean for a program to have a bug if the
> program is the syntax and the associated operational semantics?

I agree---for a program to have a "bug" it is necessary to go beyond the
operational semantics and give the purpose of the program. However, there
is no reason for there to be only one unique purpose of the program; for
example, a graphics program could be used to teach children the rudiments
of geometry, or it could be used to design houses, etc. Whether the
program is "buggy" depends on the actual purpose the program is being
used for, not the purpose the programmer intended. (Of course, the
programmer only promises that the program is bug-free for some small
number of purposes.)

> Don't ask me to fix any bugs in a program you insist on interpreting
> in a way other than the programmer intends.

Okay, you're off the hook.

Daryl McCullough
kenp@ntpdvp1.UUCP (Ken Presting) (07/25/90)
>
> (Gary Forbis) writes:
> >In article <603@ntpdvp1.UUCP>, kenp@ntpdvp1.UUCP (Ken Presting) writes:
> >
> >> So the program is not just "pure syntax" - at least some of the
> >> expressions in the program have the same meaning as ordinary English
> >> expressions.
>
> I agree with Ken here so I will respond.

Gary has attempted to defend a view which is much stronger than I would
advocate myself.

> (Daryl McCullough) wrote:
> >For example, take an expert system for medical diagnosis. When the
> >expert system prints out a message like "The patient shows indications
> >of suffering from Lyme disease" (or whatever), this string of
> >characters can be given its usual English interpretation. . . .

I want to consider (for the moment) NOT the behavior of the computer
which prints the sentences, but rather the meaning of the program which
does the printing. Take the instruction:

   printf("The patient shows ... Lyme disease\n");

I am NOT saying that the quoted string refers to a patient or a disease.
I think Daryl is right about the variability of the interpretation of the
printed symbols. What I AM saying is that the *source code* refers to
printed letters and newlines, in exactly the same way that quoted poetry
does:

   "Let us go then, you and I/
    When the evening is spread out against the sky/
    Like a patient etherised upon a table/ ..."

   (From "The Love Song of J. Alfred Prufrock", by T. S. Eliot)

The crucial observation here is that both English and C include *quoted
strings* as syntactical elements. Sometimes the quoted strings include
typographic information (newlines) as well as just a sequence of letters.

> (Daryl:)
> >Because the interpretation of a program is not unique, I believe that,
> >for anyone to demonstrate that they have an artificially intelligent
> >machine, it is not enough to give the machine, or the program---one
> >must also give the proper *interpretation* of the inputs and outputs.
> >A program may be intelligent according to some interpretations, and
> >not according to others.
>
> (Gary:)
> The proper way to interpret output formatted in English is in English.
> What does it mean to "give the proper *interpretation* of the inputs"
> when to be deemed an artificial intelligence the system must handle
> unpredictable inputs? English is a living language; interpretations
> change over time.

I disagree with both of these positions. I think that the semantics of
the programming language include enough information to completely
determine the intelligence of the machines that implement any given
program. I should probably add that the issue is by no means
straightforward - run-time libraries, internal data representations,
peripheral hardware addresses, ad nauseam, make the semantics of
programming languages as complex as natural languages. Sure, the syntax
is easy. But semantics is a lot harder.

The interpretation of the program's *data* is a very different issue from
the interpretation of the *source code*. How to interpret formatted data
is the "symbol grounding problem", and solving it is probably equivalent
to defining "intelligence".

Ken Presting ("'God' is an unresolved external reference")
forbis@milton.u.washington.edu (Gary Forbis) (07/25/90)
Ken;

I realize I am not in either Daryl's or your league. If I cover material
I should be familiar with but am not, it is because most of what I say is
from my own introspection rather than formal training. Given this, I
still will ask for clarification.

In article <611@ntpdvp1.UUCP> kenp@ntpdvp1.UUCP (Ken Presting) writes:
>I want to consider (for the moment) NOT the behavior of the computer which
>prints the sentences, but rather the meaning of the program which does the
>printing. Take the instruction:
>
>   printf("The patient shows ... Lyme disease\n");
>
>I am NOT saying that the quoted string refers to a patient or a disease.
>The crucial observation here is that both English and C include *quoted
>strings* as syntactical elements.

I understand. I think I will refer back to this. No, I know I will refer
back to this. I suspect that much of my concern is that the programmer
meant something by the quoted text in the same way I mean something with
this text. I'm not sure I like having someone tell me any interpretation
of this text is as good as any other interpretation provided there can be
successful semantic mapping. I think I am agreeing with you.

>I think that the semantics of the
>programming language include enough information to completely determine
>the intelligence of the machines that implement any given program.

I am taken aback. This seems a lot stronger than anything I have claimed.
I like it but have reservations (still relating to that which I will
refer.)

>I should
>probably add that the issue is by no means straightforward - run-time
>libraries, internal data representations, peripheral hardware addresses,
>ad nauseam, make the semantics of programming languages as complex as
>natural languages. Sure, the syntax is easy. But semantics is a lot
>harder.

With the aside that I doubt any English word is given the formality of
any particular machine implementation, I agree. I'm not sure my symbols
are grounded but they are heavily loaded.
>The interpretation of the program's *data* is a very different issue from
>the interpretation of the *source code*.

Is it just me? I think of *data* and *source code* as one and the same.
One program's source code is another program's data.

>How to interpret formatted data
>is the "symbol grounding problem", and solving it is probably equivalent
>to defining "intelligence".

Why am I not concerned about the "symbol grounding problem"? Is there
more to it than the ability to regurgitate contextually correct strings
of text? That is, provided you do not understand me to be refuting what
you think I mean, I mean exactly what you think, and for you to state
otherwise has no meaning.

Now back to the single line program fragment. Given that it is assumed
this fragment would appear within a program at exactly the place where it
would be appropriate to make the statement "The patient shows ... Lyme
disease," why should I assume the utterance refers to someone other than
"the patient" as defined by the context within which the statement was
made, or that "Lyme disease" refers to something else? Within the context
of a program a quoted string has some semantics attached and specified
and some which are attached but unspecified. These unspecified semantics
are determined consensually by the parties involved as they continue the
dialogue in what appears to them contextually correct symbols (which I
take to be a proof of the attachment of the correct semantics to the
utterances.) Are semantics relativistic?

>Ken Presting ("'God' is an unresolved external reference")

Gary ("God, I used 'context' and 'semantic' a lot in this text.")

"Imitation is the most sincere form of flattery."
daryl@oravax.UUCP (Steven Daryl McCullough) (07/26/90)
In article <611@ntpdvp1.UUCP>, kenp@ntpdvp1.UUCP (Ken Presting) writes:
> I disagree with both of these positions. I think that the semantics of
> the programming language include enough information to completely
> determine the intelligence of the machines that implement any given
> program.
> ...
> The interpretation of the program's *data* is a very different issue from
> the interpretation of the *source code*. How to interpret formatted data
> is the "symbol grounding problem", and solving it is probably equivalent
> to defining "intelligence".

Ken, it is possible that (inadvertently 8^) you are agreeing with me in
these paragraphs. In the first place, the semantics of the programming
language provides what I was calling the program's "operational
semantics". In the second place, the meaning of the data manipulated by
the program was what I may have called the "real-world" semantics of the
program. (As Gary Forbis points out, it is quite arbitrary to divide a
system into "program" and "data"---the data can be hard-coded into the
program, and contrarily, the program may be created by starting with an
interpreter and feeding the instructions in as data.)

What I believe about "symbol grounding" is that it doesn't happen; at
least not in a unique way---there will *always* be more than one
legitimate interpretation of data (either in a computer, or in a human
brain).

Daryl McCullough

P.S. Thanks for the reasonable tone of your recent postings; it seemed
for a while that the exchanges were getting angry. Maybe it was my
imagination.
kenp@ntpdvp1.UUCP (Ken Presting) (08/01/90)
In article <1613@oravax.UUCP>, daryl@oravax.UUCP (Steven Daryl McCullough) writes:

> . . . In the first place, the semantics of the
> programming language provides what I was calling the program's
> "operational semantics". In the second place, the meaning of the data
> manipulated by the program, was what I may have called the
> "real-world" semantics of the program. . . .

This is pretty close to my position, especially in that you are drawing a distinction between two different groups of symbols, each of which needs a semantics. But I don't think that the semantics of the programming language specifies only the sequence of operations.

I think the programming language refers to, among other things, token-types. A token-type is (roughly) the set of all symbol tokens of a certain shape; for example, one token-type for the letter "O" would include all Pica-sized annular ink stains on bond paper. (Some would say that the intention of a writer is also included in the concept of a symbol token, but I think we can omit that for now.)

One aspect of the Chinese Room which nobody disputes is that Searle in the room can read, understand, and follow the instructions of the program. For him to do so, the program must identify certain token-types as input cases, actions to take, and more token-types for output. Token-types are visible (or audible, etc.) physical events with distinctive shapes. So as the program tells Searle how to manipulate the symbol tokens, it is specifying a *physical event*, with specific *causal powers*. A Turing Machine program has this kind of semantics - it refers to symbol tokens on the tape, state changes, and motions.

> . . . (As Gary Forbis points out, it
> is quite arbitrary to divide a system into "program" and "data"--- the
> data can be hard-coded into the program, and contrarily, the program
> may be created by starting with an interpreter and feeding the
> instructions in as data.)

This is a very loose way of looking at the problem, I think.
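[The Turing Machine point above - that the program's semantics refers to symbol tokens on the tape, state changes, and motions - can be made concrete with a minimal sketch. This is an illustration only; the rule format and names are invented for the example.]

```python
# Sketch: each rule maps (state, read-token) to (write-token, motion,
# next-state). The program's semantics is exactly these physical acts:
# which token to write, which way to move, which state to enter.

def run_tm(rules, tape, state="start", pos=0, max_steps=1000):
    """Execute Turing Machine rules over a tape given as a string."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        sym = cells.get(pos, "_")            # "_" is the blank token-type
        write, move, state = rules[(state, sym)]
        cells[pos] = write                   # a specified physical event
        pos += {"R": 1, "L": -1}[move]       # a specified motion
    return "".join(cells[i] for i in sorted(cells))

# Flip every 0/1 on the tape until a blank is reached, then halt.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm(flip, "0110"))  # 1001_
```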
A programming language has a definite syntax, and it is that syntax which determines which strings of symbols are programs. Sure, you can write a different program that has the same I/O behavior by hard-coding some data. But if the source file is different, it's a different program, at least in the sense of being a different "word" in the grammar accepted by the compiler.

Note that the compiler cannot add the semantics to the program - a compiler is (at most) a translator, and is more likely to lose information than to add it. Linkers are a different story. They certainly do add object code, but only as instructed by the program's external symbol table. Loaders are more subtle, but if we consider only stand-alone programs, we can ignore them.

> What I believe about "symbol grounding" is that it doesn't happen; at
> least not in a unique way---there will *always* be more than one
> legitimate interpretation of data (either in a computer, or in a human
> brain).

The problem is not just assigning a unique meaning to the I/O symbols. By agreeing that the programming language has a semantics which includes references to physically identifiable events (if we still agree :-), we have eliminated the need to specify how the program is to be "hooked up". A computer that just reads and prints is just as "causally connected" to the real world as an aircraft flight director. But that's part of the problem.

I hate to use another analogy, but here goes: what is the difference between a computer and a numerically controlled machine tool? Both are programmed using a language with an operational semantics, and both can impose an intricate pattern on a blank workpiece (i.e. printer paper). Most modern tools feature interactive control by the machinist/operator. We have explained how a program can logically define a physical process, but we have not explained why we should bother to interpret the I/O data at all!
I don't think it will do to say that we don't need to do so - that the device is still useful and worth owning regardless. You wouldn't want to say that about people (at least not where your friends can hear you :-).

> Daryl McCullough
>
> P.S. Thanks for the reasonable tone of your recent postings; it seemed
> for a while that the exchanges were getting angry. Maybe it was my
> imagination.

I guess you didn't realize you were advocating slavery. (:-)

Ken Presting ("Machines of the world, Unite!")
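[Ken's earlier point - that hard-coding data yields a program with the same I/O behavior but a different source text, hence a different "word" in the compiler's grammar - can be sketched as follows. A minimal illustration; the function names and greeting example are invented.]

```python
# Sketch: two syntactically distinct programs, behaviorally identical.

def greet_from_data(name):
    """The greeting template arrives as data and is filled in."""
    template = "Hello, {}!"
    return template.format(name)

def greet_hardcoded(name):
    """The same template is baked directly into the source text."""
    return "Hello, " + name + "!"

# Identical I/O behavior, yet two different "words" of the language:
assert greet_from_data("Searle") == greet_hardcoded("Searle")
print(greet_from_data("Searle"))  # Hello, Searle!
```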