harnad@mind.UUCP (02/23/87)
DAVIS%EMBL.BITNET@wiscvm.wisc.edu wrote on mod.ai:

> Sure - there is no advantage in a conscious system doing what can
> be done unconsciously. BUT, and it's a big but, if the system that
> gets to do trick X first *just happens* to be conscious, then all
> future systems evolving from that one will also be conscious.

I couldn't ask for a stronger concession to methodological
epiphenomenalism.

> In fact, it may not even be an accident - when you
> consider the sort of complexity involved in building a `turing-
> indistinguishable' automaton, versus the slow, steady progress possible
> with an evolving, conscious system, it may very well be that the ONLY
> reason for the existence of conscious systems is that they are
> *easier* to build within an evolutionary, biochemical context.

Now it sounds like you're taking it back.

> Hence, we have no real reason to suppose that there is a 'why' to be
> answered.

You'll have to make up your mind. But as long as anyone proposes a
conscious interpretation of a functional "how" story, I must challenge
the interpretation by asking a functional "why?", and Occam's razor
will be cutting with me, not with my opponent. It is not the existence
of consciousness that's at issue (of course it exists) but its
functional explanation, and the criteria for inferring that it is
present in cases other than one's own.
--
Stevan Harnad (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
marty1@houem.UUCP (02/24/87)
I'm sorry if it's necessary to know the technical terminology of
philosophy to participate in discussions of engineering and artifice.
I admit my ignorance and proceed to make my point anyway.

In article <552@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes
(I condense and paraphrase):

> DAVIS%EMBL.BITNET@wiscvm.wisc.edu wrote on mod.ai:
> > ... if the system that [does] X first [is] conscious, then all
> > future systems evolving from that one will also be conscious.
> I couldn't ask for a stronger concession to methodological
> epiphenomenalism.

In 25 words or less, what's methodological epiphenomenalism?

> > In fact ... [maybe] conscious systems ... are
> > *easier* to build within an evolutionary, biochemical context.
> Now it sounds like you're taking it back.

I think DAVIS is just suggesting an alternative hypothesis.

> > Hence, we have no real reason to suppose that there is a 'why' to be
> > answered.

Then why did DAVIS propose that "easier" is "why"? Let me propose
another "why."

Not long ago I suggested that a simple unix(tm) command like "make"
could be made to know when it was acting, and when it was merely
contemplating action (see the first sketch below). It would then not
only appear to be conscious, but would thereby work more effectively.

Let us go further. IBM's infamous PL/I Checkout Compiler has many
states, in each of which it can accept only a limited set of commands
and will do only a limited set of things. As user, you can ask it what
state it's in, and it can even tell you what it can do in that state,
though it doesn't know what it could do in other states. But you can
ask it what it's doing now, and it will tell you. It answers questions
as though it were very stupid, but dimly conscious.

Of course, the "actuality" of consciousness is private, in that the
question of whether X "is conscious" can be answered only by X. An
observer of X can only tell whether X "acts as though it were
conscious." If the observer empathizes with X, that is, observes
him/her/it-self as the "same type of being" as X, the "appearance" of
consciousness becomes evidence of "actuality."

I propose that we pay less attention to whether we are the "same type
of being" as X and more attention to the (inter)action. If expert
systems can be written to tell you an answer, and also to tell you how
they got the answer, it should not be hard to write a system like the
Checkout Compiler, but with a little more knowledge of its own
capabilities (see the second sketch below). That would make it a lot
easier for an inexpert user to interact with it.

Consider also the infamous "Eliza" as a system that is not conscious.
At first it appears to interact much as a psychotherapist would, but
you can test it by pulling its leg, and it won't know you're pulling
its leg; a therapist would notice and shift to another state. You can
also make a therapist speak to you non-professionally by a verbal
time-out signal, and then go back to professional mode. But Eliza has
only one functional state, and hence neither need nor capacity for
consciousness.

Thus, the evolutionary advantage of consciousness in primates (the
actuality as well as the appearance) is that it facilitates such
social interactions as communication and cooperation. The advantage of
building consciousness into computer programs (now I refer to the
appearance, since I can't empathize with a computer program) is the
same: to facilitate communication and cooperation.

I propose that we ignore the philosophy and get on with the
engineering.
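A minimal sketch of the "make" idea in Python, purely illustrative:
every name here is invented (real make expresses this distinction only
through its -n dry-run flag, and keeps no reportable record of which
mode it is in). The one bit of "self-knowledge" is a flag the program
can consult and report.

    # Toy sketch, not real make: a build step that knows whether it is
    # acting or merely contemplating action (compare "make -n").
    import subprocess

    class BuildStep:
        def __init__(self, target, command):
            self.target = target
            self.command = command
            self.contemplating = True   # start in "dry run" mode

        def report(self):
            """Say what I am doing (or would be doing) right now."""
            mode = "contemplating" if self.contemplating else "acting"
            return f"{mode}: '{self.command}' to make {self.target}"

        def run(self):
            print(self.report())
            if not self.contemplating:
                subprocess.run(self.command, shell=True, check=True)

    step = BuildStep("greeting", "echo hello, world")
    step.run()                   # contemplating: describes, does nothing
    step.contemplating = False
    step.run()                   # acting: describes, then really runs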
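In the same spirit, here is a sketch of a self-describing state
machine along the lines described for the Checkout Compiler. The
states, commands, and transitions are all hypothetical, not IBM's; the
point is only the shape: in every state it can say what state it is
in, what it is doing, and what it will accept there, but, like the
compiler as described above, it knows nothing about its other states.
Eliza, by contrast, would be this machine with a single entry in its
state table.

    # Toy self-describing state machine; states and commands invented.
    STATES = {
        "editing": {
            "doing": "accepting source text",
            "commands": {"compile": "compiling", "list": "editing"},
        },
        "compiling": {
            "doing": "translating the program",
            "commands": {"abort": "editing", "debug": "debugging"},
        },
        "debugging": {
            "doing": "executing under supervision",
            "commands": {"step": "debugging", "quit": "editing"},
        },
    }

    class Session:
        def __init__(self):
            self.state = "editing"

        def handle(self, line):
            here = STATES[self.state]
            # Introspective questions are answerable in every state...
            if line == "what state are you in?":
                return f"I am in the {self.state} state."
            if line == "what are you doing?":
                return f"I am {here['doing']}."
            if line == "what can you do?":
                # ...but only about the *current* state.
                return "Here I accept: " + ", ".join(sorted(here["commands"]))
            # Ordinary commands: only the current state's set works.
            if line in here["commands"]:
                self.state = here["commands"][line]
                return f"OK. Now in the {self.state} state."
            return f"I can't do '{line}' while {here['doing']}."

    s = Session()
    print(s.handle("what can you do?"))     # Here I accept: compile, list
    print(s.handle("step"))                 # I can't do 'step' while ...
    print(s.handle("compile"))              # OK. Now in the compiling state.
    print(s.handle("what are you doing?"))  # I am translating the program.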
We already know how to build systems that interact as though they were
conscious. Even if a criterion could be devised to tell whether X is
"actually" conscious, not just "seemingly" conscious, we don't need it
to build functionally conscious systems.

Marty
M. B. Brilliant   (201)-949-1858
AT&T-BL HO 3D-520   houem!marty1
harnad@mind.UUCP (02/25/87)
M. B. Brilliant (marty1@houem.UUCP) of AT&T-BL HO 3D-520 asks:

> In 25 words or less, what's methodological epiphenomenalism?

Your own reply (less a few words) defines it well enough:

> I propose that we ignore [the philosophy] and get on with the
> engineering. [We already know how] to build systems that interact as
> though they were conscious. Even if a criterion could be devised to
> tell whether X is "actually" conscious, not just "seemingly" conscious,
> we don't need it to build [functionally] conscious systems.

Except that we DON'T already know how. This ought to read: "We should
get down to trying" to build systems that can pass the Total Turing
Test (TTT) -- i.e., that are completely performance-indistinguishable
from conscious creatures like ourselves. Also, there is (and can be) no
other functional criterion than the TTT, so "seemingly" conscious is as
close as we will ever get. Hence there's nothing gained (and a lot
masked and even lost) by focusing on interpreting trivial performance
as conscious instead of on strengthening that performance. What we
should ignore is conscious interpretation: That's a good philosophy.
And I've dubbed it "methodological epiphenomenalism."

> Thus, the evolutionary advantage of consciousness in primates (the
> actuality as well as the appearance) is that it facilitates such social
> interactions as communication and cooperation. The advantage of
> building consciousness into computer programs (now I refer to the
> appearance, since I can't empathize with a computer program) is the
> same: to facilitate communication and cooperation.

This simply does not follow from the foregoing (in fact, it's at odds
with it). Not even a hint is given about the FUNCTIONAL advantage (or
even the functional role) of either actually being conscious or even of
appearing conscious. "Communication-and-cooperation" -- be it ever so
"seemingly conscious" as you wish -- does not answer the question of
what functional role consciousness plays; it simply presupposes it. Why
aren't communication and cooperation accomplished unconsciously? What
is the FUNCTIONAL advantage of conscious communication and cooperation?

How we feel about one another and about the devices we build is beside
the point (except for the informal TTT). It concerns the
phenomenological and ontological fact of consciousness, not its
functional role, which (if there were any) would be all that was
relevant to mind engineering. That's methodological epiphenomenalism.
--
Stevan Harnad (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet