mmt@seismo.CSS.GOV@dciem.UUCP (01/27/87)
Newsgroups: mod.ai
Subject: Re: Minsky on Mind(s)
References: <8701221730.AA04257@seismo.CSS.GOV>
Reply-To: mmt@dciem.UUCP (Martin Taylor)
Organization: D.C.I.E.M., Toronto, Canada

I tried to send this direct to Steve Harnad, but his signature is
incorrect: seismo thinks princeton is an "unknown host". Also mail to
him through allegra bounced.
===============
>just answer the following question: When the dog's tooth is injured,
>and it does the various things it does to remedy this -- inflammation
>reaction, release of white blood cells, avoidance of chewing on that
>side, seeking soft foods, giving signs of distress to his owner, etc. etc.
>-- why do the processes that give rise to all these sequelae ALSO need to
>give rise to any pain (or any conscious experience at all) rather
>than doing the very same tissue-healing and protective-behavioral job
>completely unconsciously? Why is the dog not a turing-indistinguishable
>automaton that behaves EXACTLY AS IF it felt pain, etc., but in reality
>does not? That's another variant of the mind/body problem, and it's what
>you're up against when you're trying to justify interpreting physical
>processes as conscious ones. Anything short of a convincing answer to
>this amounts to mere hand-waving on behalf of the conscious interpretation
>of your proposed processes.]

I'm not taking up your challenge, but I think you have overstated its
requirements. Ockham's razor demands only that the simplest explanation
be accepted, and I take this to mean inclusive of boundary conditions
AND preconceptions. The acceptability of a hypothesis must be relative
to the observer (say, a scientist), since we have no access to absolute
truth. Hence, the challenge should be to show that the concept of
consciousness in the {dog|other person|automaton} provides a simpler
description of the world than elimination of the concept of
consciousness does.
The whole-world description includes your preconceptions, and a
hypothesis that demands that you change those preconceptions is FROM
YOUR VIEWPOINT more complex than one that does not. Since you start
from the preconception that consciousness need not (or perhaps should
not) be invoked, you need stronger proof than would, say, an animist.
Your challenge should ask for a demonstration that the facts of
observable behaviour can be more succinctly described using
consciousness than not using it. Obviously, there can be no
demonstration of the necessity of consciousness, since ALL observable
behaviour could be the result of remotely controlled puppetry (except
your own, of course). But this hypothesis is markedly more complex than
a hypothesis derived from psychological principles, since every item of
behaviour must be separately described as part of the boundary
conditions.

I have a mathematization of this argument, if you are interested. It is
about 15 years old, but it still seems to hold up pretty well. Ockham's
razor isn't just a good idea; it is informationally the correct means of
selecting hypotheses. However, like any other razor, it must be used
correctly, and that means that one cannot ignore the boundary conditions
that must be stated when using the hypothesis to make specific
predictions or descriptions.

Personally, I think that hypotheses that allow other people (and perhaps
some animals) to have consciousness are simpler than hypotheses that
require me to describe myself as a special case. Hence, Ockham's razor
forces me to prefer the hypothesis that other beings have consciousness.
The same does not hold true for silicon-based behaving entities, because
I already have hypotheses that explain their behaviour without invoking
consciousness, and those hypotheses already include the statement that
silicon-based beings are different from me.
Any question of silicon-based consciousness must be argued on a
different basis, and I think such arguments are likely to turn on
personal preference rather than on the facts of behaviour.
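[Editorial illustration: the description-length reading of Ockham's razor
argued above can be made concrete with a small sketch. This is not
Taylor's 15-year-old mathematization, which is not given in the post;
it is a minimal assumed framing in which the cost of a hypothesis is the
cost of stating it plus the cost of stating the boundary conditions it
needs, and all the numbers are illustrative, not measured.]

```python
# Sketch of Ockham's razor as hypothesis selection by description length.
# Total cost = bits to state the hypothesis itself, plus bits to state
# the boundary conditions required to reproduce the observations.
# All bit counts below are hypothetical, chosen only for illustration.

def total_description_length(hypothesis_bits, boundary_bits_per_item, n_items):
    """Cost of stating a hypothesis plus its required boundary conditions."""
    return hypothesis_bits + boundary_bits_per_item * n_items

n_behaviours = 1000  # observed items of behaviour to be accounted for

# "Remote-controlled puppetry": the hypothesis itself is short, but every
# item of behaviour must be separately written into the boundary conditions.
puppetry = total_description_length(
    hypothesis_bits=50, boundary_bits_per_item=20, n_items=n_behaviours)

# Psychological principles (consciousness included): a longer hypothesis,
# but most behaviour follows from it, so little is left to the boundary.
psychological = total_description_length(
    hypothesis_bits=500, boundary_bits_per_item=1, n_items=n_behaviours)

print(puppetry)       # 20050
print(psychological)  # 1500

# The razor, used correctly (boundary conditions included), prefers the
# hypothesis with the smaller total description.
assert psychological < puppetry
```

The point of the sketch is only that a hypothesis cannot be judged by
its own length: the puppetry hypothesis looks simpler until the boundary
conditions it forces you to state are counted as well.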