harnad@mind.UUCP (Stevan Harnad) (10/26/86)
In mod.ai, Message-ID: <861016-071607-4573@Xerox>,
"charles_kalish.EdServices"@XEROX.COM writes:

> About Stevan Harnad's two kinds of Turing tests [linguistic
> vs. robotic]: I can't really see what difference the I/O methods
> of your system makes. It seems that the relevant issue is what
> kind of representation of the world it has.

I agree that what's at issue is what kind of representation of the
world the system has. But you are prejudging "representation" to mean
only symbolic representation, whereas the burden of the papers in
question is to show that symbolic representations are "ungrounded" and
must be grounded in nonsymbolic processes (nonmodularly -- i.e., NOT by
merely tacking on autonomous peripherals).

> While I agree that, to really understand, the system would need some
> non-purely conventional representation (not semantic if "semantic"
> means "not operable on in a formal way" as I believe [given the brain
> is a physical system] all mental processes are formal then "semantic"
> just means governed by a process we don't understand yet), giving and
> getting through certain kinds of I/O doesn't make much difference.

"Non-purely conventional representation"? That sounds mysterious. I've
tried to make a concrete proposal as to just what that hybrid
representation should be like. "All mental processes are formal"? That
sounds like prejudging the issue again. It may help to be explicit
about what one means by formal/symbolic: Symbolic processing is the
manipulation of (arbitrary) physical tokens in virtue of their shape,
on the basis of formal rules. This is also called syntactic processing.
The formal goings-on are also "semantically interpretable" -- they have
meanings; they are connected to the objects in the outside world that
they are about. The Searle problem is that so far the only devices that
make those semantic interpretations intrinsically are ourselves.
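[The distinction above can be made concrete with a small, purely
illustrative sketch (not from the original discussion; the rule set
and numeral notation are invented for the example): a program that
"adds" unary numerals by rewriting token strings solely on the basis
of their shape. The arithmetic meaning of the '|' strings is entirely
the observer's interpretation; nothing in the rules refers to numbers.]

```python
# A purely syntactic token-rewriting system (illustrative only).
# Tokens: '|' (a stroke) and '+'. The rules below mention only the
# SHAPES of substrings; "addition" is our interpretation, not theirs.

def step(s: str) -> str:
    """Apply one formal rule, chosen by shape alone."""
    # Rule 1: a stroke immediately left of '+' hops over it.
    if "|+" in s:
        return s.replace("|+", "+|", 1)
    # Rule 2: a '+' with nothing to its left is erased.
    if s.startswith("+"):
        return s[1:]
    return s  # no rule matches: the string is in normal form

def normalize(s: str) -> str:
    """Rewrite until no rule applies."""
    while (t := step(s)) != s:
        s = t
    return s

# "||+|||" normalizes to "|||||" -- which WE read as 2 + 3 = 5.
# The program itself only shuffled and deleted marks.
print(normalize("||+|||"))
```

The system's behavior is semantically interpretable (as addition), but
the semantics is parasitic on the interpreter; that is the sense in
which pure symbol manipulation is "ungrounded."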
My proposal is that grounding the representations nonmodularly in the
I/O connection may provide the requisite intrinsic semantics. This may
be the "process we don't understand yet." But it means giving up the
idea that "all mental processes are formal" (which in any case does not
follow, at least on the present definition of "formal," from the fact
that "the brain is a physical system").

> Two for instances: SHRDLU operated on a simulated blocks world. The
> modifications to make it operate on real blocks would have been
> peripheral and not have affected the understanding of the system.

This is a variant of the "Triviality of Transduction (& A/D, & D/A, and
Effectors)" Argument (TT), which I've responded to in another
iteration. In brief, it is toy problems like SHRDLU's that are trivial.
The complete translatability of internal symbolic descriptions into the
objects they stand for (and the consequent partitioning of the system
into a substantive symbolic module and trivial nonsymbolic peripherals)
may simply break down, as I predict, for life-size problems approaching
the power to pass the Total Turing Test. To put it another way: There
is a conjecture implicit in the solutions to current toy/microworld
problems, namely, that something along essentially the same lines will
suitably generalize to the grown-up/macroworld problem. What I am
saying amounts to a denial of that conjecture, with reasons. It is not
a reply to me simply to restate the conjecture.

> Also, all systems take analog input and give analog output. Most receive
> finger pressure on keys and return directed streams of ink or electrons.
> It may be that a robot would need more "immediate" (as opposed to
> conventional) representations, but it's neither necessary nor sufficient
> to be a robot to have those representations.

The problem isn't marrying symbolic systems to any old I/O. I claim
that minds are "dedicated" systems of a particular kind: the kind
capable of passing the Total Turing Test.
That's the only necessity and sufficiency in question. And again, the
mysterious word "immediate" doesn't help. I've tried to make a specific
proposal, and I've accepted the consequences, namely, that it's just
not going to be a "conventional" marriage at all, between a
(substantive) symbolic module and a (trivial) nonsymbolic module, but
rather a case of miscegenation (or a sex-change operation, or some
other suitably mixed metaphor). The resulting representational system
will be grounded "bottom-up" in nonsymbolic function (and will, I hope,
display the characteristic "hybrid vigor" that our current pure-bred
symbolic and nonsymbolic processes lack), as I've proposed
(nonmetaphorically) in the papers under discussion.

Stevan Harnad
princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771