[comp.ai.philosophy] You can't get semantics by playing with syntax.

rhb@cbnewse.att.com (richard.h.bradley) (11/20/90)

The argument can be suggested by considering an analogy used elsewhere.
As digestion requires chemical interaction with a substrate, so thought
requires semantical interaction with some object.  Formal simulation -
incremental description of the process - will not digest an apple or
create an idea.

This is of course not a logical argument.  Nevertheless, it is a statement
that may be true or false, and perhaps deserves attention.

Through I/O devices, models of formal systems are able to interact with
external physical objects.  Thus internal syntactical operations
are able to affect and be affected by external things.  This critical I/O
interface introduces all the semantics that should be relevant.  (Of course
a typewriter interface will not produce the same semantics as a rod/cone/
muscle/sinew interface.)
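
To make the picture concrete, here is a toy sketch in Python (every name
in it is invented for illustration; it is not a proposal about how minds
are built).  Internal operations are pure symbol rewriting; the world is
reached only through an explicit sense/act boundary.

# Toy sketch only; all names invented for illustration.
class FormalSystem:
    """Internal operations are purely syntactic rewrites on symbols."""

    def __init__(self, rules):
        self.rules = rules    # symbol -> symbol rewrite table
        self.state = []       # internal "tape" of symbols

    def sense(self, percept):
        # I/O boundary: external events enter only as symbols.
        self.state.append(percept)

    def step(self):
        # Pure syntax: rewrite symbols with no reference to meaning.
        self.state = [self.rules.get(s, s) for s in self.state]

    def act(self):
        # I/O boundary: symbols leave as actions on the world.
        return self.state.pop() if self.state else None

system = FormalSystem(rules={"apple-seen": "reach-for-apple"})
system.sense("apple-seen")
system.step()
print(system.act())           # -> reach-for-apple

On the view above, it is the sense/act coupling, and nothing inside
step(), that lets meaning into the rewrites.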

The correctness or error of the subject statement may turn out to be
unimportant.  If the output of the entire system is best explained as an
interaction of thoughts, ideas, and circumstances, then it seems practical
to ascribe thoughts and ideas to the system.  Although to affect the world
an idea must have substance, the particular substance would seem not to be
significant.  (Perhaps a dualist will disagree.)

Dick Bradley
att!cbnewse!rhb

cpshelley@violet.uwaterloo.ca (cameron shelley) (11/20/90)

In article <1990Nov19.215824.7547@cbnewse.att.com> rhb@cbnewse.att.com (richard.h.bradley) writes:
>
>The argument can be suggested by considering an analogy used elsewhere.
>As digestion requires chemical interaction with a substrate, so thought
>requires semantical interaction with some object.  Formal simulation -
>incremental description of the process - will not digest an apple or
>create an idea.
>
[...]

>Through I/O devices, models of formal systems are able to interact with
>external physical objects.  Thus internal syntactical operations
>are able to affect and be affected by external things.  This critical I/O
>interface introduces all the semantics that should be relevant.
>
  This leaves unaddressed the fact that the objects of consideration
may not exist and therefore have never truly presented themselves to 
the I/O interface (I am speaking of intelligent creatures here of course).
The ability to anticipate, and therefore to deal *meaningfully* with
things that may never be or occur, is, I think, a vital component of
any model of thought.  Also, by saying that I/O introduces all the semantics
that "should be relevant" (to what?), you seem to be arguing against
any form of innate knowledge.  Are you therefore suggesting a 
behaviourist model of learning?

>The correctness or error of the subject statement may turn out to be
>unimportant.  If the output of the entire system is best explained as an
>interaction of thoughts, ideas, and circumstances, then it seems practical
>to ascribe thoughts and ideas to the system.  Although to affect the world
>an idea must have substance, the particular substance would seem not to be
>significant.  (Perhaps a dualist will disagree.)
>
  If by "substance", you mean the medium in which thoughts are processed,
then I agree.  I am unsure whether you can describe the output of the entire
system in terms of high-level entities like "thoughts" or "ideas".
The gap between volition (to use a single example) and execution is 
quite large for non-trivial systems like people.  That is to say, the
thought "I am hungry, so I'll go to the store and get something to eat"
and the realization of the necessary sequence of actions are very
different, yet it is the actions which are observed at the "output".
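
  As a toy illustration (the plan table and all names are my own
invention), here is how a one-line volition expands into the action
sequence an observer actually sees:

# Illustration only; the plan table is invented.
PLANS = {
    "get food": ["stand up", "walk to store", "buy bread",
                 "walk home", "eat"],
}

def execute(volition):
    """Expand a one-line 'thought' into observable actions."""
    for action in PLANS.get(volition, []):
        print(action)         # only these reach the "output"

execute("get food")

An observer records five motor actions; the single thought behind them
never appears at the interface.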

  What is important to note, then, is that both input and output are
structured, but "thoughts" are still at liberty to ignore the structure
(the imagining implied above).  But since people are apparently capable
of acting on "input" never received, i.e. translating a pure "idea"
into a structured series of actions, I think we can still conclude
that the semantics of "thoughts" or "ideas" are related to observable
structures - but not necessarily in some obvious or even direct 
fashion.

--
      Cameron Shelley        | "Logic, n.  The art of thinking and reasoning
cpshelley@violet.waterloo.edu|  in strict accordance with the limitations and
    Davis Centre Rm 2136     |  incapacities of the human misunderstanding..."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

rhb@cbnewse.att.com (richard.h.bradley) (11/21/90)

In article <1990Nov20.151400.2252@watdragon.waterloo.edu>
		cpshelley@violet.uwaterloo.ca (cameron shelley) writes:
>In article <1990Nov19.215824.7547@cbnewse.att.com> rhb@cbnewse.att.com (richard.h.bradley) writes:

>[...]

>>Through I/O devices, models of formal systems are able to interact with
>>external physical objects.  Thus internal syntactical operations
>>are able to affect and be affected by external things.  This critical I/O
>>interface introduces all the semantics that should be relevant.
>>
>  This leaves unaddressed the fact that the objects of consideration
>may not exist and therefore have never truly presented themselves to 
>the I/O interface (I am speaking of intelligent creatures here of course).
>The ability to anticipate, and therefore to deal *meaningfully* with
>things that may never be or occur, is, I think, a vital component of
>any model of thought.

I may have led you to believe that I think semantics exist only at the
external interface.  That was not my intent.  The problem was to get some
meaning into the formal manipulations.  Once the semantics are in the door,
all your imaginings can be meaningful.  (Your later statement seems to
agree with this.)
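
For instance (a sketch with invented names, not a claim about mechanism):
symbols that once entered through the interface can be recombined into a
representation of something that never presented itself as input.

# Sketch only; names invented.
grounded = {"horse", "horn"}         # each entered via the interface

def imagine(parts):
    """Compose grounded symbols into a novel, never-sensed concept."""
    assert parts <= grounded         # every part was once an input
    return "+".join(sorted(parts))

print(imagine({"horse", "horn"}))    # -> horn+horse: no such percept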

>   Also, by saying that I/O introduces all the semantics
>that "should be relevant" (to what?), you seem to be arguing against
>any form of innate knowledge.  Are you therefore suggesting a 
>behaviourist model of learning?

It may be worthwhile to distinguish different forms of innate knowledge.
Certainly the structure of the formal system could be considered innate.
You may be able to exhibit a form of innate knowledge that is not inherent
in that structure.  Then we might argue about the boundaries of the system.
When considering humans, it may be hard to draw a clean line between
the inputs and the system.  One might put hunger pangs on either side.
Depending on the purpose of the discussion, even memory might be considered
as I/O.  I don't think I'm implying anything about learning mechanisms.
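
To illustrate the boundary point (assumptions and names are mine alone):
the same hunger signal can be drawn as an external input or as internal
state, with no change in behaviour, only in where the system is taken to
end.

# Sketch only; names invented.  Same behaviour, two boundaries.
def agent_hunger_as_input(inputs):
    # Narrow boundary: hunger arrives through the I/O interface.
    return "seek food" if "hunger pang" in inputs else "rest"

class AgentHungerInside:
    # Wide boundary: hunger is state within the system itself.
    def __init__(self):
        self.energy = 0

    def step(self):
        return "seek food" if self.energy < 1 else "rest"

print(agent_hunger_as_input(["hunger pang"]))  # -> seek food
print(AgentHungerInside().step())              # -> seek food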

>[ ... ]


--
Dick Bradley
att!ihlpl!rhb