[comp.ai] Searle and Radical Translation

kenp@ntpdvp1.UUCP (Ken Presting) (07/19/90)

In article <1595@oravax.UUCP>, daryl@oravax.UUCP (Steven Daryl McCullough) writes:
> In article <602@ntpdvp1.UUCP>, kenp@ntpdvp1.UUCP (Ken Presting) writes:
> 
> > It is this: We know for a fact that brains think.  We don't know at
> > all whether anything else will ever think.  Nobody in his right mind
> > would deny the first assertion.  That is the extent of the
> > "superiority of biological implementations."
> 
> . . .  I would hesitate to say that "we know for a fact that brains
> think" until we agree on what counts for knowledge in something like
> this. . . .

Sorry to have been obscure - I was hoping merely to put to rest an 
unproductive issue.

By "Brains Think" I of course meant "Brains Can Think".  Anyone who denies
this thereby implies either that he is not thinking (which is absurd), or 
that he is thinking with something other than his brain (his organs of 
succession, perhaps).

There are, of course, people with brains that can think, but instead of
using the capacity to think for themselves, merely quote or misquote the 
thoughts of others.
> 
> However, Searle denies the validity of using behavior as a test for
> intelligence. What else is there? Well, introspection: I know that I
> think because I experience myself thinking. This introspection doesn't
> get me any closer to knowing that *brains* (plural) think; only that
> *I* think.

This is either a misreading of Searle, or an (unwarranted) assumption about
the implications of his views.  He is concerned *only* with programs
and the consequences of implementing them.  In particular, he denies
that implementing a program attaches any semantics to the symbols it 
prints.

Searle does not say much about how to determine whether symbols have any
semantics attached, but he is unlikely to object strongly to the "Radical
Interpretation" of Davidson (and Quine), which is well-known and very 
influential among philosophers of language.

Radical Interpretation *is* based on observations of behavior.  (Cf. WVO
Quine, _Word and Object_, and D. Davidson, _Inquiries into Truth and 
Interpretation_)

Radical Translation involves adopting the hypothesis that the symbols
generated by a system are *true sentences* (among other assumptions), and
iteratively modifying a tentative semantics for the system, hoping to 
preserve the hypothesis.  The interpretation "succeeds" if some symbol
in the system's vocabulary gets assigned the semantics of the concept 
"truth".  (This is a very quick sketch of a very complex and interesting
theory, which was never intended to be applied to AI.)
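
To make the shape of the procedure concrete, here is a toy sketch in
Python - entirely my own gloss, not Davidson's or Quine's formulation,
and the brute-force search over candidate meaning-assignments is an
illustrative assumption:

    # Toy model of the iterative shape of Radical Interpretation: hold
    # the observed sentences true, and revise a tentative assignment of
    # meanings until that hypothesis can be preserved.

    def radically_interpret(utterances, candidate_semantics, true_under):
        # utterances: sentences the system generated
        # candidate_semantics: tentative meaning-assignments to try
        # true_under(s, m): does assignment m make sentence s true?
        for meaning in candidate_semantics:
            if all(true_under(s, meaning) for s in utterances):
                return meaning     # hypothesis preserved: tentative success
        return None                # charity could not be sustained

On the criterion above, this toy is still too weak: success would further
require that some symbol of the system's own vocabulary come out meaning
"truth".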

Note that no mere implementation of a Turing Machine could pass this test,
because truth (like "halting") cannot be represented by a computable
function.  IMO, real computers are more than mere TM implementations, so
Tarski's theorem does not prohibit real machines from representing truth.
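
For the shape of the argument behind that claim: a computable truth
predicate for arithmetic would yield a halting decider, since "machine M
halts on input w" is expressible in arithmetic, and halting falls to the
usual diagonal argument.  A minimal Python rendering of that diagonal
(purely illustrative; claimed_halts stands for any purported decider the
reader cares to supply):

    # No total computable predicate decides halting.  claimed_halts(f)
    # is a purported decider: True iff calling f() would terminate.

    def refute(claimed_halts):
        def contrary():
            if claimed_halts(contrary):   # decider says contrary halts,
                while True:               # so loop forever;
                    pass
            return None                   # decider says it loops, so halt.
        return contrary   # either answer about contrary() is wrong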


Ken Presting   

daryl@oravax.UUCP (Steven Daryl McCullough) (07/19/90)

In article <606@ntpdvp1.UUCP>, kenp@ntpdvp1.UUCP (Ken Presting) writes:
> In article <1595@oravax.UUCP>, daryl@oravax.UUCP (Steven Daryl McCullough) writes:
> > In article <602@ntpdvp1.UUCP>, kenp@ntpdvp1.UUCP (Ken Presting) writes:
> > 
> > > It is this: We know for a fact that brains think.  We don't know at
> > > all whether anything else will ever think.  Nobody in his right mind
> > > would deny the first assertion.  That is the extent of the
> > > "superiority of biological implementations."
> > 
> > . . .  I would hesitate to say that "we know for a fact that brains
> > think" until we agree on what counts for knowledge in something like
> > this. . . .
> 
> Sorry to have been obscure - I was hoping merely to put to rest an 
> unproductive issue.
> 
> By "Brains Think" I of course meant "Brains Can Think".  Anyone who denies
> this thereby implies either that he is not thinking (which is absurd), or 
> that he is thinking with something other than his brain (his organs of 
> succession, perhaps).

Ken, I notice that your responses are getting more irritable. I
apologize if it is because you think I am just being difficult.  I'm
not trying to be difficult. Honest.

I wasn't questioning what you mean by "brains think"; I was
questioning your claim that "We know for a fact that..." That is what
the Chinese Room is all about, ultimately: what counts as evidence
that a system thinks or understands. My claim was that Searle uses
a double standard; his criterion for evidence that a computer program
thinks is much stronger than his criterion for evidence that other
human beings think.

> > However, Searle denies the validity of using behavior as a test for
> > intelligence. What else is there? Well, introspection: I know that I
> > think because I experience myself thinking. This introspection doesn't
> > get me any closer to knowing that *brains* (plural) think; only that
> > *I* think.
> 
> This is either a misreading of Searle, or an (unwarranted) assumption about
> the implications of his views.

You misunderstood the above paragraph. The only idea that I am
attributing to Searle is that he "denies the validity of using
behavior as a test for intelligence". The rest of the paragraph is my
own thoughts.

> He is concerned *only* with programs and the consequences of
> implementing them.  In particular, he denies that implementing a
> program attaches any semantics to the symbols it prints.

Yes, that is exactly what I mean by Searle's double standard. He
insists that behavior (implementing a particular program) is not
sufficient for establishing understanding on the part of computers,
but it seems to me that behavior is exactly what we have to go on in
deciding that other people have understanding. The idea of the Turing
test is that to be fair, we should apply the same criterion for
understanding to machines that we do to all people other than ourselves:
we judge based on behavior.

For ourselves, we have additional information, namely introspection.

Daryl McCullough

daryl@oravax.UUCP (Steven Daryl McCullough) (07/19/90)

In article <606@ntpdvp1.UUCP>, kenp@ntpdvp1.UUCP (Ken Presting) writes:

> Searle does not say much about how to determine whether symbols
> have any semantics attached, but he is unlikely to object strongly
> to the "Radical Interpretation" of Davidson (and Quine), which is
> well-known and very influential among philosophers of language.

> Radical Interpretation *is* based on observations of behavior.  (Cf.
> WVO Quine, _Word and Object_, and D. Davidson, _Inquiries into Truth
> and Interpretation_) Radical Translation involves adopting the
> hypothesis that the symbols generated by a system are *true
> sentences* (among other assumptions), and iteratively modifying a
> tentative semantics for the system, hoping to preserve the
> hypothesis. The interpretation "succeeds" if some symbol in the
> system's vocabulary gets assigned the semantics of the concept
> "truth".

This is a very interesting idea until the last sentence. Why not say
that the interpretation succeeds if there is any interpretation which
makes all the sentences true? Why must there be a word for the notion
of truth?

> (This is a very quick sketch of a very complex and interesting theory,
> which was never intended to be applied to AI.)
> Note that no mere implementation of a Turing Machine could pass
> this test, because truth (like "halting") cannot be represented by a
> computable function.

Now I see why you added the last sentence! You had your theorem "no
mere Turing Machine may pass the test" in mind before you set out the
rules.

> IMO, real computers are more than mere TM implementations, so
> Tarski's theorem does not prohibit real machines from representing
> truth.

Ken, IMHO, it is completely bogus to use Tarski's theorem to "prove"
that there is some notion of truth that physical objects can know but
Turing machines can't. Tarski's undefinability of truth has just as
much force when applied to humans:

     Theorem: No human being can have a definition of truth and still
     be consistent.

     Proof: Consider the sentence "This statement is not true". There
     is no consistent truth assignment to the sentence if we are to
     assign the word "true" its usual meaning.

My guess is that humans are just inconsistent.

Notice, however, that if we weaken our claim from having a definition of
absolute truth to having a subjective definition of truth, then the
paradox is resolved:

     Consider the sentence: "This sentence will never be considered
     true by Ken Presting"

The above sentence could very well be true, but if so, then there is
at least one sentence that is true, but Ken Presting doesn't consider
to be true. Therefore Ken Presting doesn't have a definition of real
truth. On the other hand, the above sentence could be false. Then Ken
Presting will consider it to be true, and so again Ken Presting's
notion of truth is not real truth.

Daryl McCullough

dave@cogsci.indiana.edu (David Chalmers) (07/19/90)

In article <606@ntpdvp1.UUCP> kenp@ntpdvp1.UUCP (Ken Presting) writes:

>Searle does not say much about how to determine whether symbols have any
>semantics attached, but he is unlikely to object strongly to the "Radical
>Interpretation" of Davidson (and Quine), which is well-known and very 
>influential among philosophers of language.
>
>Radical Interpretation *is* based on observations of behavior.  (Cf. WVO
>Quine, _Word and Object_, and D. Davidson, _Inquiries into Truth and 
>Interpretation_)

Searle is, and has been for more than 20 years, one of the most prominent
opponents of Radical Translation/Interpretation.  I can't imagine anybody
whose views are more diametrically opposed to this theory than Searle.
Please get your facts straight.

All this, of course, is grounded in Searle's complete rejection of
behaviourism of any sort (including behavioural criteria for the
ascription of semantic contents).  Searle believes in "intrinsic
intentionality", i.e. that semantic contents are intrinsic to our
brains and fully determined.  (See his book _Intentionality_ for
many arguments against underdetermination.)

Despite Searle's protestations, however, intentionality (i.e. semantics)
is irrelevant to the Chinese Room argument.  The argument is an argument
about *consciousness* (phenomenology, qualia, subjective experience, pick
your favourite term), and even if the argument were sound, the only
conclusion that it would establish is that implementing program P is
not sufficient for consciousness.  On the face of it, semantics is
a quite separate issue.

The Chinese Room argument only becomes an argument about semantics when
combined with the premise "Consciousness is a necessary prerequisite
for true semantics."  Searle believes this, most other philosophers do
not; on the face of it the two phenomena are independent.  In the 1980
paper, Searle doesn't even bother to explicitly state this premise, but
assumes it.  In the absence of convincing argument for the premise, all
arguments about the Chinese Room should be stated in terms of consciousness,
not semantics.  The "syntax/semantics" argument just confuses the issue.

--
Dave Chalmers     (dave@cogsci.indiana.edu)      
Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable"

kenp@ntpdvp1.UUCP (Ken Presting) (07/20/90)

> > > (Daryl McCullough) writes:
> > > However, Searle denies the validity of using behavior as a test for
> > > intelligence. What else is there? Well, introspection: I know that I
> > > think because I experience myself thinking. This introspection doesn't
> > > get me any closer to knowing that *brains* (plural) think; only that
> > > *I* think.
> >
> > (Ken Presting) writes:
> > This is either a misreading of Searle, or an (unwarranted) assumption about
> > the implications of his views.
> 
> You misunderstood the above paragraph. The only idea that I am
> attributing to Searle is that he "denies the validity of using
> behavior as a test for intelligence". 

He does NOT deny this, and neither do I!  (I happen to think the Turing Test
is too simplistic, but the Radical Translation procedure I advocate is every
bit as behavioral).  Searle assumes "Minds have semantics" and "Programs
can't generate semantics".  It follows immediately that "Programs can't
generate minds."   This has nothing to do with the question of how you decide 
whether a specific system has a mind, or any semantics.
> 
> > He is concerned *only* with programs and the consequences of
> > implementing them.  In particular, he denies that implementing a
> > program attaches any semantics to the symbols it prints.
> 
> Yes, that is exactly what I mean by Searle's double standard. He
> insists that behavior (implementing a particular program) is not
> sufficient for establishing understanding on the part of computers,
> but it seems to me that behavior is exactly what we have to go on in
> deciding that other people have understanding. The idea of the Turing
> test is that to be fair, we should apply the same criterion for
> understanding to machines that we do to all people other than ourselves:
> we judge based on behavior.

I think you are conflating two distinct issues: the truth-conditions for
a statement, and the types of evidence that can support the statement.
The Turing Test claims that a certain sort of evidence should be adequate
to establish the truth of the claim that a certain computer is thinking.
Searle does not care (much) about the kind of evidence that we should look
for in deciding the intelligence of computers.   He is just saying that
whatever kind of evidence we look for, it had better support the claim
that the computer has semantics.  

Quine and Davidson *have* addressed the issue of what sort of evidence
can establish the semantics of a symbol-manipulation system.  They also
believe (and I agree) that the Radical Translation/Interpretation process
is how we each justify our confidence that our fellow humans mean what
they seem to be saying.

There is no double standard.


Ken Presting

daryl@oravax.UUCP (Steven Daryl McCullough) (07/21/90)

In article <607@ntpdvp1.UUCP>, kenp@ntpdvp1.UUCP (Ken Presting) writes:
> > (Daryl McCullough) writes:
> >
> >The only idea that I am attributing to Searle is that he "denies the
> >validity of using behavior as a test for intelligence".
> 
> He does NOT deny this, and neither do I!

Ken, you have been defending Searle for too long; you are starting to
confuse your own beliefs with his. The whole *point* of the Chinese
room argument was that behavioral tests for intelligence could *never*
be enough. He assumed at the outset that the Chinese room was able to
pass the Turing Test; that behaviorally it was indistinguishable from
a human being.

Don't claim to speak for Searle unless you are willing to give
evidence: where, in anything that Searle has said, did he indicate
that he accepted using behavior as a test for intelligence? Where did
he give any indication that he believed in your Radical Translation
stuff? To me, these two positions seem diametrically opposed to what
Searle was saying.

> (I happen to think the Turing Test is too simplistic, but the
> Radical Translation procedure I advocate is every bit as behavioral).

WHOEVER SAID THAT SEARLE BELIEVED IN THE RADICAL TRANSLATION
PROCEDURE???

The way you described it, radical translation is a special case of the
Turing Test. Any behavioral test you want to perform is perfectly
permissible in the Turing Test. If by performing the radical
translation one can determine that the entity behind the curtain is a
thinking being, then said being passes the Turing test.

> Searle assumes "Minds have semantics" and "Programs
> can't generate semantics". It follows immediately that "Programs can't
> generate minds." This has nothing to do with the question of how you decide 
> whether a specific system has a mind, or any semantics.

Ken, I think you are distorting Searle's already dubious argument. I
thought the Chinese room argument was supposed to *prove* that
"programs can't generate semantics". If he assumed it at the start,
then what was his argument supposed to show?

Anyway, the Chinese room argument certainly *was* about "how you
decide whether a specific system has a mind". The system in question
was the Chinese room itself. Does it have a mind separate from
Searle's or not? Searle claimed not, but my question is, on what
basis?

> Quine and Davidson *have* addressed the issue of what sort of evidence
> can establish the semantics of a symbol-manipulation system.  They also
> believe (and I agree) that the Radical Translation/Interpretation process
> is how we each justify our confidence that our fellow humans mean what
> they seem to be saying.
> 
> There is no double standard.

Ken, this is absolutely incredible! The paragraph above, which you
claim supports (or at least, does not contradict) Searle's position,
in fact *refutes* Searle's position. Searle assumed at the outset that
the Chinese room passed every behavior criterion for understanding
Chinese. And yet, Searle claimed that the room did not understand
Chinese. Searle is therefore explicitly *not* using any kind of
behavioral test for intelligence!

Ken, your arguments are certainly confusing, if not contradictory.

kenp@ntpdvp1.UUCP (Ken Presting) (07/23/90)

In article <1606@oravax.UUCP>, daryl@oravax.UUCP (Steven Daryl McCullough) writes:
> In article <607@ntpdvp1.UUCP>, kenp@ntpdvp1.UUCP (Ken Presting) writes:
> > > (Daryl McCullough) writes:
> > >
> > >The only idea that I am attributing to Searle is that he "denies the
> > >validity of using behavior as a test for intelligence".
> > 
> > He does NOT deny this, and neither do I!
> 
> Ken, you have been defending Searle for too long; you are starting to
> confuse your own beliefs with his. The whole *point* of the Chinese
> room argument was that behavioral tests for intelligence could *never*
> be enough. He assumed at the outset that the Chinese room was able to
> pass the Turing Test; that behaviorally it was indistinguishable from
> a human being.

I'M NOT DEFENDING SEARLE!  Not only do I believe that it is possible for
implementing the right program to constitute intelligence, I am confident
we will figure out how to write it.  I believe that the Systems Reply is
mistaken, but that does not imply that I agree with any (other) point Searle
is trying to make.

More to the point, passing the Turing Test does not entail that the system
which passes is "behaviorally indistinguishable from a human being".  I
agree that the linguistic behavior which is observed in the Turing test is
crucial to intelligence.  But there are certainly many other observable
parts of behavior (such as perception and motion), which the Turing test
ignores.  Furthermore, the Turing test does not specify in any great detail
how the questioning is to be conducted, which IMO is much more important
than, eg, the duration of the test.

Now, from Searle's argument it does follow that the Turing test is not
adequate to verify the production of intelligence.  That, however, is not
the point of the CR.  The "point" of an argument is surely its conclusion,
and the conclusion of the CR argument is that implementing a program cannot
constitute intelligence.

> 
> Don't claim to speak for Searle unless you are willing to give
> evidence: where, in anything that Searle has said, did he indicate
> that he accepted using behavior as a test for intelligence? Where did
> he give any indication that he believed in your Radical Translation
> stuff? To me, these two positions seem diametrically opposed to what
> Searle was saying.

Searle mentions behavior-based ascription of intelligence in Minds, Brains,
and Programs, under "The Combination Reply":

      If we could build a robot whose behavior was indistinguishable over
   a large range from human behavior, we would [rationally] attribute
   intentionality to it, pending some reason not to. . . .
      But I really don't see that this is any help to the claims of Strong
   AI, and here's why:  According to Strong AI, instantiating a formal
   program with the right input and output is a sufficient condition ...
   of intentionality.

Searle is *not* diametrically opposed to behavioral evidence for
intelligence.  He doesn't care about it (much).

> 
> > (I happen to think the Turing Test is too simplistic, but the
> > Radical Translation procedure I advocate is every bit as behavioral).
> 
> WHOEVER SAID THAT SEARLE BELIEVED IN THE RADICAL TRANSLATION
> PROCEDURE???
> 
> The way you described it, radical translation is a special case of the
> Turing Test. Any behavioral test you want to perform is perfectly
> permissible in the Turing Test. If by performing the radical
> translation one can determine that the entity behind the curtain is a
> thinking being, then said being passes the Turing test.

As Dave Chalmers pointed out, Searle is opposed (at least) to the use of
radical translation as an exhaustive technique for determining the meaning
of words, because it leads to indeterminacy.  Intrinsic intentionality would
avoid such indeterminacy, and Searle explicitly advocates intrinsic intentions.

This is splitting hairs, but there is a difference between determining *what
a word means* and determining *whether it is meaningful*.  Those who disavow
indeterminacy in the first case need not be opposed to it in the second case
(since no claim regarding specific meaning need be made).  The CR depends only
on meaningfulness, as I read it.

I tend to agree that radical translation *could* be part of the Turing Test,
but not as part of the usual scenario of blind Q & A.  Radical translation
depends on (eg) knowing about the environment of the questionee, and
comparing its description of its surroundings to one's own observations. It
may be possible to treat the teletype interface itself as a "shared
environment", and ask questions like "How many seconds did it take to type
this question?".  (Note that no "mere TM implementation" could answer such
a question, because TM's have no clock.  Note especially that reading a
clock is not just "symbol manipulation".)

My main objection to Turing's proposal is that it needs to be sharpened up
quite a bit.  Radical translation is a specific method, which could be applied
in some interpretations of what Turing said.  I claim that no other behavior
based method would give much support at all to a claim of intelligence.  As
I understand the Turing test, it does not *require* radical translation, and
therefore is too lenient.

> 
> > Searle assumes "Minds have semantics" and "Programs
> > can't generate semantics". It follows immediately that "Programs can't
> > generate minds." This has nothing to do with the question of how you decide 
> > whether a specific system has a mind, or any semantics.
> 
> Ken, I think you are distorting Searle's already dubious argument. I
> thought the Chinese room argument was supposed to *prove* that
> "programs can't generate semantics". If he assumed it at the start,
> then what was his argument supposed to show?

I just combined Searle's Axioms 1 and 3 from Sci. Am. p.27 (these are
verbatim quotes):

   Axiom 1:  Computer programs are formal (syntactic)
   Axiom 2:  Human minds have mental content (semantics)
   Axiom 3:  Syntax by itself is neither constitutive of nor
             sufficient for semantics.
   Conclusion 1:  Programs are neither constitutive of nor sufficient for
                  minds.

Anybody who claims that Searle is making a trivial logical error should
look again at this argument.  It is a very simple syllogism, which has
nothing to do with consciousness, collecting behavioral evidence, or
anything other than programs and semantics.

I think Axiom 1 is false, and I can see how some people would object to
Axiom 2 (it is rather vague).  If I follow you, Daryl, you would say
that even if the argument were OK, the conclusion is irrelevant, because
a system consists of interface documentation as well as programmed
hardware.  The problem is that I'm not sure you actually disagree with
Searle - both you and he insist that something besides the program be
added to the hardware.

> 
> Anyway, the Chinese room argument certainly *was* about "how you
> decide whether a specific system has a mind". The system in question
> was the Chinese room itself. Does it have a mind separate from
> Searle's or not? Searle claimed not, but my question is, on what
> basis?

On the basis that it contained no semantics.  If you accept Axiom 2,
then that's all he needs.

One of the things that intrigues me personally about the CR is that it
*avoids* discussing any particular evidence-gathering procedure for
deciding the presence of thought.  It is an _a priori_ objection to
an _a priori_ claim - "Running this program guarantees thought".  The
Turing test gets mentioned but is never the object of much attention
in Searle's articles.

What it's all about is "What does a program say about the system that
implements it?"  According to Searle, the program has nothing to say.
You seem to agree that programs have an "operational semantics", but
I think that topic also needs some additional detail.  Since semantics
is a relation between words and things, the program must specify just
such a relationship.  But that's another thread.


Ken Presting  ("And I don't mean Speech Acts")

kenp@ntpdvp1.UUCP (Ken Presting) (07/26/90)

> (Daryl McCullough) writes:
>
> > (Ken Presting) writes:
> > . . .  The interpretation "succeeds" if some symbol in the
> > system's vocabulary gets assigned the semantics of the concept
> > "truth".
>
> This is a very interesting idea until the last sentence. Why not say
> that the interpretation succeeds if there is any interpretation which
> makes all the sentences true? Why must there be a word for the notion
> of truth?

The main reason is that "truth" is something we are all capable of talking
about, and we define most logical concepts in terms of it.  Also, I doubt 
that you would get anywhere near a unique interpretation for non-semantic
terms, unless you reach a point where you can discuss semantics with your
interlocutor.

My guess is that (for Davidson) the ultimate motive is Kantian - if you
can establish that some organism can use logic and be swayed by arguments,
then you have a moral duty to reason with it, instead of merely exploiting
it.

Anything that understands the concept of truth can, on that basis, understand
validity, soundness, truth-conditions and truth-functions.

> > IMO, real computers are more than mere TM implementations, so
> > Tarski's theorem does not prohibit real machines from representing
> > truth.
>
> Ken, IMHO, it is completely bogus to use Tarski's theorem to "prove"
> that there is some notion of truth that physical objects can know but
> Turing machines can't. Tarski's undefinability of truth has just as
> much force when applied to humans:
>
>      Theorem: No human being can have a definition of truth and still
>      be consistent.
>
>      Proof: Consider the sentence "This statement is not true". There
>      is no consistent truth assignment to the sentence if we are to
>      assign the word "true" its usual meaning.
>
> My guess is that humans are just inconsistent.
>

Tarski's solution is to define separate "truth" predicates in a metalanguage.
(Cf. "The Semantic Conception of Truth").   There are many formal approaches
to resolving the Liar paradox.  Kripke would say that the Liar sentence is
"ungrounded" because it refers only to sentences, and that it has no truth-
value.

I tend to agree with your position, however.  But this position entails that
arguing in natural language is all but pointless.  (No news to Netnews).
To ameliorate the negative consequences of "semantic closure", I also think
that most of natural language is used metaphorically, whenever it appears
in arguments.


But I think it is perfectly obvious that real computers are more than TM's.
Not because they are physical objects, but because they have a permanent
memory.  Each time a TM is given a particular input, the tape is erased
and the internal state is reset.  So the TM computes a *function* - same
input => same output.  That is obviously false for real computers, which
have updateable databases.

Theorems about TM's, which compute functions, do not apply to real machines,
for purely formal reasons.  Real machines do not, in the technical sense,
compute functions.
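
To put the point in miniature (a toy sketch of my own; the names are
arbitrary and nothing rides on the details):

    # A pure function: same input => same output, every time.
    def successor(x):
        return x + 1

    # A machine with an updateable database: the current task changes
    # how future tasks are handled, so output is not a function of the
    # single input.
    class MachineWithDatabase:
        def __init__(self):
            self.db = []               # persists between inputs

        def respond(self, x):
            self.db.append(x)          # update the database
            return x + len(self.db)    # answer depends on the history

    m = MachineWithDatabase()
    print(m.respond(1), m.respond(1))  # prints "2 3" - same input twice,
                                       # two different outputs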

> Notice, however, that if we weaken our claim to having a definition of
> absolute truth to having a subjective definition of truth, then the
> paradox is resolved:
>
>      Consider the sentence: "This sentence will never be considered
>      true by Ken Presting"
>
> The above sentence could very well be true, but if so, then there is
> at least one sentence that is true, but Ken Presting doesn't consider
> to be true. Therefore Ken Presting doesn't have a definition of real
> truth. On the other hand, the above sentence could be false. Then Ken
> Presting will consider it to be true, and so again Ken Presting's
> notion of truth is not real truth.

Your conclusion follows only if we assume an *extensional* concept of
truth - for example, if the only way to establish that "my conception
of truth is real truth" is to establish that there is some act (possibly
internal) such that I perform that act when and only when presented with
a true sentence.  In that case my performance of that act (say,
"considering P true") would be extensionally equivalent to "real truth".

But that is not the only way to represent truth, or any other concept, for
that matter.  Even TM's are used more flexibly than this, to enumerate
recursively enumerable sets.

Davidson's idea is based on Tarski's:  represent the truth predicate of
one language with an axiomatic theory stated in a metalanguage.  For
a system to use this method of representation, however, it must have
intentional states with intensional content.

So the idea is that you collect enough evidence to convince yourself
that your system is correctly using the word "truth", and (here is the
kicker) the word "truth", as used by the system, cannot be interpreted
as having any other meaning.  If you can collect enough evidence to
reach this conclusion, then you *have* to grant that the system has
intentional states.

Whether it is possible to collect enough such evidence is another
question, but it's for sure that a TM, without permanent memory, could
NOT provide the right kind of evidence.  According to Davidson, you
should look for the machine making mistakes, and acknowledging that
they are mistakes (which of course involves trying to correct them, or
avoid them in the future).  A TM with a (partial) extensional
representation for "truth" would of course make many mistakes, and might
well acknowledge them, but could not avoid them in the future.


Ken Presting   ("A heuristic is an algorithm that doesn't work")

kohout@cme.nist.gov (Robert Kohout) (07/26/90)

In article <612@ntpdvp1.UUCP> kenp@ntpdvp1.UUCP (Ken Presting) writes:
>
>But I think it is perfectly obvious that real computers are more than TM's.
>Not because they are physical objects, but because they have a permanent
>memory.  Each time a TM is given a particular input, the tape is erased
>and the internal state is reset.  So the TM computes a *function* - same
>input => same output.  That is obviously false for real computers, which
>have updateable databases.
>
>Theorems about TM's, which compute functions, do not apply to real machines,
>for purely formal reasons.  Real machines do not, in the technical sense,
>compute functions.
>
Interesting. You mean to say that you find it perfectly obvious that
Church's Thesis is false. I wish that you could formalize this - it 
would make you far more famous than this Net haggling ever could. 
Ken, I'm sure I've "misread" you - it seems to me that you're claiming that
a general purpose computer is more powerful than a TM by virtue of its
permanent memory. Pardon my reading skills, but IF this is what you're
saying, I'd suggest that you re-examine your position. In particular, I
think this "internal reset" argument is a little flaky. Any data store
you can contrive for a "real" machine can be modeled on the tape of a
TM, can it not? Any program that you can run on a general purpose computer
can also be run on a TM, producing the same results, can it not? Exactly
what are you saying, Ken? 

In addition, I fail to see how real machines do not, in the technical sense,
compute functions. I always thought that, in the technical sense, real
machines were equivalent to TM's, and that's why we can talk about TM's
and feel comfortable that we're also talking about "real" 
machines. Granted, a TM cannot ring bells and flash lights, but that is
clearly NOT what I am reading here. 

Please enlighten me, Ken, and forgive me if I do not find this assertion
at all obvious.


- Bob Kohout

smoliar@vaxa.isi.edu (Stephen Smoliar) (07/28/90)

In article <5362@puggsly.cme.nist.gov> kohout@cme.nist.gov (Robert Kohout)
writes:
>
>In addition, I fail to see how real machines do not, in the technical sense,
>compute functions.

Well, my VAX is about as real a machine as you can get (for better or worse);
but when it is running UNIX, I do not think anyone would say it is computing
a function.  For one thing, it does not halt.  (At least it's not SUPPOSED to
halt, and halting is generally indicative of an error.)  The same is true of
any real-time control system.  While it is certainly true that such systems
have Turing-like functions as components, you cannot say that THE WHOLE THING
is such a function.

=========================================================================

USPS:	Stephen Smoliar
	USC Information Sciences Institute
	4676 Admiralty Way  Suite 1001
	Marina del Rey, California  90292-6695

Internet:  smoliar@vaxa.isi.edu

"It's only words . . . unless they're true."--David Mamet

kenp@ntpdvp1.UUCP (Ken Presting) (07/31/90)

In article <5362@puggsly.cme.nist.gov>, kohout@cme.nist.gov (Robert Kohout) writes:
> In article <612@ntpdvp1.UUCP> kenp@ntpdvp1.UUCP (Ken Presting) writes:
> >
> >. . . Each time a TM is given a particular input, the tape is erased
> >and the internal state is reset.  So the TM computes a *function* - same
> >input => same output.  That is obviously false for real computers, which
> >have updateable databases. . . .
> >
> >Theorems about TM's, which compute functions, do not apply to real machines,
> >for purely formal reasons.  Real machines do not, in the technical sense,
> >compute functions.
>
> Interesting. You mean to say that you find it perfectly obvious that
> Church's Thesis is false. I wish that you could formalize this - ...

Some preliminaries:

Take a Turing machine TM = (Q,S,D,q0,qf), where Q is a finite set of "states",
S is a finite set of "characters", D is a relation on ((Q cross S) cross (Q
cross S cross {Left, Right})), and q0, qf (the "initial" and "final" states)
are elements of Q.  TM is "deterministic" iff whenever (x,y) and (x,z) are
both elements of D, then y=z.  Only the deterministic case will be considered.
(Note that Q and S must be non-empty and disjoint, and one element of S is
called 'B', the "blank" symbol.)

Let S* be the set of all "words" (finite sequences) on the alphabet S. Let SQ*
be the set of all words on combined alphabet (S union Q).  Let SS be the set
of all words ss in SQ* such that ('_' is concatenation):

      ss = w1_qn_w2, where w1, w2, are in S*, and qn is in Q.

That is, SS is the "state space" of TM.  Elements of SS are usually called
"configurations" of TM, because for each step in any computation performed by
TM, there is a word in SS such that:

      1. The content of the tape before the step is w1_w2,
      2. The head is in state qn, and
      3. The head is positioned on the first character of w2.

Let |- be a relation on (SS cross SS) such that:

      w1_q_w2 |- w1'_q'_w2'
            iff
      there are a1,a2,a2' in S, v1,v2 in S*, and m in {Left,Right} such that:
      1. w1 = v1_a1 and w2 = a2_v2,
      2. <(q,a2),(q',a2',m)> is in D,
      3. if m is Right, then w1' = w1_a2' and w2' = v2,
         except that if v2 is null, then w2' = B,
      4. if m is Left, then w1' = v1 and w2' = a1_a2'_v2,
         except that if v1 is null, then w1' = B.

That is, TM "prints the symbol m and moves one square to the left or right",
adding blanks as needed, "converting" one state to the "next".

The relation |- defines a directed graph on the state space SS, by taking
(ss,ss') to be an "edge" iff ss |- ss'.  For a deterministic TM, there is
never more than one edge leaving any node (except as noted below), so a
"path" in SS is defined by any two "connected" points:

      ss |-* ss'  (read "ss leads to ss'") iff there exist ss0,...,ssn
                  such that ss |- ss0 |- ... |- ssn |- ss'.

TM "Halts" in a state ss iff there is no ss' such that ss |- ss'.

In the usual formalization of "computing a function by a TM" we consider
an alphabet S={B,1} and say:

      TM computes y=f(x) iff
         for all states of the form q0_w
            (where w is a string of x+1 "ones")
         there is a state qf_w'
            (where w' is a string of y+1 "ones")
         wherein TM halts.

For functions of n arguments, the starting state is q0_w1_B..._B_wn.  Vector-
valued functions are represented using a similar tactic in the final state.

Bob, this is what I understand a Turing machine to be, and I trust you will
agree that these definitions are formally equivalent to any in the familiar
texts.  If not, I hope you will correct me.
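
For concreteness, the relation |- transcribes directly into a program.
A sketch in Python; encoding a configuration w1_q_w2 as the triple
(w1, q, w2) and the toy transition table at the end are my own
illustrative choices:

    # The step relation |-.  D maps (state, scanned char) to
    # (new state, written char, move); the head scans w2[0].

    def step(config, D, blank='B'):
        w1, q, w2 = config
        a2, v2 = w2[0], w2[1:]
        if (q, a2) not in D:
            return None                    # no edge leaves this node: halt
        q2, written, move = D[(q, a2)]
        if move == 'Right':                # clause 3, padding with B
            return (w1 + written, q2, v2 if v2 else blank)
        else:                              # clause 4, padding with B
            v1, a1 = (w1[:-1], w1[-1]) if w1 else ('', blank)
            return (v1 if v1 else blank, q2, a1 + written + v2)

    def run(config, D):
        # Follow |- edges until the machine halts; return the final config.
        while True:
            nxt = step(config, D)
            if nxt is None:
                return config
            config = nxt

    # Toy machine on S = {B, 1}: run right over the ones and append one
    # more, so x+1 ones in gives x+2 ones out (y = x+1 in the convention
    # above).  This toy halts with the ones to the LEFT of the head; the
    # stepping, not the output convention, is the point.
    D = {('q0', '1'): ('q0', '1', 'Right'),
         ('q0', 'B'): ('qf', '1', 'Right')}
    print(run(('B', 'q0', '111'), D))      # ('B1111', 'qf', 'B')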


TM's with unerased input tapes:

Not all paths in the state space of a machine begin with a configuration
in standard form.  Let '|=' be a relation between paths in the state graph,
defined as follows:

      <ss1,ss2> |= <ss1',ss2'>
            iff
         1. ss1 |-* ss2, and ss1' |-* ss2',
         2. the TM halts in qf in ss2, and
         3. there exists d, o, and i in S* such that
            ss2 = d_qf_o, and ss1' = d_q0_i.

Thus, the '|=' relata can be compared to successive iterations of a
program whose main loop waits for input (i) synchronously, and maintains
a "database" of some sort (d).  The "database" is stored as a prefix on
the input and output tape configurations.

Since the TM is deterministic, the waiting state (if there is one) is a
function of the initial state.  There is no difference in computing power
due to the new arrangement of the tape - the difference is only notational.
The obvious point is that the TM does not, in this notation, compute 'o'
as a function of 'i'.

True, the arrangement of the data on the tape is only a convenience.  The
important concept here is the relation |=.   It defines a directed graph
in the space of *paths*, just as |- defined a graph in the space of *states*.
Each time the old TM was used to compute a value, it moved along a path in
state space.  Not all machines do so in a well-behaved way, of course.
Some paths never terminate, and some that do terminate end in a non-
standard configuration.

Likewise, each time the new TM halts and is restarted with a new input,
it moves along an edge of the graph in path-space.  But this motion is NOT
deterministic - from any termination, there are many possible inputs which
could be provided to the machine, and an edge will represent each possibility.
Call a sequence (possibly infinite) P(i) of paths a "chain" iff for every i,
P(i) |= P(i+1).  The shape of a chain is determined partly by the machine,
and partly by the sequence of inputs.
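
In code, one |= edge is just "restart the halted machine on fresh input,
keeping the database prefix d".  A sketch that continues the one above,
assuming its run() and toy table D:

    # One |= edge: from a halt ss2 = d_qf_o, restart at ss1' = d_q0_i.

    def next_calculation(halted, i, D, q0='q0', qf='qf'):
        d, q, o = halted
        assert q == qf                 # condition 2: prior run halted in qf
        return run((d, q0, i), D)      # condition 3: new start is d_q0_i

    c1 = run(('B', 'q0', '11'), D)     # first calculation
    c2 = next_calculation(c1, '11', D) # the very same input again
    print(c1, c2)    # ('B111', 'qf', 'B') ('B111111', 'qf', 'B')

The same input, given twice, yields two different halting configurations:
'o' is visibly not a function of 'i'.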

The intuitively obvious facts captured by the relation |= include:

      1.  A computer is used repeatedly to compute different values.
      2.  The number of times a computer is used is unlimited in
          principle.
      3.  The device may modify its future operation in the course
          of performing the current task.  (by updating the database)
      4.  The sequence of values input to the computer cannot be
          specified in advance.

It may take a moment of thought to verify that while the set of paths
for any TM is denumerable, the set of chains is not.  (An infinite chain
is determined by an infinite sequence of inputs from S*, and the set of
infinite sequences over a denumerable set is non-denumerable, by the
usual diagonal argument.)

The most important point here is that we should not suppose that by
identifying computers with Universal Turing Machines we have said anything
very specific about what they can and cannot do.  A Turing machine is a
very flexible formalism.  Many familiar theorems about TM's refer only
to a few of the many ways they can be employed.  Real computers implement
the TM model in an especially powerful way.


Ken Presting  ("I only want to see my name in pixels")

kohout@cme.nist.gov (Robert Kohout) (08/07/90)

To "formally" defend his assertion that "real" computers are "obviously"
more powerful than Turing Machines...

} In article <614@ntpdvp1.UUCP> kenp@ntpdvp1.UUCP (Ken Presting) writes:

} Some preliminaries:

<TM defined>

}TM's with unerased input tapes:
} 
} Not all paths in the state space of a machine begin with a configuration
} in standard form.  Let '|=' be a relation between paths in the state graph,
} defined as follows:
} 
}       <ss1,ss2> |= <ss1',ss2'>
}             iff
}          1. ss1 |-* ss2, and ss1' |-* ss2',
}          2. the TM halts in qf in ss2, and
}          3. there exists d, o, and i in S* such that
}             ss2 = d_qf_o, and ss1' = d_q0_i.
} 
} Thus, the '|=' relata can be compared to successive iterations of a
} program whose main loop waits for input (i) synchronously, and maintains
} a "database" of some sort (d).  The "database" is stored as a prefix on
} the input and output tape configurations.
} 
} Since the TM is deterministic, the waiting state (if there is one) is a
} function of the initial state.  There is no difference in computing power
} due to the new arrangement of the tape - the difference is only notational.
} The obvious point is that the TM does not, in this notation, compute 'o'
} as a function of 'i'.
} 
Interesting, Ken, but what's the point? Do you intend to imply that this
relation represents something that computers "do" and Turing Machines don't?
Remember, non-determinism adds nothing to the computing power of a TM. If
a non-deterministic TM can compute a function, then there exists a deterministic
machine which can compute the same function.

} 
}       1.  A computer is used repeatedly to compute different values.
}       2.  The number of times a computer is used is unlimited in
}           principle.
}       3.  The device may modify its future operation in the course
}           of performing the current task.  (by updating the database)
}       4.  The sequence of values input to the computer cannot be
}           specified in advance.
} 
None of these are beyond the capacity of a Turing Machine.

} It may take a moment of thought to verify that while the set of paths
} for any TM is denumerable, the set of chains is not.
} 
Do you mean to imply by this that a computer is more powerful than a TM?
Do you REALLY think you are refuting Church's Thesis?

} The most important point here is that we should not suppose that by
} identifying computers with Universal Turing Machines we have said anything
} very specific about what they can and cannot do.  A Turing machine is a
} very flexible formalism.  Many familiar theorems about TM's refer only
} to a few of the many ways they can be employed.  Real computers implement
} the TM model in an especially powerful way.
} 
} 
This seems to me a sort of capitulation - that is, you no longer assert that
it is "perfectly obvious" that a conventional machine is more powerful than
a TM, but rather that "real" computers are "especially powerful" TM's. You
certainly haven't shown anything that computers "do" that TM's don't. For
that matter, you haven't made a single assertion about this |= relation
insofar as it applies to computers and not Turing Machines.

Why don't you just admit it, Ken?  What was "perfectly obvious" to you is
just plain wrong.

} Ken Presting  ("I only want to see my name in pixels")
  
R.Kohout
  

kenp@ntpdvp1.UUCP (Ken Presting) (08/09/90)

In article <5644@puggsly.cme.nist.gov>, kohout@cme.nist.gov (Robert Kohout) writes:
>
> To "formally" defend his assertion that "real" computers are "obviously"
> more powerful than Turing Machines...

Bob, you are now misquoting me.  Let me remind you of my assertion:

  >. . . Each time a TM is given a particular input, the tape is erased
  >and the internal state is reset.  So the TM computes a *function* - same
  >input => same output.  That is obviously false for real computers, which
  >have updateable databases. . . .
  >
  >Theorems about TM's, which compute functions, do not apply to real machines,
  >for purely formal reasons.  Real machines do not, in the technical sense,
  >compute functions.

This assertion is by no means inconsistent with Church's Thesis, nor
does it imply that real computers are more "powerful" than Turing Machines.
It says simply that those real computers which make use of an updateable
database are different from TM's.
 
> } In article <614@ntpdvp1.UUCP> kenp@ntpdvp1.UUCP (Ken Presting) writes:
> 
> } Since the TM is deterministic, the waiting state (if there is one) is a
> } function of the initial state.  There is no difference in computing power
> } due to the new arrangement of the tape - the difference is only notational.
> } The obvious point is that the TM does not, in this notation, compute 'o'
> } as a function of 'i'.
> } 
> Interesting, Ken, but what's the point? Do you intend to imply that this
> relation represents something that computers "do" and Turing Machines don't?

If you will recall the assertion that I set out to formalize, you will perhaps
recognize that I have just made my point:  when there is something on the
tape besides "input", then the "output" is not a function of the "input".
I hope that you can now agree that this is both obvious and trivial.

What is interesting to me is the differences between the relations '|=' and
'|-'.   In the first place, there are non-denumerably many chains (paths
under |=), whereas there are only denumerably many paths under |-.  Second, for any
two states s, s' where s |- s', the information content (in the sense of
Kolmogorov-Chaitin complexity) of s' exceeds that of s by at most an
additive constant.  In other words, a TM can lose information but can
scarcely gain any.  This is not so
under the relation |=, which holds not between TM states, but between whole
*calculations*.

Bob, these are formal claims, and I would be delighted to have you or anyone
else correct me if they are mistaken.


> Remember, non-determinism adds nothing to the computing power of a TM. If
> a non-deterministic TM can compute a function, then there exists a deterministic
> machine which can compute the same function.

This fact is familiar to me also.  The relation |= is defined over the
calculations of a deterministic TM, so the D/ND issue is not directly
relevant.  But note that for any calculation <ss1,ss2>, there are infinitely
many other calculations <ss1',ss2'> such that <ss1,ss2> |= <ss1',ss2'>.
This is importantly different from |-, where only a finite number of
configurations s' are accessible from any s.

> 
> }       1.  A computer is used repeatedly to compute different values.
> }       2.  The number of times a computer is used is unlimited in
> }           principle.
> }       3.  The device may modify its future operation in the course
> }           of performing the current task.  (by updating the database)
> }       4.  The sequence of values input to the computer cannot be
> }           specified in advance.
> } 
> None of these are beyond the capacity of a Turing Machine.

It might help to avoid further confusion if you would take the trouble to
explain this claim.

> 
> } It may take a moment of thought to verify that while the set of paths
> } for any TM is denumerable, the set of chains is not.
> } 
> Do you mean to imply by this that a computer is more powerful than a TM?
> Do you REALLY think you are refuting Church's Thesis?

You must have been very excited to think that you found a "Pro-Searlite"
(which I am not) who believes he can refute Church's Thesis (which I do
not).  Sorry to disappoint you, but you have only your reading skills to
blame.

> . . . You
> certainly haven't shown anything that computers "do" that TM's don't. For
> that matter, you haven't made a single assertion about this |= relation
> insofar as it applies to computers and not Turing Machines.

No.  You have asserted without argument that my (1-4) are not "beyond the
capacity of a Turing Machine."  That is pure bluff.  Or perhaps you are
unfamiliar with the concept of "non-denumerable set".

> 
> Why don't you just admit it, Ken?  What was "perfectly obvious" to you is
> just plain wrong.

Bob, it is time for *you* to put up or shut up.  I am anxious to discuss the
formal side of the issues raised by the Chinese Room, especially in regard
to radical interpretation, the representation of abstract concepts, and
(most to the point) learning.  I wish you would contribute to such a
discussion.

My impression is that you have no intention of offering us any mathematics.
Your recent claim:

  > ... If it were possible to represent intelligence
  > by a purely formal definition, then of course we could program a purely
  > formal system to be 'intelligent' ...

shows that your ability to apply the ideas of mathematics to the problems
of your profession is severely restricted.


Ken Presting   ("Punchcards on the table")

kohout@cme.nist.gov (Robert Kohout) (08/09/90)

In article <620@ntpdvp1.UUCP> kenp@ntpdvp1.UUCP (Ken Presting) writes:
>
>In article <5644@puggsly.cme.nist.gov>, kohout@cme.nist.gov (Robert Kohout) writes:
>>
>> To "formally" defend his assertion that "real" computers are "obviously"
>> more powerful that Turing Machines...
>
>Bob, you are now misquoting me.  Let me remind you of my assertion:
>
>  >. . . Each time a TM is given a particular input, the tape is erased
>  >and the internal state is reset.  So the TM computes a *function* - same
>  >input => same output.  That is obviously false for real computers, which
>  >have updateable databases. . . .
>  >
>  >Theorems about TM's, which compute functions, do not apply to real machines,
>  >for purely formal reasons.  Real machines do not, in the technical sense,
>  >compute functions.
>

Very interesting, Ken. I am misquoting you. Perhaps you "overlooked" the
sentence that should be replaced by the ". . ." preceding your quotation
above. Lest we forget:

Message-ID: <612@ntpdvp1.UUCP>

"But I think it is perfectly obvious that real computers are more than TM's.
Not because they are physical objects, but because they have a permanent
memory.  Each time a TM is given a particular input, the tape is erased..."

As I said in my first posting, which was primarily a response to the FIRST
sentence, I'm quite sure I misread you. However, what you said is

"I think it *PERFECTLY OBVIOUS* that *REAL* computers are *MORE THAN* TM's."

(emphasis mine)

Obviously, I have misinterpreted what you meant by "more than". That is,
you apparently feel that "more powerful than" and "more than" are so
essentially non-equivalent as to constitute a mis-quotation.  However,
your obvious omission of this sentence in the quote you now post seems
to me to indicate that you no longer wish to defend this statement...
Why can't you just admit that?

R.Kohout

ewe3_ss@uhura.cc.rochester.edu (Erdman West) (08/10/90)

In article <620@ntpdvp1.UUCP> kenp@ntpdvp1.UUCP (Ken Presting) writes:
>I hope that you can now agree that this is both obvious and trivial.

Excuse me, but I really hate it when someone feels the need to be superior
and express to the world how awesome his mind is so much so that he has to
point out "obvious" and "trivial" things to other peons.  If they were
obvious and trivial then you wouldn't be pointing them out, so since they're
not, then don't say that.  It makes you sound smug and superior and just
injects a sense of a battle with the outcome determining some kind of
winner.  We are not here to show who's "the best, the brightest", but to
find excellent ideas and well-constructed arguments and clear points.  I
post this to the net and not through e-mail to Ken only because this type of
attitude is displayed here all too often.

Scott West 
"A good idea doesn't care who has it, and neither should you."

kohout@cme.nist.gov (Robert Kohout) (08/10/90)

In article <620@ntpdvp1.UUCP> kenp@ntpdvp1.UUCP (Ken Presting) writes:
>
>> Interesting, Ken, but what's the point? Do you intend to imply that this
>> relation represents something that computers "do" and Turing Machines don't?
>
>If you will recall the assertion that I set out to formalize, you will perhaps
>recognize that I have just made my point:  when there is something on the
>tape besides "input", then the "output" is not a function of the "input".
>I hope that you can now agree that this is both obvious and trivial.
>
Imagine I have a non-deterministic TM that begins its computation by
writing a random string of data on its tape. It then uses this data,
along with its input, to compute whatever its output will be. Since the
TM is non-deterministic, we can assume that this "random" string is,
in fact, whatever happens to be on the tape of your machine at the "start".
This TM will then compute whatever your machine computes. 

A TM that prints out occasional values is no more powerful than one
which prints out only one. A 2-tape TM is no more powerful than one with
1 tape. Why can't a TM keep a database of its history on, say, a second tape?
If you're trying to say that the "power" of a real computer comes from
its ability to accept arbitrary inputs, can't a non-deterministic TM just
"guess" these inputs up front and proceed from there?

>> }       1.  A computer is used repeatedly to compute different values.
>> }       2.  The number of times a computer is used is unlimited in
>> }           principle.
>> }       3.  The device may modify its future operation in the course
>> }           of performing the current task.  (by updating the database)
>> }       4.  The sequence of values input to the computer cannot be
>> }           specified in advance.
>> } 
>> None of these are beyond the capacity of a Turing Machine.
>
>It might help to avoid further confusion if you would take the trouble to
>explain this claim.
>
1. Perhaps technically a TM computes only one value, but it is possible
to 'output' several values over the course of a single 'computation'. Perhaps
the TM implements an operating system and technically maps zero to zero.
We don't have to care about the 'function' being computed, and it is
certainly within the capabilities of a TM to perform what may seem like
'several' computations in the course of only a single calculation.

2. Again, while the number of times a TM may be used is technically 1,
this does not prevent us from implementing a TM which, in the course of
its one computation actually performs infinitely many sub-computations.

3. In the context described in 1 and 2 , a TM can certainly keep a database. 
Moreover, since a non-deterministic TM can effectively "guess" its database
at start-up, it generally may not need one.

4. What the sequence of input values is, I do not know, and I do not care.
If you are saying that a real machine can accept its inputs in little
chunks, while a TM requires its input up front I maintain that this adds
nothing to the computing ability of the machine. Obviously, one could take
the entire input over the life of a real machine and encode it in some
fashion that could suffice to be the single, "initial" input of a TM.
Also, what is to prevent a non-deterministic TM from "guessing" user
input whenever it is required?
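
To put that last point in a sketch (mine, and purely illustrative): a
machine with memory is, extensionally, a pure function of its entire
input history.

    # Fold the input history through a pure step function: same history
    # in, same outputs out.  The machine computes a function after all,
    # just not a function of the single most recent input.

    def pure_step(db, x):              # pure: same (db, x) => same result
        db = db + [x]
        return db, x + len(db)

    def run_history(history):
        db, outputs = [], []
        for x in history:
            db, y = pure_step(db, x)
            outputs.append(y)
        return outputs

    print(run_history([1, 1]))         # [2, 3] - the "same input, different
                                       # output" behavior, recovered from a
                                       # pure function of the whole history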

>> Do you mean to imply by this that a computer is more powerful than a TM?
>> Do you REALLY think you are refuting Church's Thesis?
>
>You must have been very excited to think that you found a "Pro-Searlite"
>(which I am not) who believes he can refute Church's Thesis (which I do
>not).  Sorry to disappoint you, but you have only your reading skills to
>blame.
>
>> . . . You
>> certainly haven't shown anything that computers "do" that TM's don't. For
>> that matter, you haven't made a single assertion about this |= relation
>> insofar as it applies to computers and not Turing Machines.
>
>No.  You have asserted without argument that my (1-4) are not "beyond the
>capacity of a Turing Machine."  That is pure bluff.  Or perhaps you are
>unfamiliar with the concept of "non-denumerable set".
>
I don't see how you can have it both ways. Either Church's thesis is incorrect,
or a RAM machine cannot do anything that a TM cannot do. Which is it?
Perhaps it IS my reading, but you seem to be saying that no, you do not
dispute Church's Thesis, and here is a list of things that a computer can
do and a TM cannot. I claimed that nothing in your list is
beyond the capacity of a TM, and now you ask me to substantiate this
claim. Is this not questioning my statement? And in questioning my assertion,
are you not also implying that you believe that a RAM machine has certain
capabilities which a TM does not? And does that not violate Church's Thesis?

>
>Ken Presting   ("Punchcards on the table")

Bob Kohout

Victoria_Beth_Berdon@cup.portal.com (08/13/90)

scott west and robert kohout -- did you receive e-mail from me re:kenp?  I do
not know if my routing works.  thanks.  victoria

kenp@ntpdvp1.UUCP (Ken Presting) (08/29/90)

In article <8876@ur-cc.UUCP>, ewe3_ss@uhura.cc.rochester.edu (Erdman West) writes:
> In article <620@ntpdvp1.UUCP> kenp@ntpdvp1.UUCP (Ken Presting) writes:
> >I hope that you can now agree that this is both obvious and trivial.
> 
> Excuse me, but I really hate it when someone feels the need to be superior
> and express to the world how awesome his mind is so much so that he has to
> point out "obvious" and "trivial" things to other peons.  If they were

I have been away for two weeks, and was disappointed to find this article.

Scott, you may have missed the articles by Bob Kohout which prompted my 
condescending tone.  His postings included very specific assertions of
exactly the type you deplore - that those who disagree with his opinion
of John Searle are too incompetent to merit serious attention.

My response to his sarcastic insults was an honest effort in plain algebra,
and the remark to which you object is intended to *minimize* the significance
of my mathematical point.  

> obvious and trivial, you wouldn't be pointing them out; since they're
> not, don't say that.  It makes you sound smug and superior and just

This is a mistake.  To say that a point is "obvious" is to say that one
considers argument unnecessary.  To say that a point is "trivial" is to say
that one considers arguing about it to be unrewarding.  Often the formal
structure of a larger argument or explanation requires mentioning facts 
which are obvious and trivial.  To say that something is obvious and trivial
can help to avoid confusion, and focus attention on important points.


> injects a sense of a battle with the outcome determining some kind of
> winner.  We are not here to show who's "the best, the brightest", but to
> find excellent ideas and well-constructed arguments and clear points.  I
> post this to the net and not through e-mail to Ken only because this type of
> attitude is displayed here all too often.

I second this idealistic attitude, but I must confess to having a finite
amount of patience.  By no means do all contributors to the Net take the
spirit of curiosity and exploration as their guide.  Some simply want to see
opposing views disappear, and are willing to use ridicule to that end.  

Usenet may be famous for the flame, but it is not alone in providing a forum
for ad hominem attacks.  Anyone who thinks in public (in print or at 
conferences) is likely to find some unpleasant adjectives attached to their
work.  

This fact has important consequences for cognitive science.  Thinking simply
does not occur in the absence of emotional responses.  Views are never
adopted purely on the grounds of logical argument.  That does not justify
illogic, of course.  It does imply, however, that to make a contribution to
a discussion conducted by human beings, one cannot ignore the emotions of
oneself and one's interlocutors.  It follows that pure inference engines will
never pass for human.

Kohout seems to have supposed that he could achieve his ends better by fighting
than by cooperating.  Since he has repeatedly insulted my intelligence in 
no uncertain terms, I decided, rather than letting the issue drop, to fight
back.  I am glad to see that Bob has taken the trouble to post an article 
with some formal objections.  Perhaps the tone of the discussion will now
change.

Perhaps we will both learn something. 

> Scott West
> "A good idea doesn't care who has it, and neither should you."

Ken Presting   ("You were expecting a saint?")