[net.philosophy] Sc--nce Attack

franka@mmintl.UUCP (Frank Adams) (01/01/70)

[Not food]

In article <541@spar.UUCP> ellis@max.UUCP (Michael Ellis) writes:
>    Excerpts from `Minds, Brains, Programs': [by John Searle]
>
>	The problem with a brain simulator is that it is simulating the
>	wrong things about the brain. As long as it only simulates the
>	formal structure of the sequence of neural firings at synapses, it
>	won't have simulated what matters about the brain, namely its causal
>	properties, its ability to produce intentional states.

This seems to me to be a rather large jump.  I would assert that such a
simulation simulates exactly what matters about the brain, namely its causal
properties, its ability to produce intentional states.  What are these things
that a simulation of the neural firings fails to capture?
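
For concreteness, here is a minimal sketch of what simulating "the formal
structure of the sequence of neural firings" might amount to: a leaky
integrate-and-fire loop.  The model and every constant in it are assumptions
of mine for illustration, nothing Searle specifies; the point is only that
the simulation's entire content is which units fire when.

    import random

    N = 100                     # number of model neurons
    THRESHOLD = 1.0             # potential at which a unit "fires"
    LEAK = 0.9                  # per-step decay of membrane potential

    # sparse random excitatory connections between units
    weights = [[random.uniform(0.0, 0.2) if random.random() < 0.1 else 0.0
                for _ in range(N)] for _ in range(N)]
    potential = [random.random() for _ in range(N)]

    for step in range(50):
        fired = [i for i in range(N) if potential[i] >= THRESHOLD]
        for i in fired:
            potential[i] = 0.0                      # reset after firing
        for i in range(N):
            potential[i] *= LEAK                    # leak toward rest
            potential[i] += random.uniform(0.0, 0.15)   # outside drive
            for j in fired:
                potential[i] += weights[j][i]       # integrate spikes
        print(step, fired)                          # the firing sequence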

>	...
>	No one would suppose that we could produce milk and sugar by running
>	a computer simulation of the formal sequences in lactation and
>	photosynthesis; but where the mind is concerned, many people are
>	willing to believe in such a miracle, because of ... a deep and
>	abiding dualism: the mind they suppose is a matter of formal
>	processes, independent of specific material causes in a way that
>	milk and sugar are not.

But no one would deny that we could produce [a performance of] a piece of
music by having a computer generate the vibrations in the air which exactly
match those of a real performance.  It is not necessary to use the actual
instruments of the original performance.

The only dualism I believe in [in this context] is that between physical
substance and information.  The essence of milk and sugar is their substance;
the essence of a mind or a piece of music is its structure.

>    Nope. But science forges ever onward, philosophical and psychological
>    speculation leading the way.

... or following behind.

Frank Adams                           ihnp4!philabs!pwa-b!mmintl!franka
Multimate International    52 Oakland Ave North    E. Hartford, CT 06108

ellis@spar.UUCP (Michael Ellis) (08/30/85)

>>>[Rosen] >>[Gates] >[Rosen]

>>>Science stinks!  Anyone who thinks science holds... Who are they to
>>>shatter the foundations of my beliefs?
>> I really don't see the purpose for this kind of retaliation,
>> though.  Yes, what you write here would sound ridiculous, if it were true.
>Retaliation?  Against what?  Indeed, it is very true:  this is typical
>of the tone of more than a few articles on the topic of science.

    Nobody here (including me) has ever said disparaging things about 
    science. On the other hand, many, including myself, refuse to
    accept your repeated assertions (that Sc--nce can ultimately
    know All That Is).

    In other words, it is not science, but rather your blind dogmatic
    assertions of faith unsupported by evidence or reason, that provoke the
    distasteful "tone" you refer to.

>> However, all I've seen from you is a statement of your beliefs,
>> followed by a statement of allegiance to science, followed by tirades if
>> anyone dares to differ.
>
>Tirades?  THAT was a "tirade"?  Hmmm.  Satire is but one of many tools
>of persuasion.  Sometimes it is the only way to reach some people.  The
>fact remains that the assertions made by others about science being to
>"blame" for a long list of things harken right back to the list I offered.  
>Their lists were no less funny.

    "Tools of persuasion"?

    Why not try logic first? 

    You have NEVER presented a single rigorous argument or hard evidence
    that the Sc--ntific Method is the ultimate determiner of truth in all
    areas of human interest (e.g. music, love...).

>> But to address what you've said above: as I've already said, IF science were
>> able to prove beyond doubt that certain things that I hold "near and dear"
>> were false, I would accept that.  However, science is not (yet?) able to do
>> that with things like love, beauty, art, etc.  Thus, the point I made 
>> earlier is still valid:  analysis at the current time is inconclusive 
>> using current scientific techniques.
>
>But science is not "out" to prove such things "false" or "non-existent".
>Clearly they exist as constructs of the human mind in categorizing such
>things. True objective inquiry could, given the right tools and the
>opportunity, figure out what sorts of things trigger human responses
>like "love" and "beauty", in general and in specific.  
>I hardly think that would "destroy" such things, it would merely "take all
>the mystery out of life".

     Totally bogus -- mystery IS beauty (to some of us, anyway..)

     For me, beauty has both surprise/spontaneity and rhythm/predictability,
     in abundance, and even qualities that I do not like.

     As examples: prime numbers, punk rock, the sea, or frogs.
     
     If the mysteries of prime numbers were uncovered, I'd simply find 
     more interesting numbers. If a machine could predict my taste, then I'd
     make a point of randomly liking against its predictions. If the sea
     were tamed, I'd look at the stars instead. And frogs -- they will
     always be totally beyond human understanding (a belief, I know).

>However, other similar notions like free will appear to be illusions
>of a self-monitoring mind that thinks "I am freely wanting to do this".

     But you still believe your definition of free will is the `real'
     definition (as if there were exactly one) -- regardless of the
     overwhelming rejection your definition has received.

     As a person who clearly has no free will, you might as well be a blind
     person telling us that vision is impossible...

>Science doesn't give us "good" or "bad" things.
>Science gives us facts.  Do you have any idea why alchemy never gave us
>"bad" things?  Because it didn't provide anything worth using for good OR
>evil!!  (That's an oversimplification, we did get things like symbology
>from alchemy, but the actual "chemical" learning of alchemy didn't work.)

    Alchemy also gave us Chemistry.

    You might try reading Feyerabend's _Against_Method_. He argues
    that we are all on the verge of becoming blind religious fanatics
    (exactly like you, Rich!!) and have been so brainwashed by our
    institutionalized worship of the omnipotent Sc--ntific Method that
    we fail to perceive value in the diverse sources from which our
    real inspirations spring. 

>>>"to be nobody but yourself in a world which is doing its best night and day
>>> to make you like everybody else means to fight the hardest battle any human
>>> being can fight and never stop fighting."  - e. e. cummings
>
>> Oh, you poor, poor, shackled martyr, you!  Poor baby is fighting so hard
>> against such overwhelming persecutions!  Give me a break, Rich.
>
>"Poor shackled martyr"?  Where the fuck do you get off?  "Poor baby"?  I'd
>venture a guess that you are the poor baby, with your need to ridicule even
>my choice of signature quotes in an effort to support your position.  What
>IS your point?  Do you have some problem with the quote?  I find it quite
>relevant in everyday life.  If you don't like it, tough shit.  It has
>nothing to do with the argument at hand, but I suppose you'll attack anything
>to get what you want out of this argument.

    Funny you should complain, Rich, considering how many times you've
    flamed at my SMASH CAUSALITY!!! signoff...

>> What really gets me, though, are your continued statements that anyone who
>> "limits" science is really, deep-down, scared of having his "nears and
>> dears" torn apart and shown to be false.  And yet you respond to my (?)
>> last article with the tirade above?  WHO'S REALLY SCARED, RICH?  Aren't
>> you just as doggedly defending your own side?
>
>In an age in which thinking things through is out of fashion, where people
>are being taught to use the "right side of the brain" without having
>mastered the use of the left, and where religious autocrats would squelch
>the teaching of scientific inquiry and logic as a means of thinking and
>reaching conclusions, you bet I'm scared.  Scared that wishy-washy-ful
>thinkers will shred human learning and bring us back to the dark ages of
>willy nilly superstition.  But of the two of us, who is REALLY scared?

    Rich, you might spend more time learning how to use any part of your mind
    whatsoever.

    You confuse understanding one's full human nature (not just rationality
    but also subjective intuitive things) with rigid authoritarian
    religion (which denies all mental functioning whatsoever).  Funny that
    you seem to possess precisely those same unthinking illogical qualities
    you fear most while you lack those which you claim to uphold.
    
    Religion (including your inflexible Sc--ntism) without awareness is
    ready to send this planet a lot farther back than the middle ages --
    like maybe the Paleozoic!
   
    I believe the real evil of religion occurs when mechanical and dogmatic
    assertions of faith become so compulsive that communication with those
    of different viewpoints becomes impossible.

>And who really needs to be?  You speak of recognizing science's limitations.
>Does that mean you simply don't use the scientific method of analysis when
>it comes to things you perceive to be beyond those limits, in order to
>say "thus my ideas are true"?  That's what the wishful thinkers are doing.
>What isn't this method suited for, and why?  In what cases do you simply
>discard it, and in favor of what?  Do tell us.

    Case in point: Turing's test for machine consciousness.

    How can we determine if an entity actually has conscious awareness?

    The only way to REALLY know is to ACTUALLY BE the entity in question.
    
    There is no such thing as `objective scientific evidence' for awareness.

    Consequently, if verifiable OBJECTIVE evidence is the only valid
    determiner for existence -- I (my conscious awareness) do not exist.
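
    As a reminder of why this is a case in point: the Turing test is a
    purely behavioral protocol.  A minimal sketch (every name below is
    invented for illustration) shows that each step consumes only text,
    so no step could, even in principle, observe awareness itself:

        import random

        def imitation_game(ask, decide, human, machine, rounds=10):
            """ask(transcript) -> question; decide(transcript) -> "A"/"B";
            human(q) and machine(q) -> answer strings."""
            a, b = ((human, machine) if random.random() < 0.5
                    else (machine, human))      # hide which is which
            transcript = []
            for _ in range(rounds):
                q = ask(transcript)
                transcript.append({"Q": q, "A": a(q), "B": b(q)})
            guess = decide(transcript)          # judge names the human
            truth = "A" if a is human else "B"
            return guess == truth               # was the judge right?

        # a judge who cannot tell them apart is right about half the time
        wins = sum(imitation_game(lambda t: "any question?",
                                  lambda t: random.choice("AB"),
                                  lambda q: "yes", lambda q: "yes")
                   for _ in range(1000))
        print(wins / 1000.0)    # ~0.5: behavior alone decides everything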

-am I existent yet?

rlr@pyuxd.UUCP (Rich Rosen) (09/03/85)

>     Nobody here (including me) has ever said disparaging things about 
>     science. On the other hand, many, including myself, refuse to
>     accept your repeated assertions (that Sc--nce can ultimately
>     know All That Is). [ELLIS]

What I assume you mean is YOUR repeated assertions that I am asserting
this.

>     In other words, it is not science, but rather your blind dogmatic
>     assertions of faith unsupported by evidence or reason, that provoke the
>     distasteful "tone" you refer to.

In other words, since I haven't uttered any such assertions, you have no
reason for being distasteful.  I *have* uttered things about the way some
people jump up and down and say "there are limits to science, thus you
should accept my arbitrary conception of what's going on".

>     "Tools of persuasion"?
> 
>     Why not try logic first? 

I did.  It didn't work.

>     You have NEVER presented a single rigorous argument or hard evidence
>     that the Sc--ntific Method is the ultimate determiner of truth in all
>     areas of human interest (e.g. music, love...).

"Areas of human interest"?  Oh, please!  The fact remains that when it comes
to truth, hard truth about the real world, it is a better determiner than
the wishful thinking you would have us engage in.

>>But science is not "out" to prove such things "false" or "non-existent".
>>Clearly they exist as constructs of the human mind in categorizing such
>>things. True objective inquiry could, given the right tools and the
>>opportunity, figure out what sorts of things trigger human responses
>>like "love" and "beauty", in general and in specific.  
>>I hardly think that would "destroy" such things, it would merely "take all
>>the mystery out of life".

>      Totally bogus -- mystery IS beauty (to some of us, anyway..)

Great!  Then what you're saying is that not knowing about what something
is really like increases its beauty to you.  Imagine that---a scientific
analysis of what makes things beautiful!  Now, I'm not even thinking of
saying that that is the truth about beauty (we have hardly done any
study of it to show that this rule applies in all cases), but apparently
it applies in your case.

>      For me, beauty has both surprise/spontaneity and rhythm/predictability,
>      in abundance, and even qualities that I do not like.

More scientific analysis of beauty (so to speak).  Great!
     
>      If the mysteries of prime numbers were uncovered, I'd simply find 
>      more interesting numbers. If a machine could predict my taste, then I'd
>      make a point of randomly liking against its predictions. If the sea
>      were tamed, I'd look at the stars instead. And frogs -- they will
>      always be totally beyond human understanding (a belief, I know).

No, obviously a fact.  So what you have offered us is at least a potential
for a rigorous analysis of at least one factor in beauty, something you claimed
was unknowable.  But, in fact, mysteriousness to you makes things (potentially)
more beautiful.  Beautiful!!!  You have surprised me whilst offering a
certain formic cadence to your beliefs that is quite beautiful.  No
incompatibility there.

>      But you still believe your definition of free will is the `real'
>      definition (as if there were exactly one) -- regardless of the
>      overwhelming rejection your definition has received.

Imagine that.  Not cottoning to a coup d'etat of the English language by
those who want to change word meanings around while everyone else sleeps
so that they can "get" what they "want".  How awful of me!

>      As a person who clearly has no free will, you might as well be a blind
>      person telling us that vision is impossible...

I might as well, I guess, since I'm blind to the intended meaning of this
"analogy"...  How can you compare something that does exist with something
that doesn't?  (Assuming your conclusion?)

>>Science doesn't give us "good" or "bad" things.
>>Science gives us facts.  Do you have any idea why alchemy never gave us
>>"bad" things?  Because it didn't provide anything worth using for good OR
>>evil!!  (That's an oversimplification, we did get things like symbology
>>from alchemy, but the actual "chemical" learning of alchemy didn't work.)

>     Alchemy also gave us Chemistry.

Eventually, through efforts of those who turned it around (sacrilegiously)
into something viable.  But what relevance does that have to my point?

>     You might try reading Feyerabend's _Against_Method_. He argues
>     that we are all on the verge of becoming blind religious fanatics
>     (exactly like you, Rich!!) and have been so brainwashed by our
>     institutionalized worship of the omnipotent Sc--ntific Method that
>     we fail to perceive value in the diverse sources from which our
>     real inspirations spring. 

Really?  Yes, we may "place too much emphasis" on method, but real creative
people still get real creative inspirations, from knowing how to use their
minds and position them in the most productive way.  And how do they know
this?  From learning about the method of how they think.  My fear is that
we are all on the verge of becoming wishy-washy-ful thinkers (exactly like
you, Mike) out of fear of what the real world has been learned to be like,
and how that doesn't fit with what we "want".

>>>>"to be nobody but yourself in a world which is doing its best night and day
>>>> to make you like everybody else means to fight the hardest battle any human
>>>> being can fight and never stop fighting."  - e. e. cummings

>>> Oh, you poor, poor, shackled martyr, you!  Poor baby is fighting so hard
>>> against such overwhelming persecutions!  Give me a break, Rich.

>>"Poor shackled martyr"?  Where the fuck do you get off?  "Poor baby"?  I'd
>>venture a guess that you are the poor baby, with your need to ridicule even
>>my choice of signature quotes in an effort to support your position.  What
>>IS your point?  Do you have some problem with the quote?  I find it quite
>>relevant in everyday life.  If you don't like it, tough shit.  It has
>>nothing to do with the argument at hand, but I suppose you'll attack anything
>>to get what you want out of this argument.

>     Funny you should complain, Rich, considering how many times you've
>     flamed at my SMASH CAUSALITY!!! signoff...

"Smash this!" or "What are you smashing today?" is very different from tripe
like "poor shackled martyr".  Or is that just me being subjective?  Or you?

>>>What really gets me, though, are your continued statements that anyone who
>>>"limits" science is really, deep-down, scared of having his "nears and
>>>dears" torn apart and shown to be false.  And yet you respond to my (?)
>>>last article with the tirade above?  WHO'S REALLY SCARED, RICH?  Aren't
>>>you just as doggedly defending your own side?

>>In an age in which thinking things through is out of fashion, where people
>>are being taught to use the "right side of the brain" without having
>>mastered the use of the left, and where religious autocrats would squelch
>>the teaching of scientific inquiry and logic as a means of thinking and
>>reaching conclusions, you bet I'm scared.  Scared that wishy-washy-ful
>>thinkers will shred human learning and bring us back to the dark ages of
>>willy nilly superstition.  But of the two of us, who is REALLY scared?

>     Rich, you might spend more time learning how to use any part of your mind
>     whatsoever.

And to think two weeks ago you apologized for this sort of vindictive shit.
Yes, Michael, do smash something, please.

>     You confuse understanding one's full human nature (not just rationality
>     but also subjective intuitive things) with rigid authoritarian
>     religion (which denies all mental functioning whatsoever).  Funny that
>     you seem to possess precisely those same unthinking illogical qualities
>     you fear most while you lack those which you claim to uphold.
    
Confuse?  Or recognize the same roots within each?

>     Religion (including your inflexible Sc--ntism) without awareness is
>     ready to send this planet a lot farther back than the middle ages --
>     like maybe the Paleozoic!
   
"Inflexible"?  In that it demands basing conclusions on rigorous analysis
rather than "explaining things using models that make the world seem the way
you want it to be"?  How terrible!!

>     I believe the real evil of religion occurs when mechanical and dogmatic
>     assertions of faith become so compulsive that communication with those
>     of different viewpoints becomes impossible.

Is it a dogmatic assertion of faith to demand that those who would wishfully
think with the worst of them adhere to a serious and proven method of
analysis and reasoning rather than their well known way of working backwards
from the conclusion?

>>And who really needs to be?  You speak of recognizing science's limitations.
>>Does that mean you simply don't use the scientific method of analysis when
>>it comes to things you perceive to be beyond those limits, in order to
>>say "thus my ideas are true"?  That's what the wishful thinkers are doing.
>>What isn't this method suited for, and why?  In what cases do you simply
>>discard it, and in favor of what?  Do tell us.

>     Case in point: Turing's test for machine consciousness.
>     How can we determine if an entity actually has conscious awareness?
>     The only way to REALLY know is to ACTUALLY BE the entity in question.
>     There is no such thing as `objective scientific evidence' for awareness.
>     Consequently, if verifiable OBJECTIVE evidence is the only valid
>     determiner for existence -- I (my conscious awareness) do not exist.

If we can determine what it is within the organization of our brains that
gives us conscious awareness (who are you to say that this is impossible?
isn't that dogmatic???) then we could see if that existed in the machine,
could we not?  Or would that make it "less beautiful"?
-- 
"iY AHORA, INFORMACION INTERESANTE ACERCA DE... LA LLAMA!"
	Rich Rosen    ihnp4!pyuxd!rlr

barry@ames.UUCP (Kenn Barry) (09/07/85)

>>>You speak of recognizing science's limitations.
>>>Does that mean you simply don't use the scientific method of analysis when
>>>it comes to things you perceive to be beyond those limits, in order to
>>>say "thus my ideas are true"?  That's what the wishful thinkers are doing.
>>>What isn't this method suited for, and why?  In what cases do you simply
>>>discard it, and in favor of what?  Do tell us.
>
>>     Case in point: Turing's test for machine consciousness.
>>     How can we determine if an entity actually has conscious awareness?
>>     The only way to REALLY know is to ACTUALLY BE the entity in question.
>>     There is no such thing as `objective scientific evidence' for awareness.
>>     Consequently, if verifiable OBJECTIVE evidence is the only valid
>>     determiner for existence -- I (my conscious awareness) do not exist.
>
>If we can determine what it is within the organization of our brains that
>gives us conscious awareness (who are you to say that this is impossible?
>isn't that dogmatic???) then we could see if that existed in the machine,
>could we not?  Or would that make it "less beautiful"? [ROSEN]

	I think you missed the catch in the question. In order to equate
conscious awareness to any physical mechanism(s), you'd have to be able
to distinguish between actual self awareness and a perfect counterfeit
of it. Suppose I build a computer which can act *exactly* as though it
is self-aware. Suppose, further, that I give you complete access to the
machine's internals, and complete documentation, plus a staff of experts
to help you understand its operation. You would soon understand quite
well how the machine managed to act as if it were self-aware, and might
even jump to the conclusion that the machine was *not* self-aware, since
you could follow the chain of events from stimulus to response in perfect
detail, and no procedure called "self-awareness" was in the chain anywhere.
	All that would show, though, is that the machine is mechanistic
in its operation, and possesses no "free will". This would still leave
you in ignorance of whether the machine actually had awareness, or only
simulated it perfectly. As you, yourself, have argued elsewhere, human
beings are also without free will, and mechanistic in their operation,
yet they manage to be self-aware even so.
	This is the big catch. As long as "self-awareness" isn't a link
in any causal chains, then self-awareness *by* *definition* produces
no measurable effects, and can't be detected scientifically. For all
I really *know*, I am the only self-aware creature in the universe; all
you other folks are just mindless automatons.
	So, how could you *ever* determine, scientifically, whether my
machine was truly self-aware, or only simulated self-awareness?
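
	To make the catch concrete, consider a toy sketch (everything in
it is hypothetical): the machine's responses are perfectly traceable, it
*says* all the right things, and yet no step named "self-awareness"
appears anywhere in the causal chain:

    RESPONSES = {
        "are you self-aware?": "Yes -- I am aware of myself right now.",
        "how do you feel?":    "Fine, thank you for asking.",
    }

    def respond(stimulus, trace):
        trace.append(("received", stimulus))    # stimulus enters
        key = stimulus.lower().strip()
        trace.append(("lookup", key))           # table lookup
        answer = RESPONSES.get(key, "I do not understand.")
        trace.append(("emit", answer))          # response leaves
        return answer

    trace = []
    print(respond("Are you self-aware?", trace))
    for step in trace:      # the full causal chain, in perfect detail --
        print(step)         # no step in it is called "self-awareness"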

-  From the Crow's Nest  -                      Kenn Barry
                                                NASA-Ames Research Center
                                                Moffett Field, CA
-------------------------------------------------------------------------------
 	USENET:		 {ihnp4,vortex,dual,nsc,hao,hplabs}!ames!barry

rlr@pyuxd.UUCP (Rich Rosen) (09/11/85)

> 	I think you missed the catch in the question. In order to equate
> conscious awareness to any physical mechanism(s), you'd have to be able
> to distinguish between actual self awareness and a perfect counterfeit
> of it. Suppose I build a computer which can act *exactly* as though it
> is self-aware. Suppose, further, that I give you complete access to the
> machine's internals, and complete documentation, plus a staff of experts
> to help you understand its operation. You would soon understand quite
> well how the machine managed to act as if it were self-aware, and might
> even jump to the conclusion that the machine was *not* self-aware, since
> you could follow the chain of events from stimulus to response in perfect
> detail, and no procedure called "self-awareness" was in the chain anywhere.
> [KENN BARRY]

Why would I do that?  Can you define the way in which this is a "counterfeit"
and not the "real thing"?

> 	All that would show, though, is that the machine is mechanistic
> in its operation, and possesses no "free will". This would still leave
> you in ignorance of whether the machine actually had awareness, or only
> simulated it perfectly. As you, yourself, have argued elsewhere, human
> beings are also without free will, and mechanistic in their operation,
> yet they manage to be self-aware even so.

Or maybe we are just "simulating it perfectly" ourselves.  Can you define the
difference?

> 	This is the big catch. As long as "self-awareness" isn't a link
> in any causal chains, then self-awareness *by* *definition* produces
> no measurable effects, and can't be detected scientifically. For all
> I really *know*, I am the only self-aware creature in the universe; all
> you other folks are just mindless automatons.

How did you know?

> 	So, how could you *ever* determine, scientifically, whether my
> machine was truly self-aware, or only simulated self-awareness?

Again, aren't you just creating a bogus differential between "simulated"
self-awareness and "real" self-awareness?  What is the difference?
-- 
"iY AHORA, INFORMACION INTERESANTE ACERCA DE... LA LLAMA!"
	Rich Rosen    ihnp4!pyuxd!rlr

ellis@spar.UUCP (Michael Ellis) (09/11/85)

>>>  Case in point: Turing's test for machine consciousness.
>>>  How can we determine if an entity actually has conscious awareness?
>>>  The only way to REALLY know is to ACTUALLY BE the entity in question...

>>If we can determine what it is within the organization of our brains that
>>gives us conscious awareness (who are you to say that this is impossible?
>>isn't that dogmatic???) then we could see if that existed in the machine,
>>could we not?  Or would that make it "less beautiful"? [Rich]

>	I think you missed the catch in the question. In order to equate
>conscious awareness to any physical mechanism(s), you'd have to be able
>to distinguish between actual self awareness and a perfect counterfeit
>of it...
>	This is the big catch. As long as "self-awareness" isn't a link
>in any causal chains, then self-awareness *by* *definition* produces
>no measurable effects, and can't be detected scientifically. For all
>I really *know*, I am the only self-aware creature in the universe; all
>you other folks are just mindless automatons...
>	So, how could you *ever* determine, scientifically, whether my
>machine was truly self-aware, or only simulated self-awareness? [Kenn Barry]

I'd like to add more to Kenn's excellent arguments..

    Imagine futuristic experimenters somehow temporarily disabling various
    brain functions and using some standard criterion to test for awareness.
    But what objective criterion is there? Do we ask the subject if they are
    aware?  Or do we ask afterwards whether they recall events that
    transpired during the experiment? 
    
    Basically, how can we come up with a criterion to be used in the
    experiment that we know will not be vulnerable to the cases below:

        The subject has lost subjective awareness, but retains
        speech/memory/sensation/.. (or whatever the criterion may be)

        The subject still subjectively experiences awareness, despite
        loss of speech/memory/sensation/..

    I see no way around this problem, which is, incidentally, similar to the
    question "What kind of scientific evidence could there be for God?"
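
    Stated as a procedure, the dilemma looks like this (only a sketch;
    the fields are hypothetical, and "actually_aware" is, by hypothesis,
    not observable at all -- it appears below purely to exhibit the
    misclassification):

        def aware_by_criterion(subject):
            # the criterion can only consult proxies (speech, recall, ..)
            return (subject["reports_awareness"]
                    and subject["recalls_events"])

        # case 1: awareness lost, speech/memory intact
        case1 = {"reports_awareness": True, "recalls_events": True,
                 "actually_aware": False}
        # case 2: still aware, speech/memory disabled
        case2 = {"reports_awareness": False, "recalls_events": False,
                 "actually_aware": True}

        for case in (case1, case2):
            print("criterion says:", aware_by_criterion(case),
                  "/ actually:", case["actually_aware"])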

    Regardless, let us suppose that our futuristic experimenters discover a
    definite mental structure that is responsible for the human subjective
    experience of awareness. Maybe it would even be implementable on digital
    computers, maybe not. (BTW, Searle has strong arguments to the contrary..)
    
    There is still no way whatsoever to demonstrate that some other
    radically different kind of structure might not also work as well --
    after all, the only aware entities we are certain of are humans --
    although I believe animals and maybe plants share this trait with us, to
    varying degrees. 
    
    How can we say for certain that an arbitrarily complex heap of chemicals
    is not subjectively experiencing conscious awareness (perhaps even
     qualitatively similar to our own) even though it is the result of
     structures profoundly different from anything found in ourselves?

    Awareness is only knowable by BEING the entity in question.
    
    This issue of kinds of knowledge -- objective vs subjective -- is where
    science SETS ITS OWN LIMITS of observation, and it is where I find
    strictly materialistic philosophical schemes inadequate for
    understanding my own human existence.

    Please note -- I am not claiming that materialistic thinking
    is invalid; on the contrary, science would probably never have made any
    advances whatsoever unless subjectivism were discarded in the laboratory.
    Clearly research into the functioning of the brain will never go anywhere
    without the assumption that all explanations must be objective -- as
    rational relationships among physical entities.

    Philosophy, the love of wisdom, must never acquiesce to the axioms and
    definitions of any single viewpoint -- materialistic or otherwise.
    Instead, philosophy should be a tool for mutual comprehension of all
    modes of thought.

-michael

rlr@pyuxd.UUCP (Rich Rosen) (09/13/85)

>     Imagine futuristic experimenters somehow temporarily disabling various
>     brain functions and using some standard criterion to test for awareness.
>     But what objective criterion is there? Do we ask the subject if they are
>     aware?  Or do we ask afterwards whether they recall events that
>     transpired during the experiment? [ELLIS]

If (and only if) consciousness involves some non-physical component, then
this would be the only way.  But this is like saying the only way we can
find out what is wrong with a buggy program is to get it to "tell" us what
is wrong with it.  Ever read a dump, Mike?  It's not my idea of fun, but it
can be done, and is, more often than I might like.  For your claim to have
any veracity, you must assume the point that a "dump" of the brain, coupled
with proper knowledge of how to "read" it, would not generate the same type
of information that reading an operating system dump might give us.  And I
still contend that you make this assumption to reach the conclusion you want
regarding brains and minds.  Which is an ass backwards way to think.
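
The dump analogy in miniature (a sketch, all names invented): the state of
a running program can be examined from the outside, without ever asking
the program to report on itself:

    class BuggyCounter:
        """Stands in for the buggy program."""
        def __init__(self):
            self.count = 0
        def increment(self):
            self.count += 2     # the bug: steps by 2 instead of 1

    prog = BuggyCounter()
    prog.increment()

    dump = vars(prog)           # the "dump": raw state, read from outside
    print(dump)                 # {'count': 2} -- the bug is visible here
    assert dump["count"] != 1   # diagnosed without the program's testimony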

>     Regardless, let us suppose that our futuristic experimenters discover a
>     definite mental structure that is responsible for the human subjective
>     experience of awareness. Maybe it would even be implementable on digital
>     computers, maybe not. (BTW, Searle has strong arguments to the contrary..)
    
Care to elaborate on them rather than just mentioning them in passing as if
that alone lends some sort of legitimacy to them?  It would be appreciated.

>     There is still no way whatsoever to demonstrate that some other
>     radically different kind of structure might not also work as well --
>     after all, the only aware entities we are certain of are humans --
>     although I believe animals and maybe plants share this trait with us, to
>     varying degrees. 
    
There is also no reason to assume that the reason my parakeet is missing and
there are feathers in my cat's mouth is that nanoscopic aliens from the
23rd dimension entered my house, neutralized my cat with a time displacement
transfuser and digital synthesizer, disintegrated the bird with a Radio Shack
combination nuclear tambourine/microwave transmitter/CD player, and stuffed
real bird feathers in my cat's mouth.  If I have a vested interest in "proving"
that aliens from the 23rd dimension shop at Radio Shack, I might choose to
make this assumption.

>     How can we say for certain that an arbitrarily complex heap of chemicals
>     is not subjectively experiencing conscious awareness (perhaps even
>     qualitatively similar to our own) even though it is the result of
>     structures profoundly different from anything found in ourselves?

We can't.  Perhaps the rock you tripped over last week was hurt as much as you
were.  Can you describe the mechanism by which it senses and feels all this?

>     This issue of kinds of knowledge -- objective vs subjective -- is where
>     science SETS ITS OWN LIMITS of observation, and it is where I find
>     strictly materialistic philosophical schemes inadequate for
>     understanding my own human existence.

Is the limit "objective vs. subjective", or is it "documentable and verifiable
vs. non- ..."?  If your hypothetical experimenters had enough knowledge to go
into your brain and determine the ACCURACY of your subjective ideas, and found
them based not on facts but on preconceptions, what would you say then?

>     Please note -- I am not claiming that materialistic thinking
>     is invalid; on the contrary, science would probably never have made any
>     advances whatsoever unless subjectivism were discarded in the laboratory.
>     Clearly research into the functioning of the brain will never go anywhere
>     without the assumption that all explanations must be objective -- as
>     rational relationships among physical entities.

BRA-VO!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

>     Philosophy, the love of wisdom, must never acquiesce to the axioms and
>     definitions of any single viewpoint -- materialistic or otherwise.
>     Instead, philosophy should be a tool for mutual comprehension of all
>     modes of thought.

Does this mean philosophy should ignore the truth, the realities of the world,
in favor of propositions about how some might prefer to see the world?
-- 
Life is complex.  It has real and imaginary parts.
					Rich Rosen  ihnp4!pyuxd!rlr

ellis@spar.UUCP (Michael Ellis) (09/13/85)

>> 	I think you missed the catch in the question. In order to equate
>> conscious awareness to any physical mechanism(s), you'd have to be able
>> to distinguish between actual self awareness and a perfect counterfeit
>> of it. Suppose I build a computer which can act *exactly* as though it
>> is self-aware. Suppose, further, that I give you complete access to the
>> machine's internals, and complete documentation, plus a staff of experts
>> to help you understand its operation. You would soon understand quite
>> well how the machine managed to act as if it were self-aware, and might
>> even jump to the conclusion that the machine was *not* self-aware, since
>> you could follow the chain of events from stimulus to response in perfect
>> detail, and no procedure called "self-awareness" was in the chain anywhere.
>> [Kenn Barry]
>
>Why would I do that? Can you define the way in which this is a "counterfeit"
>and not the "real thing"? [Rich Rosen]

    Kenn made no suppositions whatsoever about the machine's awareness.
    Rather, he is asking "How can we tell if a machine that perfectly
    simulates human awareness actually possesses awareness?"

>>...
>Or maybe we are just "simulating it perfectly" ourselves.  Can you define the
>difference?
----
>> For all I really *know*, I am the only self-aware creature in the universe;
>> all you other folks are just mindless automatons.
>How did you know?

    Self-awareness is direct experience of self, just as pain is direct
    experience of bodily or psychic harm. 

>     Self-aware entities have feelings, incorrigible knowledge of their own
    private internal world. Such knowledge differs from objective knowledge
    in several ways -- it is directly experienced and totally certain.  In
    the same way that imaginary pain is, nonetheless, pain, so imaginary
    awareness is still awareness. 

>     I would gladly `torture' a car, for example, by running the engine far
>     beyond its proper operating level until it broke down, if that were
>     somehow to my benefit -- say, if somebody paid me.  But to torture a cat
>     would result in intense guilt and nightmares for the rest of my life --
    because I believe that cats are aware beings and `really' feel pain,
    unlike cars. 
    
    Anthropomorphism, you say? No doubt about it. 

    Subjective delusion or not, the question has been put to you, Rich:

	Can you describe what objective evidence there might be for
	conscious awareness?

    I am convinced that you cannot.

>> 	So, how could you *ever* determine, scientifically, whether my
>> machine was truly self-aware, or only simulated self-awareness?
>Again, aren't you just creating a bogus differential between "simulated"
>self-awareness and "real" self-awareness?  What is the difference?

    An entity with real self-awareness knows of its own existence.

    An entity with simulated self-awareness does not know of its own
    existence; however, the aware beings around it think it is self-aware.

-michael

rlr@pyuxd.UUCP (Rich Rosen) (09/14/85)

>>> 	I think you missed the catch in the question. In order to equate
>>> conscious awareness to any physical mechanism(s), you'd have to be able
>>> to distinguish between actual self awareness and a perfect counterfeit
>>> of it. Suppose I build a computer which can act *exactly* as though it
>>> is self-aware. Suppose, further, that I give you complete access to the
>>> machine's internals, and complete documentation, plus a staff of experts
>>> to help you understand its operation. You would soon understand quite
>>> well how the machine managed to act as if it were self-aware, and might
>>> even jump to the conclusion that the machine was *not* self-aware, since
>>> you could follow the chain of events from stimulus to response in perfect
>>> detail, and no procedure called "self-awareness" was in the chain anywhere.
>>> [Kenn Barry]

>>Why would I do that? Can you define the way in which this is a "counterfeit"
>>and not the "real thing"? [Rich Rosen]

>     Kenn made no suppositions whatsoever about the machine's awareness.
>     Rather, he is asking "How can we tell if a machine that perfectly
>     simulates human awareness actually possesses awareness?"

Define a "simulation" of awareness.

>>Or maybe we are just "simulating it perfectly" ourselves.  Can you define the
>>difference?

>     Self-awareness is direct experience of self, just as pain is direct
>     experience of bodily or psychic harm. 
>
>     Self-aware entities have feelings, incorrigible knowledge of their own
>     private internal world. Such knowledge differs from objective knowledge
>     in several ways -- it is directly experienced and totally certain.  In
>     the same way that imaginary pain is, nonetheless, pain, so imaginary
>     awareness is still awareness. 

Ever hear of phantom pains?  Where an amputee feels pain in a leg that isn't
there?  Totally certain??  As with Laura, you cannot just put the word
"knowledge" after "subjective" because you feel like it.

>     I would gladly `torture' a car, for example, by running the engine far
>     beyond its proper operating level until it broke down, if that were
>     somehow to my benefit -- say, if somebody paid me.  But to torture a cat
>     would result in intense guilt and nightmares for the rest of my life --
>     because I believe that cats are aware beings and `really' feel pain,
>     unlike cars. 
    
You can also observe the way cats react, as though aware of what is going
on around them.  Oh, didn't you know that a cat is a perfect simulation of
self-awareness?  Now, about your maltreating a car:  that is truly despicable,
Michael.  Don't you know that cars are self-aware!  What's that, they have no
self-monitoring mechanism to react to and "feel" what is happening to them?
Sorry, I guess you were right about the car.

>     Subjective delusion or not, the question has been put to you, Rich:
> 	Can you describe what objective evidence there might be for
> 	conscious awareness?

The way in which living beings of sufficient complexity react to surrounding
environments and experiences indicates that they modify their choices and
actions in a way complex enough to require the self-monitoring analytical
systems that you call conscious awareness.

>     I am convinced that you cannot.

But I just did.  I'm sure you will find problems with my attempt, because
I admit it is not complete.  But it represents a significant jumping off point.

>>> 	So, how could you *ever* determine, scientifically, whether my
>>> machine was truly self-aware, or only simulated self-awareness?

>>Again, aren't you just creating a bogus differential between "simulated"
>>self-awareness and "real" self-awareness?  What is the difference?

>     An entity with real self-awareness knows of its own existence.

Great.  Does that mean operating system interactive monitor/control systems
that know of their own existence in a running machine are self-aware?
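
For concreteness, the sort of self-monitoring in question might look like
the sketch below (Unix-only, and only a sketch; whether this thin kind of
introspection counts as self-awareness is exactly what is in dispute):

    import os, time, resource

    START = time.time()

    def self_report():
        """The process reads facts about its own existence."""
        usage = resource.getrusage(resource.RUSAGE_SELF)
        return {
            "pid":      os.getpid(),          # "I" am process such-and-such
            "uptime_s": time.time() - START,  # "I" have existed this long
            "max_rss":  usage.ru_maxrss,      # "my" memory footprint
        }

    print("I exist:", self_report())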

>     An entity with simulated self-awareness does not know of its own
>     existence; however, the aware beings around it think it is self-aware.

Think about what a simulation is, Michael.  Within the reference frame of
the simulation system in question, it is "real".  What is the reference frame
of such a simulation of self-awareness?
-- 
Anything's possible, but only a few things actually happen.
					Rich Rosen    pyuxd!rlr

ellis@spar.UUCP (Michael Ellis) (09/25/85)

>>     Imagine futuristic experimenters somehow temporarily disabling various
>>     brain functions and using some standard criterion to test for
>>     awareness.  But what objective criterion is there? Do we ask the
>>     subject if they are aware?  Or do we ask afterwards whether they recall
>>     events that transpired during the experiment? [ELLIS]
>
>If (and only if) consciousness involves some non-physical component, then
>this would be the only way.

    On the contrary -- only if awareness HAS physical manifestation will
    the approach I've mentioned work.
    
    To my knowledge, the successes to date in mapping brain structures to
    behavior functions have taken precisely this approach. 

    For example, we know which part of the brain causes language ability,
    because those who suffer the objectively perceivable speech dysfunctions
    typically also display some abnormality in a particular brain structure.
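
    The bookkeeping behind that method is just co-occurrence counting.
    A toy sketch -- the records below are fabricated purely to show the
    mechanics, and stand in for real clinical data:

        # each record: was the structure abnormal / was speech impaired?
        subjects = [
            {"lesion_in_area_X": True,  "speech_deficit": True},
            {"lesion_in_area_X": True,  "speech_deficit": True},
            {"lesion_in_area_X": False, "speech_deficit": False},
            {"lesion_in_area_X": False, "speech_deficit": True},
        ]

        # 2x2 contingency table: lesion yes/no against deficit yes/no
        table = {(l, d): 0 for l in (True, False) for d in (True, False)}
        for s in subjects:
            table[(s["lesion_in_area_X"], s["speech_deficit"])] += 1
        print(table)   # co-occurrence suggests, not proves, localization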

>But this is like saying the only way we can find out what is wrong with a
>buggy program is to get it to "tell" us what is wrong with it.  Ever read a
>dump, Mike?  It's not my idea of fun, but it can be done, and is, more often
>than I might like.  For your claim to have any veracity, you must assume the
>point that a "dump" of the brain, coupled with proper knowledge of how to
>"read" it, would not generate the same type of information that reading an
>operating system dump might give us.  

    As an assembly language hacker in the old days, I've read plenty of
    dumps -- especially those of malfunctioning programs with no source.
    My favorite way of fixing them was to fiddle with the suspected
    locations and observe the change in the outward behavior.

    Kenn Barry already addressed this issue -- even WITH a full description
    of a brain mechanism, all we can do is determine which structure
    corresponds to any given externally perceivable behavior pattern.

    This is the force behind the question "What is the external symptom of
    awareness?" If a rigorous criterion can be defined that captures this
    internal experience, then we can eventually search thru `core dumps' (or
    tamper with brains) and locate the desired structure.

    On the issue of core dumps: To the extent that high level mental states
    are implemented at the quantum level, they will be in principle
    undumpable (objectively unknowable) provided that quantum indeterminacy
    remains a permanent feature of natural order -- nature may have been so
    economical in her methods that we feel sensations by destroying their
    quantum state information.
    
    In this case, taking core dumps would qualitatively randomize the
    subject's mental state -- we could only get core dumps of a person who
    was suffering the trauma of core-dump, quite a different thing from a
    candid snapshot of a person caught within the state which we really wish
    to study.

>And I still contend that you make this assumption to reach the conclusion
>you want regarding brains and minds.  Which is an ass backwards way to
>think.

    Oh, I see -- top-down conceptualization is inferior to bottom-up --
    that's what you seem to be saying. Very few people (besides Skinnerians)
    would agree with you that objective dogmatically precedes subjective. 
     
    Especially with something so objectively elusive as awareness, the state
    of being a conscious entity. If we had no subjective mental states, I
    can hardly see our motivation in searching for their objective
    manifestations!

    Are we only external behavior? Why are we not also internal sensation?
    Why do you deny what you experience? Skinnerism may be a useful
    laboratory methodology, but is it the complete truth?

>>  Regardless, let us suppose that our futuristic experimenters discover a
>>  definite mental structure that is responsible for the human subjective
>>  experience of awareness. Maybe it would even be implementable on digital
>>  computers, maybe not. (BTW, Searle has strong arguments to the contrary..)
    
>Care to elaborate on them rather than just mentioning them in passing as if
>that alone lends some sort of legitimacy to them?  It would be appreciated.

    I'll do my best (anybody -- please flame at my misconceptions)..

    John Searle, who argues that human cognition (in particular
    intentionality and meaning) is caused by the powerful biochemical
    machinery of our brains, is the author of wonderfully cynical plain
    language attacks against the assertions by big name AI folks that
    human cognitive states are digitally simulatable.

    In particular, he attacks the notion (also expressed by Hofstadter, as
    well as in the recent `Soul' debate) that formal symbolic information
    processing is the same as what the neurophysical machinery of our brains
    does. A digital simulation may simulate our brains, but such an
    isomorphism will fail to cause any real mental experience.  His
    notorious `Chinese Room' experiment from `Minds, Brains, Programs' (too
    long to reproduce here, unless there is future interest) is quite
    entertaining.

    Searle rejects mind/body dualism, attributing equal reality to
    subjective experience and objective entities, much as surface tension is
    as real as the water molecules which cause it: "The mind-brain problem
    is no more of a problem than the digestion-stomach problem".
    Consequently, Skinnerism, which denies any reality to mental experience,
    is an  extreme form of dualism, since it not only artificially splits
    the universe, but it discards half of it.

    Excerpts from `Minds, Brains, Programs':

	The problem with a brain simulator is that it is simulating the
	wrong things about the brain. As long as it only simulates the
	formal structure of the sequence of neural firings at synapses, it
	won't have simulated what matters about the brain, namely its causal
	properties, its ability to produce intentional states.
	...
	No one would suppose that we could produce milk and sugar by running
	a computer simulation of the formal sequences in lactation and
	photosynthesis; but where the mind is concerned, many people are
	willing to believe in such a miracle, because of ... a deep and
	abiding dualism: the mind they suppose is a matter of formal
	processes, independent of specific material causes in a way that
	milk and sugar are not.

    Another from `Intentionality':

	To say that an agent is conscious of the conditions of satisfaction
	of his conscious beliefs and desires is not to say that he has to
	have second order intentional states about his first order states of
	belief and desire. If it were, we would indeed get an infinite
	regress.  Rather, the consciousness of the conditions of
	satisfaction is part of the conscious belief or desire, since the
	intentional content is internal to the states in question.

    Note that his ideas side with neither of the stereotypical {objective,
    materialist, behaviorist, rational, reductionist} nor {subjective,
    mentalist, spontaneous, intuitive, holistic} viewpoints typically
    encountered in theories of mind.
     
>>  There is still no way whatsoever to demonstrate that some other
>>  radically different kind of structure might not also work as well --
>>  after all, the only aware entities we are certain of are humans --
>>  although I believe animals and maybe plants share this trait with us, to
>>  varying degrees. 

>There is also no reason to assume that the reason my parakeet is missing and
>there are feathers in my cat's mouth is that nanoscopic aliens from the 23rd
>dimension entered my house, neutralized my cat with a time displacement
>transfuser and digital synthesizer, disintegrated the bird with a Radio
>Shack combination nuclear tambourine/microwave transmitter/CD player, and
>stuffed real bird feathers in my cat's mouth.  If I have a vested interest
>in "proving" that aliens from the 23rd dimension shop at Radio Shack, I
>might choose to make this assumption.

    You miss the point: 

	Is there something intrinsic in the physical construction of humans
	that is responsible for mind, or can something qualitatively similar
	to mind somehow be `caused' by radically different form and substance?

    I have neither belief nor vested interest in this question.
    Philosophically, however, it brings up the crucial question Laura
    brought up earlier:

        What are the sources of knowledge?

    Subjective knowledge is dubious because it is not (yet) verifiable,
    but does that mean I did not sneeze twice last Tuesday, because nobody saw
    me and all the evidence is gone?

>>  How can we say for certain that an arbitrarily complex heap of chemicals
>>  is not subjectively experiencing conscious awareness (perhaps even
>>  qualitatively similar to our own) even though it is the result of
>>  structures profoundly different from anything found in ourselves?

>We can't.  Perhaps the rock you tripped over last week was hurt as much as
>you were.

    GRAZIE TANTO!! 
    
    The limits of objective knowledge -- one must BE the entity in question.

>Can you describe the mechanism by which it senses and feels all this?

    Nope. But science forges ever onward, philosophical and psychological
    speculation leading the way.

>>  This issue of kinds of knowledge -- objective vs subjective -- is where
>>  science SETS ITS OWN LIMITS of observation, and it is where I find
>>  strictly materialistic philosophical schemes inadequate for
>>  understanding my own human existence.

>Is the limit "objective vs. subjective", or is it "documentable and
>verifiable vs. non- ..."?  If your hypothetical experimenters had enough
>knowledge to go into your brain and determine the ACCURACY of your
>subjective ideas, and found them based not on facts but on preconceptions,
>what would you say then?

    Preconceptions of what? I do not claim that interpretations of
    subjective ideas are necessarily true -- obviously they are often not in
    accord with objectively known facts.

    Besides, it's not the ACCURACY of subjective experiences that makes them
    real, it's the fact that they were experienced in the first place:

    Example 1: Dennett's nefarious neurosurgeon implants something in my
    	       brain so that whenever he pushes a button, I feel a pain
	       in my toe. The button is pushed:
    
             (TRUE)  I feel toe-like pains.
	     (FALSE) My toe requires medical care.

    Example 2: I believe I secretly control the world using mental telepathy
    	        that implants my wishes into frogs, but I tell nobody.
    	       
             (TRUE)  I have weird secret thoughts.
	     (FALSE) I control the world through frogs.

    The notion of objective vs subjective is examined by Rorty, who, in
    "Philosophy & the Mirror of Nature", imagines an alien planet ("the
    Antipodes") populated by people who do not have minds, although they do
    have complex brains and vaguely respond like humans. They use words like
    `I' for convenience, but lack words such as `feel' or `awareness' that
    describe direct inner mental experiences. 
    
    The very fact that strict behaviorists deny mental states only magnifies
    the issue. We can imagine Skinnerian robots exhibiting complex functions
    without the need for internal states. This secondary phantom world does
    not seem to be logically necessary. But it is not merely an artifice of
    our culture -- every culture has evolved similar notions of an internal
    noumenal world.

>>   Philosophy, the love of wisdom, must never acquiesce to the axioms and
>>   definitions of any single viewpoint -- materialistic or otherwise.
>>   Instead, philosophy should be a tool for mutual comprehension of all
>>   modes of thought.

>Does this mean philosophy should ignore the truth, the realities of the 
>world, in favor of propositions about how some might prefer to see the world?

	It is commonsense and not the ideology of intellectuals that
	determines whether or not something exists, and, if it exists,
	what properties it has -- Paul Feyerabend

    Philosophy, being more flexible than any particular methodology
    (including science), does not decide `the realities of the world',
    at least not in the sense of `true' or `false'.  Philosophical
    speculation transcends any partisan view of the world, and is most
    valuable for investigating the nature of viewpoints in general.

    By nonjudgementally abstracting the definitions, axioms, basic notions,
    and arguments of any given point of view, one might determine if an
    arbitrary statement is true within that system, or how two systems might
    conflict with each other.

    To the extent that the system under study has a rigorous basis, such
    questions can be answered mechanically by logic.  More often, the
    important questions are based on complex issues like semantics, and that
    is one reason philosophy is not reducible to mechanical logic.
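
    In the rigorous case, "answered mechanically" can be taken quite
    literally.  A brute-force sketch for propositional systems (anything
    richer -- semantics included -- escapes it):

        from itertools import product

        def entails(axioms, statement, variables):
            """True iff statement holds in every model of the axioms;
            axioms and statement are predicates over assignment dicts."""
            for values in product([False, True], repeat=len(variables)):
                v = dict(zip(variables, values))
                if all(ax(v) for ax in axioms) and not statement(v):
                    return False        # found a countermodel
            return True

        # example: from "p" and "p implies q", "q" follows
        axioms = [lambda v: v["p"],
                  lambda v: (not v["p"]) or v["q"]]
        print(entails(axioms, lambda v: v["q"], ["p", "q"]))   # True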

    As a means of gaining insight into the realities of alien worldviews,
    philosophy can open one's mind to the true breadth of human existence.

    "Others are so bright and intelligent"

-michael

rlr@pyuxd.UUCP (Rich Rosen) (09/28/85)

>>And I still contend that you make this assumption to reach the conclusion
>>you want regarding brains and minds.  Which is an ass backwards way to
>>think.

>     Oh, I see -- top-down conceptualization is inferior to bottom-up --
>     that's what you seem to be saying. Very few people (besides Skinnerians)
>     would agree with you that objective dogmatically precedes subjective. 

Dennett comes to the same conclusion that you do:  top-down and bottom-up
are equally viable forms of analysis, he says, but he feels that top-down
will make faster progress.  The problem is that if the overall model chosen
at the top level is flawed, everything that goes down along the way risks the
possibility of contamination.
     
>     Are we only external behavior? Why are we not also internal sensation?
>     Why do you deny what you experience? Skinnerism may be a useful
>     laboratory methodology, but is it the complete truth?

Or is the wishful mysticalism the complete truth?  Or something else?

>     John Searle, who argues that human cognition (in particular
>     intentionality and meaning) is caused by the powerful biochemical
>     machinery of our brains, is the author of wonderfully cynical plain
>     language attacks against the assertions by big name AI folks that
>     human cognitive states are digitally simulatable.
> 
>     In particular, he attacks the notion (also expressed by Hofstadter, as
>     well as in the recent `Soul' debate) that formal symbolic information
>     processing is the same as what the neurophysical machinery of our brains
>     does. A digital simulation may simulate our brains, but such an
>     isomorphism will fail to cause any real mental experience.  His
>     notorious `Chinese Room' experiment from `Minds, Brains, Programs' (too
>     long to reproduce here, unless there is future interest) is quite
>     entertaining.

Is the reason he rejects the notion that the brain could ever be successfully
"duplicated" (perhaps not literally) due to some hard reason that he can put
his finger on, or just some anthropocentric notion that we people just MUST
be different?

>     Searle rejects mind/body dualism, attributing equal reality to
>     subjective experience and objective entities, much as surface tension is
>     as real as the water molecules which cause it: "The mind-brain problem
>     is no more of a problem than the digestion-stomach problem".
>     Consequently, Skinnerism, which denies any reality to mental experience,
>     is an  extreme form of dualism, since it not only artificially splits
>     the universe, but it discards half of it.

Maybe a half that isn't there?

>     Excerpts from `Minds, Brains, Programs':
> 	The problem with a brain simulator is that it is simulating the
> 	wrong things about the brain.

There doesn't even exist a brain simulator, and already it's "simulating the
wrong things"!

> 	As long as it only simulates the
> 	formal structure of the sequence of neural firings at synapses, it
> 	won't have simulated what matters about the brain, namely its causal
> 	properties, its ability to produce intentional states.  ...

How could that be?  Why isn't that property a part of the "sequence of neural
firings of synapses"?

> 	No one would suppose that we could produce milk and sugar by running
> 	a computer simulation of the formal sequences in lactation and
> 	photosynthesis; but where the mind is concerned, many people are
> 	willing to believe in such a miracle, because of ... a deep and
> 	abiding dualism: the mind they suppose is a matter of specific
> 	material causes in a way that milk and sugar are not.

Why is this a valid analogy?  If the right materials are used and the right
processes are achieved, those things can certainly be produced.  Why (again)
the brain/mind as exception to the rule?

>     Another from `Intentionality':
> 	To say that an agent is conscious of the conditions of satisfaction
> 	of his conscious beliefs and desires is not to say that he has to
> 	have second order intentional states about his first order states of
> 	belief and desire. If it were, we would indeed get an infinite
> 	regress.  Rather, the consciousness of the conditions of
> 	satisfaction is part of the conscious belief or desire, since the
> 	intentional content is internal to the states in question.

So, because this would represent an "infinite regress" (by this
interpretation), it simply cannot be?

>>>  There is still no way whatsoever to demonstrate that some other
>>>  radically different kind of structure might not also work as well --
>>>  after all, the only aware entities we are certain of are humans --
>>>  although I believe animals and maybe plants share this trait with us, to
>>>  varying degrees. 

>>There is also no reason to assume that the reason my parakeet is missing and
>>there are feathers in my cat's mouth is that nanoscopic aliens from the 23rd
>>dimension entered my house, neutralized my cat with a time displacement
>>transfuser and digital synthesizer, disintegrated the bird with a Radio
>>Shack combination nuclear tambourine/microwave transmitter/CD player, and
>>stuffed real bird feathers in my cat's mouth.  If I have a vested interest
>>in "proving" that aliens from the 23rd dimension shop at Radio Shack, I
>>might choose to make this assumption.

>     You miss the point: 
> 	Is there something intrinsic in the physical construction of humans
> 	that is responsible for mind, or can something qualitatively similar
> 	to mind somehow be `caused' by radically different form and substance?

Again, work backwards from the anthropocentric conclusion that "we are
different", and then justify everything that "precedes" (postcedes?) it
logically.  It is you who has missed the point.

>     I have neither belief nor vested interest in this question.

This is why you write 1000 lines a week on the subject.

>     Subjective knowledge is dubious because it is not (yet) verifiable,
>     but does that mean I did not sneeze twice last Tuesday, because nobody saw
>     me and all the evidence is gone?

The likelihood of your having sneezed is both reasonable (sneezing is a
common enough phenomenon) and practically irrelevant to the larger scope of
things (if you sneeze, does it radically change the universe---not its course
of destiny,  a la "for want of a nail...", but its whole nature?).  On the
other hand, those phenomena we have been talking about are both "unreasonable"
(from the perspective of evidence) and very relevant to the whole nature of
things.  It is for that reason that it is very different indeed.

>>>  How can we say for certain that an arbitrarily complex heap of chemicals
>>>  is not subjectively experiencing conscious awareness (perhaps even
>>>  qualitatively similar to our own) even though the result of structures
>>>  profoundly different from anything found in ourselves?

>>We can't.  Perhaps the rock you tripped over last week was hurt as much as
>>you were.

>     GRAZIE TANTO!! 
    
You're welcome, kimosabe.

>     The limits of objective knowledge -- one must BE the entity in question.

I take it you took this point seriously.  OK.  Given that we find no physical
(observable) mechanism within rocks for performing brain functions, if indeed
they do, then it would be a property inherent in all things:  rocks, plants,
animals (including humans), shoes, terminals, printouts, etc.  If you accept
this, fine.  Then go to Radio Shack and look for aliens from the 23rd dimension.

>>Can you describe the mechanism by which it senses and feels all this?

>     Nope. But science forges ever onward, philosophical and psychological
>     speculation leading the way.

These sorts of scientists also hang out at Radio Shack for the aforementioned
purpose.

>>>  This issue of kinds of knowledge -- objective vs subjective -- is where
>>>  science SETS ITS OWN LIMITS of observation, and it is where I find
>>>  strictly materialistic philosophical schemes inadequate for
>>>  understanding my own human existence.

>>Is the limit "objective vs. subjective", or is it "documentable and
>>verifiable vs. non- ..."?  If your hypothetical experimenters had enough
>>knowledge to go into your brain and determine the ACCURACY of your
>>subjective ideas, and found them based not on facts but on preconceptions,
>>what would you say then?

>     Preconceptions of what? I do not claim that the interpretation of
>     subjective ideas is necessarily true -- obviously they are often not in
>     accord with objectively known facts.
>     Besides, it's not the ACCURACY of subjective experiences that makes them
>     real, it's the fact that they were experienced in the first place:
>     Example 1: Dennett's nefarious neurosurgeon implants something in my
>     	       brain so that whenever he pushes a button, I feel a pain
> 	       in my toe. The button is pushed:
>              (TRUE)  I feel toe-like pains.
>  	       (FALSE) My toe requires medical care.

The difference is that you claim some correlation between such experiences
that cannot be verified and the world outside the brain/body.  Again,
we've been through the phantom pain bit, and this is the very reason why
subjective "knowledge" (quotes most necessary) is not acceptable when it
comes to claims about the rest of the world.  Also, you could be Laura being
infuriating again.

>     Example 2: I believe I secretly control the world using mental telepathy
>     	        that implants my wishes into frogs, but I tell nobody.
    	       
You just did.  The jig is up.  Be a solipsist, change the world.

>     The notion of objective vs subjective is examined by Rorty, who, in
>     "Philosophy & the Mirror of Nature", imagines an alien planet ("the
>     Antipodes") populated by people who do not have minds, although they do
>     have complex brains and vaguely respond like humans. They use words like
>     `I' for convenience, but lack words such as `feel' or `awareness' that
>     describe direct inner mental experiences. 
    
Hmmm, but they are (of course) "different" from humans?

>     The very fact that strict behaviorists deny mental states only magnifies
>     the issue. We can imagine Skinnerian robots exhibiting complex functions
>     without the need for internal states. This secondary phantom world does
>     not seem to be logically necessary. But it is not merely an artifice of
>     our culture -- every culture has evolved similar notions of an internal
>     noumenal world.

So?  Every culture has had religions.  Every culture (up until recently)
has held slaves and made wars.  So?

>>>   Philosophy, the love of wisdom, must never acquiesce to the axioms and
>>>   definitions of any single viewpoint -- materialistic or otherwise.
>>>   Instead, philosophy should be a tool for mutual comprehension of all
>>>   modes of thought.

>>Does this mean philosophy should ignore the truth, the realities of the 
>>world, in favor of propositions about how some might prefer to see the world?

> 	It is commonsense and not the ideology of intellectuals that
> 	determines whether or not something exists, and, if it exists,
> 	what properties it has -- Paul Feyerabend

"Well, consider the very roots of our ability to discern truth.  Above all (or
 perhaps I should say 'underneath all'), *common sense* is what we depend on
 ---that crazily elusive, ubiquitous faculty we all have, to some degree or
 other. But not to a degree such as 'Bachelor's' or 'Ph.D.'. No, unfortunately,
 universities do not offer degrees in Common Sense. This is, in a way, a pity.

"Given that common sense is [SUPPOSEDLY] common, why have a department devoted
 to it? My answer would be quite simple:  In our lives we are continually
 encountering strange new situations in which we have to figure out how to
 apply what we already know. It is not enough to have common sense about known
 situations; we need also to develop the art of extending common sense to apply
 to [NEW] situations. ... Common sense, once it starts to roll,
 gathers more common sense, like a rolling snowball gathering ever more snow.
 Or, to switch metaphors, if we apply common sense to itself over and over
 again, we wind up building a skyscraper. The ground floor of this skyscraper
 is the ordinary common sense we all have, and the rules for building new
 floors are implicit in the ground floor itself. ...

 Pretty soon, even though it has all been built up from common ingredients, the
 structure of this extended common sense is quite arcane and elusive. We might
 call this quality represented by the upper floors of the skyscraper "rare
 sense", but it is usually called [ARE YOU READY, ELLIS?] "science". And some
 of the ideas and discoveries that have come out of this ... ability defy the
 ground floor totally. The ideas of relativity and quantum mechanics are
 anything but commonsensical, in the ground floor sense of the term!  They are
 outcomes of common sense self-applied.  [DOES THIS MEAN, BY THE QUOTE ELLIS
 USES ABOVE THAT HE WOULD THROW OUT THE ANTI-COMMONSENSICAL QM, THUS
 EFFICIENTLY DISPOSING OF ALL HIS LITTLE THEORIES?]"
		---excerpted from "World Views in Collision:  The Skeptical
			Inquirer vs. the National Enquirer" by D. Hofstadter,
			SciAm 2/82, reproduced in Metamagical Themas

>     Philosophy, being more flexible than any particular methodology
> (including science), does not decide `the realities of the world',
>     at least not in the sense of `true' or `false'.  Philosophical
>     speculation transcends any partisan view of the world, and is most
>     valuable for investigating the nature of viewpoints in general.

Thank you for answering my question by not answering it.  Philosophy is a
wide ranging discipline, in which some people are interested in finding truth,
but where others couldn't care less about truth and are more interested in
rationalizing opinions by creating new axioms from scratch to make the
conclusions of those opinions valid.

>     By nonjudgementally abstracting the definitions, axioms, basic notions,
>     and arguments of any given point of view, one might determine if an
>     arbitrary statement is true within that system, or how two systems might
>     conflict with each other.

And by examining whether the axioms of a given system represent accurately
the real world, we can find out the validity (or non-validity) of that
system.

>     To the extent that the system under study has a rigorous basis, such
>     questions can be answered mechanically by logic.  More often, the
>     important questions are based on complex issues like semantics, and that
>     is one reason philosophy is not reducible to mechanical logic.

In fact, philosophers (it seems) sometimes build their own definitions of
semantics and language, so that they can, in a meta- sense, manipulate the
very "veracity" of their own axioms.
-- 
Popular consensus says that reality is based on popular consensus.
						Rich Rosen   pyuxd!rlr

tmoody@sjuvax.UUCP (T. Moody) (10/06/85)

>>     Are we only external behavior? Why are we not also internal sensation?
>>     Why do you deny what you experience? Skinnerism may be a useful
>>     laboratory methodology, but is it the complete truth?[Ellis]
>
>Or is the wishful mysticism the complete truth?  Or something else?[Rosen]

This is hardly an answer to the question.

>>     John Searle, who argues that human cognition (in particular
>>     intentionality and meaning) is caused by the powerful biochemical
>>     machinery of our brains, is the author of wonderfully cynical plain
>>     language attacks against the assertions by big name AI folks that
>>     human cognitive states are digitally simulatable. [Ellis]
>> 
>> 	As long as it only simulates the
>> 	formal structure of the sequence of neural firings at synapses, it
>> 	won't have simulated what matters about the brain, namely its causal
>> 	properties, its ability to produce intentional states. [Searle, quoted
        by Ellis]
>
>How could that be?  Why isn't that property a part of the "sequence of neural
>firings of synapses"? [Rosen]

Read it again.  Searle is arguing that the "causal properties" of the brain
are not obtained by merely simulating the FORMAL STRUCTURE of the neural
activity.  By "formal structure", Searle means something quite precise.
The formal structure of the brain is the Turing machine algorithm that
it is instantiating.  This formal structure is what functionalists claim
is the essence of mind; to be in a mental state is to instantiate a
Turing program-type.  If you miss the "formal structure" part, that would
lead you to a naive version of the Identity Thesis: mental states are
just brain states.  This thesis entails, of course, that only brains
(and not computers, for example) could have mental states.  If you
believe that other systems could have mental states, then you are
asserting that these other systems have something in common with brains,
namely their formal structure.  Searle argues against both functionalism
and the identity thesis (and dualism, as Ellis correctly pointed out).
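
To make "instantiating the same formal structure" concrete, here is a
minimal sketch in Python (machine, names, and encoding are mine, purely
for illustration -- not Searle's or the functionalists' own formalism).
The transition table is the formal object; the two realizations below
differ in their "substrate", so to speak, yet compute identically.
Searle's claim is precisely that sharing such a table is not sufficient
for having mental states.

    # Formal structure: a toy two-state machine over inputs 0/1.
    TABLE = {("A", 0): "A", ("A", 1): "B",
             ("B", 0): "B", ("B", 1): "A"}

    def run_as_table(inputs, state="A"):
        # Realization 1: direct table lookup.
        for i in inputs:
            state = TABLE[(state, i)]
        return state

    def run_as_parity(inputs, state=0):
        # Realization 2: the same formal structure encoded as parity
        # arithmetic (A = 0, B = 1) -- no table, a different substrate.
        for i in inputs:
            state = (state + i) % 2
        return "AB"[state]

    assert run_as_table([1, 0, 1, 1]) == run_as_parity([1, 0, 1, 1])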

>> 	No one would suppose that we could produce milk and sugar by running
>> 	a computer simulation of the formal sequences in lactation and
>> 	photosynthesis; but where the mind is concerned, many people are
>> 	willing to believe in such a miracle, because of ... a deep and
>> 	abiding dualism: the mind they suppose is a matter of specific
>> 	material causes in a way that milk and sugar are not.
>
>Why is this a valid analogy?  If the right materials are used and the right
>processes are achieved, those things can certainly be produced.  Why (again)
>the brain/mind as exception to the rule? [Rosen]

The whole point of functionalism is that mental states are substrate-
independent; that they can be simulated -- AND HENCE INSTANTIATED --
without respect to the physical ingredients of the system.  There
are not, according to functionalists, supposed to be any "right
materials" for minds.

>>     Another from `Intentionality':
>> 	To say that an agent is conscious of the conditions of satisfaction
>> 	of his conscious beliefs and desires is not to say that he has to
>> 	have second order intentional states about his first order states of
>> 	belief and desire. If it were, we would indeed get an infinite
>> 	regress.  Rather, the consciousness of the conditions of
>> 	satisfaction is part of the conscious belief or desire, since the
>> 	intentional content is internal to the states in question. [Searle,
        quoted by Ellis]

>
>So, because this would represent an "infinite regress" (by this
>interpretation), it simply cannot be? [Rosen]

It would apparently require an infinite nesting of discrete states in
a physical system, which is certainly unlikely.

>>     The very fact that strict behaviorists deny mental states only magnifies
>>     the issue. We can imagine Skinnerian robots exhibiting complex functions
>>     without the need for internal states. This secondary phantom world does
>>     not seem to be logically necessary. But it is not merely an artifice of
>>     our culture -- every culture has evolved similar notions of an internal
>>     noumenal world. [Ellis, I think]

>So?  Every culture has had religions.  Every culture (up until recently)
>has held slaves and made wars.  So?

What is the point of this rejoinder?  Do you deny that there is such a
thing as internal subjective experience?

>"Given that common sense is [SUPPOSEDLY] common, why have a department devoted
> to it? My answer would be quite simple:  In our lives we are continually
> encountering strange new situations in which we have to figure out how to
> apply what we already know. It is not enough to have common sense about known
> situations; we need also to develop the art of extending common sense to apply
> to [NEW] situations. ... Common sense, once it starts to roll,
> gathers more common sense, like a rolling snowball gathering ever more snow.
> Or, to switch metaphors, if we apply common sense to itself over and over
> again, we wind up building a skyscraper. The ground floor of this skyscraper
> is the ordinary common sense we all have, and the rules for building new
> floors are implicit in the ground floor itself. ...
>
> Pretty soon, even though it has all been built up from common ingredients, the
> structure of this extended common sense is quite arcane and elusive. We might
> call this quality represented by the upper floors of the skyscraper "rare
> sense", but it is usually called [ARE YOU READY, ELLIS?] "science". And some
> of the ideas and discoveries that have come out of this ... ability defy the
> ground floor totally. The ideas of relativity and quantum mechanics are
> anything but commonsensical, in the ground floor sense of the term!  They are
> outcomes of common sense self-applied.  [DOES THIS MEAN, BY THE QUOTE ELLIS
> USES ABOVE THAT HE WOULD THROW OUT THE ANTI-COMMONSENSICAL QM, THUS
> EFFICIENTLY DISPOSING OF ALL HIS LITTLE THEORIES?]"
>		---excerpted from "World Views in Collision:  The Skeptical
>			Inquirer vs. the National Enquirer" by D. Hofstadter,
>			SciAm 2/82, reproduced in Metamagical Themas
                        [quoted by Rosen]

Don't forget that common sense is completely grounded in subjectivity -- 
the way things *seem* to creatures such as ourselves.  "Common sense" is
a term that we use to dignify our most entrenched prejudices about reality.
Science indeed builds upon this most subjective foundation.  The problems
emerge when the results at the top of the skyscraper *contradict* the
"given" of common sense upon which the whole structure is built.  QM is a
fine example of this, and so is Russell's Paradox.

>In fact, philosophers (it seems) sometimes build their own definitions of
>semantics and language, so that they can, in a meta- sense, manipulate the
>very "veracity" of their own axioms. [Rosen]

As I've said before, the meanings of philosophically interesting terms, such
as "free will" are not "given" in any univocal source.  These terms are used
in various contexts to talk about subject matters that are of interest to
people.  The philosopher's mission, should he decide to accept it, is to
*discover* the definition(s) most consistent with the current state of
knowledge.  You have been challenged many times to show (you know,
give an argument) that yours is the only legitimate definition of
"free will."  You've yet to do so.

Todd Moody       {allegra|astrovax|bpa|burdvax}!sjuvax!tmoody
Philosophy Department
St. Joseph's U.
Philadelphia, PA   19131

rlr@pyuxd.UUCP (Rich Rosen) (10/08/85)

>>>     Are we only external behavior? Why are we not also internal sensation?
>>>     Why do you deny what you experience? Skinnerism may be a useful
>>>     laboratory methodology, but is it the complete truth?[Ellis]

>>Or is the wishful mysticism the complete truth?  Or something else?[Rosen]

> This is hardly an answer to the question.  [MOODY]

Nor is your statement.  But I was unaware that such rhetorical questions
(as mine are, too) required answers.

>>> 	As long as it only simulates the
>>> 	formal structure of the sequence of neural firings at synapses, it
>>> 	won't have simulated what matters about the brain, namely its causal
>>> 	properties, its ability to produce intentional states. [Searle, quoted
>>>     by Ellis]

>>How could that be?  Why isn't that property a part of the "sequence of neural
>>firings of synapses"? [Rosen]

> Read it again.  Searle is arguing that the "causal properties" of the brain
> are not obtained by merely simulating the FORMAL STRUCTURE of the neural
> activity.  By "formal structure", Searle means something quite precise.
> The formal structure of the brain is the Turing machine algorithm that
> it is instantiating.  This formal structure is what functionalists claim
> is the essence of mind; to be in a mental state is to instantiate a
> Turing program-type.  If you miss the "formal structure" part, that would
> lead you to a naive version of the Identity Thesis: mental states are
> just brain states.  This thesis entails, of course, that only brains
> (and not computers, for example) could have mental states.  If you
> believe that other systems could have mental states, then you are
> asserting that these other systems have something in common with brains,
> namely their formal structure.  Searle argues against both functionalism
> and the identity thesis (and dualism, as Ellis correctly pointed out).

It sounds mighty presumptive to assert that computers could not be designed
to accurately simulate all of this.  Why not?  Why do some people INSIST
with such vigor that there MUST be something about the brain that IS ipso
facto different, one that can NEVER be reproduced?  It's working backwards from
a conclusion, it's wishful thinking, it's totally unfounded in fact, it's
boldly and blatantly anthropocentric.

>>> 	No one would suppose that we could produce milk and sugar by running
>>> 	a computer simulation of the formal sequences in lactation and
>>> 	photosynthesis; but where the mind is concerned, many people are
>>> 	willing to believe in such a miracle, because of ... a deep and
>>> 	abiding dualism: the mind they suppose is a matter of specific
>>> 	material causes in a way that milk and sugar are not.

>>Why is this a valid analogy?  If the right materials are used and the right
>>processes are achieved, those things can certainly be produced.  Why (again)
>>the brain/mind as exception to the rule? [Rosen]

> The whole point of functionalism is that mental states are substrate-
> independent; that they can be simulated -- AND HENCE INSTANTIATED --
> without respect to the physical ingredients of the system.  There
> are not, according to functionalists, supposed to be any "right
> materials" for minds.

First, I see no reason why others would put such a restriction on functionalism
as this.  Just as you would find it mighty hard to make a computer out of
swiss cheese (though certain companies that shall rename mainless make a good
go of it), certain materials are applicable to different tasks.  The question
is along the lines of "what would it mean to 'simulate' milk?"  If certain
actions occur owing to the material, can those actions be simulated?  I
say "why not?"

>>> 	To say that an agent is conscious of the conditions of satisfaction
>>> 	of his conscious beliefs and desires is not to say that he has to
>>> 	have second order intentional states about his first order states of
>>> 	belief and desire. If it were, we would indeed get an infinite
>>> 	regress.  Rather, the consciousness of the conditions of
>>> 	satisfaction is part of the conscious belief or desire, since the
>>> 	intentional content is internal to the states in question. [Searle,
>>>     quoted by Ellis]

>>So, because this would represent an "infinite regress" (by this
>>interpretation), it simply cannot be? [Rosen]

> It would apparently require an infinite nesting of discrete states in
> a physical system, which is certainly unlikely.

Or recursion and self-referentiality.

>>>   The very fact that strict behaviorists deny mental states only magnifies
>>>   the issue. We can imagine Skinnerian robots exhibiting complex functions
>>>   without the need for internal states. This secondary phantom world does
>>>   not seem to be logically necessary. But it is not merely an artifice of
>>>   our culture -- every culture has evolved similar notions of an internal
>>>   noumenal world. [Ellis, I think]

>>So?  Every culture has had religions.  Every culture (up until recently)
>>has held slaves and made wars.  So?

> What is the point of this rejoinder?  Do you deny that there is such a
> thing as internal subjective experience?

The point is rather clear, I would think.  Just because every culture has
evolved similar notions does not make those notions valid.  (This sounds a
lot like arguing with those who insist that conformity is "intrinsic" because
every culture has valued it.)

>>"Given that common sense is [SUPPOSEDLY] common, why have a department devoted
>>to it? My answer would be quite simple:  In our lives we are continually
>>encountering strange new situations in which we have to figure out how to
>>apply what we already know. It is not enough to have common sense about known
>>situations; we need also to develop the art of extending common sense to apply
>>to [NEW] situations. ... Common sense, once it starts to roll,
>>gathers more common sense, like a rolling snowball gathering ever more snow.
>>Or, to switch metaphors, if we apply common sense to itself over and over
>>again, we wind up building a skyscraper. The ground floor of this skyscraper
>>is the ordinary common sense we all have, and the rules for building new
>>floors are implicit in the ground floor itself. ...
>>Pretty soon, even though it has all been built up from common ingredients, the
>>structure of this extended common sense is quite arcane and elusive. We might
>>call this quality represented by the upper floors of the skyscraper "rare
>>sense", but it is usually called [ARE YOU READY, ELLIS?] "science". And some
>>of the ideas and discoveries that have come out of this ... ability defy the
>>ground floor totally. The ideas of relativity and quantum mechanics are
>>anything but commonsensical, in the ground floor sense of the term!  They are
>>outcomes of common sense self-applied.  [DOES THIS MEAN, BY THE QUOTE ELLIS
>>USES ABOVE THAT HE WOULD THROW OUT THE ANTI-COMMONSENSICAL QM, THUS
>>EFFICIENTLY DISPOSING OF ALL HIS LITTLE THEORIES?]"
>>		---excerpted from "World Views in Collision:  The Skeptical
>>			Inquirer vs. the National Enquirer" by D. Hofstadter,
>>			SciAm 2/82, reproduced in Metamagical Themas
                        [quoted by Rosen]

> Don't forget that common sense is completely grounded in subjectivity -- 
> the way things *seem* to creatures such as ourselves.  "Common sense" is
> a term that we use to dignify our most entrenched prejudices about reality.
> Science indeed builds upon this most subjective foundation.  The problems
> emerge when the results at the top of the skyscraper *contradict* the
> "given" of common sense upon which the whole structure is built.  QM is a
> fine example of this, and so is Russell's Paradox.

Oh, really?  Is common sense really a series of prejudices about reality?
Or is it an examination of the way things are, how certain things behave, and how
other things may get in the way of accurate observation, and what should be
done about this (coupled with elementary logic)?

>>In fact, philosophers (it seems) sometimes build their own definitions of
>>semantics and language, so that they can, in a meta- sense, manipulate the
>>very "veracity" of their own axioms. [Rosen]

> As I've said before, the meanings of philosophically interesting terms, such
> as "free will" are not "given" in any univocal source.  These terms are used
> in various contexts to talk about subject matters that are of interest to
> people.  The philosopher's mission, should he decide to accept it, is to
> *discover* the definition(s) most consistent with the current state of
> knowledge.  You have been challenged many times to show (you know,
> give an argument) that yours is the only legitimate definition of
> "free will."  You've yet to do so.

Mine.  The one *I* made up.  Yes, mine.  Rather than the ones that people in
this newsgroup (along with their choice philosophers) have made up in
opposition to what people have meant by the term in order to "get" a result
to "exist".  Do you remember when Paul Torek asked me to "prove" that there
was ANYONE, anyone at all in the world, who held "my" definition of free will
and understood the implications I claimed that definition had?  How ironic
that the first person (actually, the second) I spoke to said (without even
being asked directly) "but isn't free will the notion that your choices are
not determined by your chemistry, thus implying a soul of some sort?"  And
so did most other people I asked who cared to offer any opinion at all on
the subject.  No, popular consensus does not determine the facts about
physical reality, but popular consensus DOES most certainly determine the
meanings of words in language.  Are you claiming that "free will" is a
"technical" term, in the way that scientists might designate a technical
definition for the word "charm" (as related to quarks)?  Did the scientists
in so doing ALTER the existing popular definition of the word "charm"?  Take
the time to recognize that that is EXACTLY what you "philosophers" are trying
to do.  Language belongs to the people, not to select groups in ivory towers
who change meanings of words at their whim without regard for how words are
used by PEOPLE.

It's a good thing I am not where my copy of Annotated Alice is, otherwise
I'd waste time copying yet another set of interesting comments about the
Humpty Dumpty tale that will be roundly ignored a second time.
-- 
Anything's possible, but only a few things actually happen.
					Rich Rosen    pyuxd!rlr

tmoody@sjuvax.UUCP (T. Moody) (10/12/85)

In article <1858@pyuxd.UUCP> rlr@pyuxd.UUCP (Rich Rosen) writes:

>>>> 	As long as it only simulates the
>>>> 	formal structure of the sequence of neural firings at synapses, it
>>>> 	won't have simulated what matters about the brain, namely its causal
>>>> 	properties, its ability to produce intentional states. [Searle, quoted
>>>> 	by Ellis]

>>>How could that be?  Why isn't that property a part of the "sequence of neural
>>>firings of synapses"? [Rosen]
>
>> Read it again.  Searle is arguing that the "causal properties" of the brain
>> are not obtained by merely simulating the FORMAL STRUCTURE of the neural
>> activity.  By "formal structure", Searle means something quite precise.
>> The formal structure of the brain is the Turing machine algorithm that
>> it is instantiating.  This formal structure is what functionalists claim
>> is the essence of mind; to be in a mental state is to instantiate a
>> Turing program-type.  If you miss the "formal structure" part, that would
>> lead you to a naive version of the Identity Thesis: mental states are
>> just brain states.  This thesis entails, of course, that only brains
>> (and not computers, for example) could have mental states.  If you
>> believe that other systems could have mental states, then you are
>> asserting that these other systems have something in common with brains,
>> namely their formal structure.  Searle argues against both functionalism
>> and the identity thesis (and dualism, as Ellis correctly pointed out).
>
>It sounds mighty presumptive to assert that computers could not be designed
>to accurately simulate all of this.  Why not?  Why do some people INSIST
>with such vigor that there MUST be something about the brain that IS ipso
>facto different, one that can NEVER be reproduced?  It's working backwards from
>a conclusion, it's wishful thinking, it's totally unfounded in fact, it's
>boldly and blatantly anthropocentric.

First of all, Searle doesn't insist that mental states cannot be simulated;
nor did I claim that he did.  What Searle claims is that simulating the
formal structure of a brain is not a sufficient condition for a thing's
being a mind.  And he doesn't just claim it; he *argues* for the con-
clusion.  Clearly, you do not know Searle's work; that does not stop
you from ranting about his motivations.  Let me just explain this much
about Searle's argument: he distinguishes the brain's "causal powers"
from its "formal structure".  The causal powers of an entity include its
complete biological repertoire, contingent upon its status as an evolved
entity.  Searle's point is that mental states depend upon these causal
powers and are thus not brought about by a mere formal structure
simulation.  Searle has not claimed that there is something about the
brain that can never be reproduced.  He has only argued that by
simulating its formal structure you haven't done enough to get
what could be called a mind.

>>>> 	No one would suppose that we could produce milk and sugar by running
>>>> 	a computer simulation of the formal sequences in lactation and
>>>> 	photosynthesis; but where the mind is concerned, many people are
>>>> 	willing to believe in such a miracle, because of ... a deep and
>>>> 	abiding dualism: the mind they suppose is a matter of specific
>>>> 	material causes in a way that milk and sugar are not.

>> The whole point of functionalism is that mental states are substrate-
>> independent; that they can be simulated -- AND HENCE INSTANTIATED --
>> without respect to the physical ingredients of the system.  There
>> are not, according to functionalists, supposed to be any "right
>> materials" for minds. [Moody]
>
>First, I see no reason why others would put such a restriction on functionalism
>as this.  Just as you would find it mighty hard to make a computer out of
>swiss cheese (though certain companies that shall rename mainless make a good
>go of it), certain materials are applicable to different tasks.  The question
>is along the lines of "what would it mean to 'simulate' milk?"  If certain
>actions occur owing to the material, can those actions be simulated?  I
>say "why not?" [Rosen]

This is not about engineering.  In principle, you can make a computer out
of swiss cheese.  In fact, its cholesterol content is probably comparable
to that of the human brain (don't worry, that's just a bit of irrelevant
trivia).  I think we understand each other on this point.

>>>> 	To say that an agent is conscious of the conditions of satisfaction
>>>> 	of his conscious beliefs and desires is not to say that he has to
>>>> 	have second order intentional states about his first order states of
>>>> 	belief and desire. If it were, we would indeed get an infinite
>>>> 	regress.  Rather, the consciousness of the conditions of
>>>> 	satisfaction is part of the conscious belief or desire, since the
>>>> 	intentional content is internal to the states in question. [Searle,
>>>>     quoted by Ellis]
>
>>>So, because this would represent an "infinite regress" (by this
>>>interpretation), it simply cannot be? [Rosen]
>
>> It would apparently require an infinite nesting of discrete states in
>> a physical system, which is certainly unlikely.
>
>Or recursion and self-referentiality.

If you have an infinite nesting of feedback loops in a system, you have
more than just recursion or self-reference.  You've got problems.
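
That said, "recursion and self-referentiality" can be given a concrete
finite form.  The toy Python below (structure and names are mine, just
an illustration, not a claim about what Searle or anyone here intends)
shows a state that refers to itself without any infinite tower of
discrete meta-states; whether such a loop captures consciousness of
conditions of satisfaction is exactly what is in dispute.

    class IntentionalState:
        # A toy stand-in for a belief or desire.
        def __init__(self, content):
            self.content = content
            self.about = None      # what the state is directed at

    belief = IntentionalState("it is raining")
    # Rather than a second-order state about a third-order state about
    # ..., let the state's "awareness" point back at the state itself:
    belief.about = belief

    # The regress is only apparent: chasing the reference never leaves
    # this one finite object.
    s = belief
    for _ in range(1000):
        s = s.about
    assert s is belief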

>>>>   The very fact that strict behaviorists deny mental states only magnifies
>>>>   the issue. We can imagine Skinnerian robots exhibiting complex functions
>>>>   without the need for internal states. This secondary phantom world does
>>>>   not seem to be logically necessary. But it is not merely an artifice of
>>>>   our culture -- every culture has evolved similar notions of an internal
>>>>   noumenal world. [Ellis, I think]
>
>>>So?  Every culture has had religions.  Every culture (up until recently)
>>>has held slaves and made wars.  So? [Rosen]
>
>> What is the point of this rejoinder?  Do you deny that there is such a
>> thing as internal subjective experience? [Moody]
>
>The point is rather clear, I would think.  Just because every culture has
>evolved similar notions does not make those notions valid. [Rosen]

Yes, *that* point is clear.  But why do you bring it up?  Is it because
you deny that there is such a thing as subjective experience?  Ellis's
only point here is that the concept of an "inner" or "mental" life is
not simply a by-product of a particular culture.  Do you deny this? 

>> Don't forget that common sense is completely grounded in subjectivity -- 
>> the way things *seem* to creatures such as ourselves.  "Common sense" is
>> a term that we use to dignify our most entrenched prejudices about reality.
>> Science indeed builds upon this most subjective foundation.  The problems
>> emerge when the results at the top of the skyscraper *contradict* the
>> "given" of common sense upon which the whole structure is built.  QM is a
>> fine example of this, and so is Russell's Paradox. [Moody]
>
>Oh, really?  Is common sense really a series of prejudices about reality?
>Or is it an examination of the way things are, how certain things behave, and how
>other things may get in the way of accurate observation, and what should be
>done about this (coupled with elementary logic)? [Rosen]

Is this another rhetorical question?  Never mind; I'll answer it.  I
would say that common sense *is* a series of prejudices about reality.
I thank you for throwing in the word "series", as it helps to suggest
that common sense is not a static set of beliefs.  Somebody on this
net has a signature quote that is: "Common sense is what tells you
that a ten pound weight falls ten times faster than a one pound
weight."  The flatness of the earth was once common sense, as was the
axiomatizability of arithmetic (at least this was the common sense of
uncommon people: mathematicians).  Determinism is common sense.  When
scientific results contradict common sense (e.g., twin paradox), then
something must yield.  Frequently, it is common sense.  Hence, the
term "prejudice".

>>>In fact, philosophers (it seems) sometimes build their own definitions of
>>>semantics and language, so that they can, in a meta- sense, manipulate the
>>>very "veracity" of their own axioms. [Rosen]
>
>> As I've said before, the meanings of philosophically interesting terms, such
>> as "free will" are not "given" in any univocal source.  These terms are used
>> in various contexts to talk about subject matters that are of interest to
>> people.  The philosopher's mission, should he decide to accept it, is to
>> *discover* the definition(s) most consistent with the current state of
>> knowledge.  You have been challenged many times to show (you know,
>> give an argument) that yours is the only legitimate definition of
>> "free will."  You've yet to do so. [Moody]
>
>Mine.  The one *I* made up.  Yes, mine.  Rather than the ones that people in
>this newsgroup (along with their choice philosophers) have made up in
>opposition to what people have meant by the term in order to "get" a result
>to "exist".  Do you remember when Paul Torek asked me to "prove" that there
>was ANYONE, anyone at all in the world, who held "my" definition of free will
>and understood the implications I claimed that definition had?  How ironic
>that the first person (actually, the second) I spoke to said (without even
>being asked directly) "but isn't free will the notion that your choices are
>not determined by your chemistry, thus implying a soul of some sort?"  And
>so did most other people I asked who cared to offer any opinion at all on
>the subject.  No, popular consensus does not determine the facts about
>physical reality, but popular consensus DOES most certainly determine the
>meanings of words in language.  Are you claiming that "free will" is a
>"technical" term, in the way that scientists might designate a technical
>definition for the word "charm" (as related to quarks)?  Did the scientists
>in so doing ALTER the existing popular definition of the word "charm"?  Take
>the time to recognize that that is EXACTLY what you "philosophers" are trying
>to do.  Language belongs to the people, not to select groups in ivory towers
>who change meanings of words at their whim without regard for how words are
>used by PEOPLE.[Rosen]

I have never doubted that YOURS (the one you use) is one historically
salient definition of "free will."  I deny that it is the *only* one.
Popular consensus detrmines some of the meanings of some words, I
agree.  But even there, you are on thin ice.  One very popular meaning
of "free will" is simply "absence of direct external coercion or
constraint" (as opposed to your "absence of *all* physical
determinants).  This is why it makes sense to ask "Did you resign of
your own free will, or were you forced to?"  Nobody who asks such a
question could have your definition in mind.  In addition, it is a
matter of record that Hume, Locke, Leibniz and Hobbes explicitly
rejected your kind of definition of "free will".  Does that make your
definition "wrong" and their definitions "right"?  No.  But it makes
your claim that yours is the only possible, meaningful or legitimate
definition wrong.  We philosophers -- and I wish I could count you
among us -- do not change language at our whim; we examine it in order
to learn better ways to talk about the world.


Todd Moody       {allegra|astrovax|bpa|burdvax}!sjuvax!tmoody
Philosophy Department
St. Joseph's U.
Philadelphia, PA   19131

dmcanzi@watdcsu.UUCP (David Canzi) (10/12/85)

In article <2298@sjuvax.UUCP> tmoody@sjuvax.UUCP (T. Moody) writes:
>>>     John Searle, who argues that human cognition (in particular
>>>     intentionality and meaning) is caused by the powerful biochemical
>>>     machinery of our brains, is the author of wonderfully cynical plain
>>>     language attacks against the assertions by big name AI folks that
>>>     human cognitive states are digitally simulatable. [Ellis]
>>> 
>>> 	As long as it only simulates the
>>> 	formal structure of the sequence of neural firings at synapses, it
>>> 	won't have simulated what matters about the brain, namely its causal
>>> 	properties, its ability to produce intentional states. [Searle, quoted
>        by Ellis]

"formal structure... causal properties... intentional states..."
Plain language?

>                Searle is arguing that the "causal properties" of the brain
>are not obtained by merely simulating the FORMAL STRUCTURE of the neural
>activity.  By "formal structure", Searle means something quite precise.
>The formal structure of the brain is the Turing machine algorithm that
>it is instantiating.  This formal structure is what functionalists claim
>is the essence of mind; to be in a mental state is to instantiate a
>Turing program-type....  Searle argues against... functionalism...
>
>The whole point of functionalism is that mental states are substrate-
>independent; that they can be simulated -- AND HENCE INSTANTIATED --
>without respect to the physical ingredients of the system.

All I know about John Searle and functionalism is what I gather about
them from the above quotes.  I don't know what Searle means by "causal
properties" and "intentional states".  But I'm not one to let ignorance
prevent me from speaking.

Human brains don't exist in isolation -- they come with a body
attached.  The environment affects the sensory organs, causing them to
transmit signals that alter the brain's state.  The brain transmits
signals to the motor organs, which are capable of altering the
environment -- hopefully in the way the brain intends. So the
environment affects the brain and the brain affects the environment.
Interacting with an environment is something that Turing machines
don't do.  So, it follows that the brain does not instantiate a Turing
machine.

It doesn't follow from this that the brain can't be simulated.  Storing
just a representation of the brain on a Turing machine tape is clearly
misguided.  But the above argument doesn't rule out the possibility of
storing representations of the brain, and the relevant parts of the body
and the environment on a Turing machine tape, and simulating all three
together.  I.e., maybe a Turing machine can simulate the brain, but only
by simulating the things that the brain interacts with as well.
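
A minimal sketch of that closed-loop idea in Python (the update rules
are placeholders of my own invention, standing in for arbitrarily
detailed models): brain, body, and environment are all stepped together
inside one program, so the simulated brain has something to interact
with, even though the program as a whole never touches the real world.

    def step_environment(env, motor):
        # The simulated world is changed by the body's action.
        return env + motor

    def step_brain(brain, sensed):
        # Toy internal-state update driven by sensory input.
        return (brain + sensed) % 7

    def step_body(brain):
        # Toy motor command derived from the brain's state.
        return 1 if brain > 3 else -1

    env, brain = 0, 0
    for t in range(10):
        sensed = env                        # senses read the environment
        brain = step_brain(brain, sensed)   # brain state is altered
        motor = step_body(brain)            # motor organs act
        env = step_environment(env, motor)  # environment is altered
    print(env, brain)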

The idea that the brain can be simulated does not imply that the brain
instantiates a Turing machine.  If Searle's argument depends on this
implication being true, then it is flawed.
-- 
David Canzi

There are too many thick books about thin subjects.

tedrick@ucbernie.BERKELEY.EDU (Tom Tedrick) (10/14/85)

>I.e., maybe a Turing machine can simulate the brain, but ...

OK, here is a question.

My understanding is that Godel's incompleteness theorems prove
(assuming the consistency of Arithmetic) that no Turing machine
can possibly simulate the human mind.

This is because for any particular Turing machine there are certain
statements that the human mind can recognize as true (again with
the consistency assumption), that the machine cannot recognize
as true.

Does anyone dispute this?

   -Tom
    tedrick@ucbernie.ARPA

torek@umich.UUCP (Paul V. Torek ) (10/14/85)

In article <10642@ucbvax.ARPA> tedrick@ucbernie.UUCP (Tom Tedrick) writes:
>...for any particular Turing machine there are certain
>statements that the human mind can recognize as true (again with
>the consistency assumption), that the machine cannot recognize
>as true.
>
>Does anyone dispute this?

Yes.  If the human brain is essentially a Turing machine, then for any
particular human (or group of them) there is at least one statement that
he (they) cannot recognize as true.  Not very earthshattering, given that
there are probably lots of complex mathematical theorems which are true
but which no human will ever recognize as true.

--Paul V Torek						torek@umich

ellis@spar.UUCP (Michael Ellis) (10/14/85)

In article <1749@watdcsu.UUCP> dmcanzi@watdcsu.UUCP (David Canzi) writes:
>In article <2298@sjuvax.UUCP> tmoody@sjuvax.UUCP (T. Moody) writes:
>>>>     John Searle, who argues that human cognition (in particular
>>>>     intentionality and meaning) is caused by the powerful biochemical
>>>>     machinery of our brains, is the author of wonderfully cynical plain
>>>>     language attacks against the assertions by big name AI folks that
>>>>     human cognitive states are digitally simulatable. [Ellis]
>>>> 
>>>> 	As long as it only simulates the
>>>> 	formal structure of the sequence of neural firings at synapses, it
>>>> 	won't have simulated what matters about the brain, namely its causal
>>>> 	properties, its ability to produce intentional states. [Searle, quoted
>>        by Ellis]
>
>"formal structure... causal properties... intentional states..."
>Plain language?

    Remember, that was only a short extract. His  "Minds, Brains, Programs"
    does explain these concepts quite clearly. I have been working on
    a followup article to reply to several questions that have appeared in
    this newsgroup about Searle's ideas, but I have been delayed somewhat
    since my copy of this book is on loan.

    In the meantime, I refer you to Todd Moody's excellent articles on
    the same subject.
    
    One misconception several have had is that Searle has attempted to
    PROVE the inability of a Turing Machine to cause cognitive states.
    What he has attempted to do, rather, is to argue that this is 
    quite unlikely, IF we assume that:
    
        Mental states are REAL PHYSICAL phenomena that are caused by the brain
        (much as, say, surface tension in fluids is caused by molecular,
	atomic and quantum interactions).
       
    BTW, the behaviorist dogma denying reality to mental states is a
    result of western dualism. 
    
    Platonism and Christian scholasticism rejected all knowledge not
    derivable from logic and noumenal experience, and were thus unable to
    create scientific theories.

    Science has rejected all knowledge not derivable from logic and
    phenomenal experience, and has thus been unable to create any sensible
    theories of mind. Note that eliminative behaviorism asserts that mind
    is not even a physical thing!

    Searle's idea of the mind as a network of intentional contents
    interrelated in a holistic manner and placed squarely in the physical
    world would seem to represent an entirely reasonable solution to the
    ancient mind/body problem characteristic of the western world's
    schizophrenia.
    
    "Intentional states" are Searle's answer to Wittgenstein's famous
    question "If I raise my arm, what is left over if I subtract the
    fact that my arm goes up?". More on this later...

>It doesn't follow from this that the brain can't be simulated.  Storing
>just a representation of the brain on a Turing machine tape is clearly
>misguided.  But the above argument doesn't rule out the possibility of
>storing representations of the brain, and the relevant parts of the body
>and the environment on a Turing machine tape, and simulating all three
>together.  I.e., maybe a Turing machine can simulate the brain, but only
>by simulating the things that the brain interacts with as well.

    Indeed, if dualism is true, maybe a Turing machine can create
    cognitive states.

>The idea that the brain can be simulated does not imply that the brain
>instantiates a Turing machine.  If Searle's argument depends on this
>implication being true, then it is flawed.
>-- 
>David Canzi
>
>There are too many thick books about thin subjects.

     No doubt. Read some BF Skinner someday. How successful has behaviorism
     been? And why have so many new scientific methodologies simply
     ignored behaviorist dogma?

     I admit that I have not done justice to Searle's theories; furthermore,
     there are many in AI and Psychology who believe his work to be
     heretical, especially those who see cognition, semantics, etc. in formal
     machine terms. Recall, too, that living things were once seen as mere
     pulleys, levers, and joints -- modern information-processing concepts
     simply did not exist back then.
     
        "What IS a representation of the brain?"
        "What IS a simulation of the brain?"
        "What DOES interreact inside the brain?"

-michael

berger@aecom.UUCP (Mitchell Berger) (10/16/85)

> My understanding is that Godel's incompleteness theorems prove
> (assuming the consistency of Arithmetic) that no Turing machine
> can possibly simulate the human mind.

What Godel's theorem says is that if one assumes Math to be 
consistent, it must be incomplete. Thus, a Turing machine is
either inconsistent or incomplete. Who ever said the brain is
consistent or complete? Why is it outside the realm of Turing
Machines?

                            Micha Berger

Any ideas represented in this posting are
purely the author's own, but he won't 
take blame for them either.

tedrick@ucbernie.BERKELEY.EDU (Tom Tedrick) (10/16/85)

In article <272@umich.UUCP> torek@umich.UUCP (Paul V. Torek ) writes:
>In article <10642@ucbvax.ARPA> tedrick@ucbernie.UUCP (Tom Tedrick) writes:
>>...for any particular Turing machine there are certain
>>statements that the human mind can recognize as true (again with
>>the consistency assumption), that the machine cannot recognize
>>as true.
>>
>>Does anyone dispute this?
>
>Yes.  If the human brain is essentially a Turing machine, then for any
>particular human (or group of them) there is at least one statement that
>he (they) cannot recognize as true.  Not very earthshattering, given that
>there are probably lots of complex mathematical theorems which are true
>but which no human will ever recognize as true.
>
>--Paul V Torek						torek@umich

I don't understand your argument. I claim that the human mind
cannot be essentially a Turing machine. If we assume that a
particular mind is equivalent to a particular Turing machine,
then we immediately get a contradiction, namely there exists
a statement recognizable as true by that human mind which is
not recognizable as true by that Turing machine.

Can anyone explain to me what if anything is wrong with my
reasoning?

Thanks very much,

     -Tom
      tedrick@ucbernie.ARPA

tmoody@sjuvax.UUCP (T. Moody) (10/16/85)

In article <10642@ucbvax.ARPA> tedrick@ucbernie.UUCP (Tom Tedrick) writes:
>
>OK, here is a question.
>
>My understanding is that Godel's incompleteness theorems prove
>(assuming the consistency of Arithmetic) that no Turing machine
>can possibly simulate the human mind.
>
>This is because for any particular Turing machine there are certain
>statements that the human mind can recognize as true (again with
>the consistency assumption), that the machine cannot recognize
>as true.
>
>Does anyone dispute this?
>
>   -Tom
>    tedrick@ucbernie.ARPA

[]
     As it turns out, almost *everybody* disputes this, these days.
The position that you mention was made famous by the English
philosopher J. R. Lucas  (I can't recall the name of the paper, but
I'll dig it up and post it if anybody's interested).  Goedel's first
incompleteness theorem states that in any formal system of arithmetic
there are true sentences that are not provable, *in that system*.
This does not entail that they are not provable in *some other* formal
system of arithmetic.  That means that even though Turing machine A
might not be able to "prove" (i.e., mechanically derive) true sentence
S, Turing machine B might be able to (but there will be sentences that
B cannot prove, etc.).
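
In symbols (my compression of the point, with G(T) the Goedel sentence
of a consistent, effectively axiomatized theory T containing
arithmetic):

    $$ T \nvdash G(T), \qquad T \nvdash \lnot G(T), \qquad
       T + \mathrm{Con}(T) \vdash G(T) $$

The strengthened theory proves the old Goedel sentence but acquires a
new one of its own; no single system escapes the theorem, and no
particular machine is thereby shown inferior to minds.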

     So, minds might just be Turing machines instantiating *different*
formal systems.  Searle's objections to Turing machine functionalism
are utterly different from Lucas's.  Since Searle's work has been
cited several times by Michael Ellis and myself, I am working on a
succinct presentation of his arguments, which I will post soon.  For
those who have the time, just read Searle's paper "Minds, Brains, and
Programs."  Already a classic, it has been anthologized in Hofstadter
and Dennett's _The_Mind's_I_, and in John Haugeland's _Mind_Design_.
The original paper appeared in volume 3 (I forget which no.) of
_Behavioral_and_Brain_Sciences_, along with about two dozen responses
from philosophers and AI people.

     Another excellent source for clean and cogent arguments that are
critical of Turing machine functionalism is Ned Block's "Troubles with
Functionalism" (Minnesota Studies in the Phil. of Science, vol. 9,
1978), anthologized in Block's
_Readings_in_the_Philosophy_of_Psychology_, vol. 1 (Harvard U. Press).
This is an excellent volume.  It also contains Thomas Nagel's "What Is
It Like To Be A Bat" (in Mind's I, too).

     For pro-functionalist arguments, the most readable source is
Dennett's _Brainstorms_.

     But enough is enough.


Todd Moody                 |  {allegra|astrovax|bpa|burdvax}!sjuvax!tmoody
Philosophy Department      |
St. Joseph's U.            |         "I couldn't fail to
Philadelphia, PA   19131   |          disagree with you less."

tedrick@ucbernie.BERKELEY.EDU (Tom Tedrick) (10/16/85)

>> My understanding is that Godel's incompleteness theorems prove
>> (assuming the consistency of Arithmetic) that no Turing machine
>> can possibly simulate the human mind.
>
>What Godel's theorem says is that if one assumes Math to be 
>consistent, it must be incomplete. Thus, a Turing machine is
>either inconsistant or incomplete. Who ever said the brain is 
>consistant or complete? Why is it outside the realm of Turing 
>Machines?
>
>                            Micha Berger

I claim that if we make the consistency assumption, and assume
that the mind is equivalent to a Turing machine, we get a
contradiction in that there are true statements recognizable
by the mind which are not recognizable by the machine.
Maybe I'm wrong, but if so I hope someone can explain why.

   -Tom
    tedrick@ucbernie.ARPA

torek@umich.UUCP (Paul V. Torek ) (10/17/85)

In article <10671@ucbvax.ARPA> tedrick (Tom Tedrick) writes:

>>there are probably lots of complex mathematical theorems which are true
>>but which no human will ever recognize as true.
>
>I don't understand your argument. I claim that the human mind
>cannot be essentially a turing machine. If we assume that a
>particular mind is equivalent to a particular turing machine,
>then we immediately get a contradiction, namely there exists
>a statement recognizable as true by that human mind which is
>not recognizable as true by that turing machine.

Which one?  The statement that is not recognizable by the Turing
machine may be *extremely* complex -- what makes you so damn sure
you could recognize it as true?  Tell me, Tom, is it true that every
even number greater than two is the sum of two primes?  What, you don't
know?  Then you get the point -- I hope.

--Paul V Torek, making flames the old-fashioned way -- earning them.

rlr@pyuxd.UUCP (Rich Rosen) (10/17/85)

>>Ie. maybe a Turing machine can simulate the brain, but ...

> OK, here is a question.
> My understanding is that Godel's incompleteness theorems prove
> (assuming the consistency of Arithmetic) that no Turing machine
> can possibly simulate the human mind.
> This is because for any particular Turing machine there are certain
> statements that the human mind can recognize as true (again with
> the consistency assumption), that the machine cannot recognize
> as true.
> 
> Does anyone dispute this?

I will.  Rudy Rucker offered a simplification of the Godel Theorem in
his book, Infinity and the Mind.  He described a Universal Truth Machine
(built up from a Mathematical Truth Machine, to a Scientific Truth Machine,
etc.)  He puts a proposition, "G", to the machine, that proposition being
"The UTM machine will never find this proposition G to be true."  What is
the UTM's answer?  Can't reach one (if it's smart enough).  If it says
"true", then it is in fact false, and of course vice versa.

A deathblow to the notion of machines being intelligent?  What if we
rephrased the proposition and asked it of Michael Ellis?  The new
proposition, G', would read "Michael Ellis will never say that this
proposition G' is true."  What will Michael's answer be?  Of course, Michael
can say anything he wants to say.  If he was being truthful (for instance,
if his life depended on his giving the right answer), he would be unable
to say either true or false (and thus he'd be in big trouble).  But he
has the option of (in other circumstances) lying, just saying true or
false because he feels like it.  Is it beyond the realm of possibility
for a machine to do the same thing:  to recognize the self-contradictory
nature of the sentence THE SAME WAY WE DO?  And to answer "that proposition
cannot be decided", OR (even!) to have the capacity for "lying"?  What about
giving the machine MOTIVATIONS for making such choices?

(Sorry for using Michael for this example.  I hope he's not burning up
saying "Norman, coordinate" at this moment.  But then, Michael's never
bothered much with such logic anyway. :-)
-- 
"I was walking down the street.  A man came up to me and asked me what was the
 capital of Bolivia.  I hesitated.  Three sailors jumped me.  The next thing I
 knew I was making chicken salad."
"I don't believe that for a minute.  Everyone knows the capital of Bolivia is
 La Paz."				Rich Rosen    pyuxd!rlr

lambert@boring.UUCP (10/18/85)

(I have missed most of the discussion, since American net philosophy does
not make it to this side of the Atlantic.)

> I don't understand your argument. I claim that the human mind
> cannot be essentially a turing machine. If we assume that a
> particular mind is equivalent to a particular turing machine,
> then we immediately get a contradiction, namely there exists
> a statement recognizable as true by that human mind which is
> not recognizable as true by that turing machine.

> Can anyone explain to me what if anything is wrong with my
> reasoning?

The following attempt uses a device that is, unless I am mistaken, due to
Quine.

Consider texts (some of which represent statements, such as: "Two times two
equals four" and "`Two times two equals four' is a true statement about
natural numbers", and some of which do not, like "Who? Me?"  and "Don't
`Aw, mom' me".).  Some of these texts contain *internal* quoted texts.  If
T is a text, then let Q(T), or, in words, T *quoted*, stand for another
text, consisting of T put between the quotes "`" and "'". So if T is

    "Two times two equals for",

Q(T) is

    "`Two times two equals for'".

Let SQ(T), or T *self*quoted, mean: Q(T) followed by T.

So if T is

    " contains no digits"

then T, selfquoted, is

    "` contains no digits' contains no digits"

(which is a true statement).

Now consider the text S =

    "`, selfquoted, is not recognizable as true by the mind of Tom',
     selfquoted, is not recognizable as true by the mind of Tom".

S is a statement, and states that some text T, selfquoted, is not
recognizable as true by the mind of Tom.

So can Tom (or his mind) recognize SQ(T) as true, and is SQ(T) true in the
first place?

If Tom can recognize SQ(T) as true, then S is apparently false.  But note
that T is the text

    ", selfquoted, is not recognizable as true by the mind of Tom",

so SQ(T) = S.  So Tom would have recognized a false statement as true.  If
we collectively assume that Tom would never do such a thing, then all of us
non-Toms can now recognize S as true, something Tom cannot.
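
For the mechanically inclined, the device transcribes directly into
Python (the function names are mine; only the definitions given above
are used):

    # Q(T) puts T between the quotes ` and '; SQ(T) is Q(T) followed
    # by T.

    def quoted(t):
        return "`" + t + "'"

    def selfquoted(t):
        return quoted(t) + t

    T = ", selfquoted, is not recognizable as true by the mind of Tom"
    S = selfquoted(T)
    print(S)   # prints exactly the statement S displayed above

    # S quotes a text and then says that that text, selfquoted, is not
    # recognizable as true by Tom.  Recover the quoted text from S and
    # verify that its selfquotation is S itself: S speaks about S
    # without ever containing its own name.
    inner = S[1:S.index("'")]
    assert inner == T and selfquoted(inner) == S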

If "Tom" is consistently replaced by "human being", then the argument still
goes through.  Neither I, nor you, or anyone else, can recognize that
statement as true without showing its falsehood (and human fallibility).
We would have to wait for some non-human intelligence telling us it is
true, but although we might believe it, we still could not recognize it as
being true.  (Now we might think that it is false, which may or may not be
quite true, but than it follows again that not all humans can be
infallible.)

This may all seem shallow.  But for me (to take an arbitrary example:-) to
assert that the mind of a fellow human being can recognize something as
true, with the same level of certainty as in mathematical proofs, requires
a rather total understanding of that mind, an understanding that, at least
for me, is still lacking.  All the more so if I also had to recognize the
infallibility of that mind (which is throughout an implicit assumption).
With my own mind, I have thus far not succeeded.  I guess it is the same
for other people.

What the original reasoning really shows is that if we somehow constructed
a Turing-machine description of the workings of our own mind, we could not
with mathematical certainty recognize it as being that.  Neither can a
Turing machine do this for its own construction; or if it can, then it is
either fallible or has glaring defects in its logical power.  Applied
to human beings, the conclusion is not a big surprise.  It does not follow
that human minds are not Turing machines (although their memory tapes seem
not to be infinite:-).
-- 

     Lambert Meertens
     ...!{seismo,okstate,garfield,decvax,philabs}!lambert@mcvax.UUCP
     CWI (Centre for Mathematics and Computer Science), Amsterdam

tedrick@ucbernie.BERKELEY.EDU (Tom Tedrick) (10/18/85)

In article <299@umich.UUCP> torek@umich.UUCP (Paul V. Torek ) writes:
>In article <10671@ucbvax.ARPA> tedrick (Tom Tedrick) writes:
>
>>>there are probably lots of complex mathematical theorems which are true
>>>but which no human will ever recognize as true.
>>
>>I don't understand your argument. I claim that the human mind
>>cannot be essentially a turing machine. If we assume that a
>>particular mind is equivalent to a particular turing machine,
>>then we immediately get a contradiction, namely there exists
>>a statement recognizable as true by that human mind which is
>>not recognizable as true by that turing machine.
>
>Which one?  The statement that is not recognizable by the Turing
>machine may be *extremely* complex -- what makes you so damn sure
>you could recognize it as true?  Tell me, Tom, is it true that every
>even number greater than two is the sum of two primes?  What, you don't
>know?  Then you get the point -- I hope.

No, I don't get the point. The complexity of the statement is not
the issue. The issue is that humans seem to recognize that certain
formal systems are consistent, but that this consistency cannot
be proved within the system. Since this mysterious ability to recognize
such things is lacking in deterministic machines,
I claim there is a distinction between the human mind and any
Turing machine.

Of course, I may be wrong in believing that these formal
systems are consistent. 

jwl@ucbvax.ARPA (James Wilbur Lewis) (10/18/85)

In article <10699@ucbvax.ARPA> tedrick@ucbernie.UUCP (Tom Tedrick) writes:
>
>No, I don't get the point. The complexity of the statement is not
>the issue. The issue is that humans seem to recognize that certain
>formal systems are consistent, but that this consistency cannot
>be proved within the system. Since this mysterious ability to recognize
>such things is lacking in deterministic machines,
>I claim there is a distinction between the human mind and any
>Turing machine.
>
>Of course, I may be wrong in believing that these formal
>systems are consistent. 

Your points seem to be:

(1) Humans can recognize consistency of certain formal systems, and
    machines lack this ability.
(2) There is something mysterious about this ability, and nondeterminism
    has something to do with it; therefore
(3) no Turing machine can be equivalent to a human mind.

You are confusing two issues: reasoning *within* a formal system, and
reasoning *about* a formal system. What is so mysterious about the
latter kind of reasoning? All one needs to do is define a more powerful
system, and then by reasoning within the new system you can show the
incompleteness/inconsistency/whatever of the weaker system.

Of course, the formal system for any given Turing machine is fixed, and
that machine will be unable to 'jump out of the system' to reason
about its own properties.  But we can always design a more powerful 
machine which *will* be able to reason about the weaker one.

Humans are subject to these constraints, too. Consider:
"Tom Tedrick cannot consistently assert this proposition."

I can prove it, but you can't do so and remain consistent.  Does that
make my mind more powerful than yours?  Of course not, because
you can exhibit the obvious proposition which *you* can prove but
*I* can't (assuming I'm consistent! :-) 

Your mention of determinism is irrelevant; humans are just as deterministic
as machines. Unpredictable, perhaps...since we are orders of magnitude more
complex than any machines we know how to build....but subject to the same
laws of physics.

For a fascinating presentation of this and many other topics, check out
any book by Douglas Hofstadter, especially "Godel, Escher, Bach: An 
Eternal Golden Braid".

Cheers,

-- Jim 'down with human chauvinism' Lewis
   U. C. Berkeley
   ...!ucbvax!jwl        jwl@ucbernie.BERKELEY.EDU

"Lately it occurs to me
  What a long, strange trip it's been..."

tedrick@ucbernie.BERKELEY.EDU (Tom Tedrick) (10/18/85)

>>I claim there is a distinction between the human mind and any
>>Turing machine.

>Your points seem to be:

>(1) Humans can recognize consistency of certain formal systems, and
>    machines lack this ability.
>(2) There is something mysterious about this ability, and nondeterminism
>    has something to do with it; therefore

[I didn't say anything about nondeterminism, just that the 
turing machines I am talking about are deterministic.]

>(3) no Turing machine can be equivalent to a human mind.

>You are confusing two issues: reasoning *within* a formal system, and
>reasoning *about* a formal system.

[I don't think that is what I am confused about.]

>What is so mysterious about the
>latter kind of reasoning? All one needs to do is define a more powerful
>system, and then by reasoning within the new system you can show the
>incompleteness/inconsistency/whatever of the weaker system.
>Of course, the formal system for any given Turing machine is fixed, and
>that machine will be unable to 'jump out of the system' to reason
>about its own properties.  But we can always design a more powerful 
>machine which *will* be able to reason about the weaker one.

[Yes, this is exactly the point. Exhibit the turing machine that
is claimed to be equivalent to the human mind, and the human mind
can reason about the system in ways impossible within the system.
Thus we contradict the assumption that the machine was equivalent
to the mind.]

>Your mention of determinism is irrelevant; humans are just as deterministic
>as machines. Unpredictable, perhaps...since we are orders of magnitude more
>complex than any machines we know how to build....but subject to the same
>laws of physics.

OK, we at least have a clear point of disagreement. I don't believe
human beings are deterministic. I also don't accept the laws of
physics as absolute. I accept them as an absolutely brilliant
model but not as complete truth. I don't accept the notion that
the human being is just a very complex machine. 

I originally asked whether anyone disputed my claim that the human
mind is not equivalent to a turing machine. After all the negative
response, I would like to change my question to:

*IS THERE ANYONE THAT AGREES WITH ME THAT THE HUMAN MIND IS PROVABLY
 NOT EQUIVALENT TO A TURING MACHINE?*

"Help, I'm trapped in a machine  :-)"

    -Beleaguered and besieged on all fronts by the upholders of
     the dignity of turing machines, I remain

     -Tom the Human
      tedrick@ucbernie.ARPA

dmcanzi@watdcsu.UUCP (David Canzi) (10/18/85)

In article <10642@ucbvax.ARPA> tedrick@ucbernie.UUCP (Tom Tedrick) writes:
>My understanding is that Godel's incompleteness theorems prove
>(assuming the consistency of Arithmetic) that no Turing machine
>can possibly simulate the human mind.
>
>This is because for any particular Turing machine there are certain
>statements that the human mind can recognize as true (again with
>the consistency assumption), that the machine cannot recognize
>as true.

What Godel proved was that, if the axioms of arithmetic are consistent,
there is some arithmetical statement, G, which can neither be proved
nor disproved by applying the rules of logic to the axioms of
arithmetic.  Ie. neither G nor ~G is a theorem of arithmetic.

Godel proved this by going "outside of" arithmetic, and constructing a
theory about statements and proofs in arithmetic.  Ie.  while the
axioms of arithmetic are statements about numbers, the axioms of this
newly constructed theory are statements about statements and proofs.
He constructed a statement of arithmetic, G, and proved, from the
axioms of his new theory, that neither G nor ~G could be proven from
the axioms of arithmetic.  Even though G couldn't be proven from the
axioms of arithmetic, Godel was able to prove G by reasoning from the
new axioms.
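
In modern notation the construction runs as follows (a standard
presentation, with the coding details suppressed; Prov is the
arithmetized provability predicate of the "new theory"):

    $PA \vdash G \leftrightarrow \neg\mathrm{Prov}_{PA}(\ulcorner G \urcorner)$   (diagonal lemma)
    $PA \nvdash G$ and $PA \nvdash \neg G$            (if PA is consistent)
    $PA + \mathrm{Con}(PA) \vdash G$                  (the "new axioms" prove G)

The last line is the precise sense in which Godel could prove G by
reasoning from the new axioms while PA itself cannot.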

Now, what does this have to do with Turing machines?  No Turing machine
can generate a proof of G from the axioms of arithmetic.  But then
neither can any human.  Godel proved G by using a different set of
axioms.  I don't see any evidence that a Turing machine can't simulate
Godel and do the same.
-- 
David Canzi

There are too many thick books about thin subjects.

jwl@ucbvax.ARPA (James Wilbur Lewis) (10/18/85)

In article <10702@ucbvax.ARPA> tedrick@ucbernie.UUCP (Tom Tedrick) writes:
>
>>What is so mysterious about the
>>latter kind of reasoning? All one needs to do is define a more powerful
>>system, and then by reasoning within the new system you can show the
>>incompleteness/inconsistency/whatever of the weaker system.
>>Of course, the formal system for any given Turing machine is fixed, and
>>that machine will be unable to 'jump out of the system' to reason
>>about its own properties.  But we can always design a more powerful 
>>machine which *will* be able to reason about the weaker one.
>
>[Yes, this is exactly the point. Exhibit the turing machine that
>is claimed to be equivalent to the human mind, and the human mind
>can reason about the system in ways impossible within the system.
>Thus we contradict the assumption that the machine was equivalent
>to the mind.]

Foo! By reasoning about an equivalent Turing machine, the human mind
is *also* constrained to operate within the system. No fair jumping
out of the system here.  I ask again: what is your basis for claiming
that human reasoning can't be duplicated by a 'mere' machine, at least
in principle? Are you saying that machines are incapable of the kind of
reasoning involved in, say, the proof of Godel's Incompleteness Theorem?

>
>OK, we at least have a clear point of disagreement. I don't believe
>human beings are deterministic. I also don't accept the laws of
>physics as absolute. I accept them as an absolutely brilliant
>model but not as complete truth. I don't accept the notion that
>the human being is just a very complex machine. 
>

I'm not sure why this is relevant. Are you saying the laws of physics
are incomplete (because we don't know them all yet)?  Or that certain 
phenomena are inherently inexplicable by ANY laws of physics, a la
religious arguments?  Whatever those laws of physics are, humans and
machines both must obey them. 

>
>     -Tom the Human
>      tedrick@ucbernie.ARPA

-- Jim Lewis, a Lean Mean Computing Machine!
   U. C. Berkeley
   ...!ucbvax!jwl     jwl@ucbernie.BERKELEY.EDU

rlr@pyuxd.UUCP (Rich Rosen) (10/19/85)

>>> My understanding is that Godel's incompleteness theorems prove
>>> (assuming the consistency of Arithmetic) that no Turing machine
>>> can possibly simulate the human mind. [TEDRICK]

>>What Godel's theorem says is that if one assumes Math to be 
>>consistent, it must be incomplete. Thus, a Turing machine is
>>either inconsistent or incomplete. Who ever said the brain is 
>>consistent or complete? Why is it outside the realm of Turing 
>>Machines? [BERGER]

> I claim that if we make the consistency assumption, and assume
> that the mind is equivalent to a Turing machine, we get a
> contradiction in that there are true statements recognizable
> by the mind which are not recognizable by the machine.
> Maybe I'm wrong but if I am I hope someone can explain to
> me why I am wrong. [TEDRICK]

I'll repeat this because it may not have gotten out the first time.
To keep it simple, ask a machine whether sentence 'G' is true, where
sentence G is "You will never say that this sentence, sentence 'G', is true."
The machine cannot answer it (any answer it gives will be incorrect).
A deathblow to the intelligence of machines!  Hurrah!  Great.  Now ask
the same thing of a human being.
-- 
Meanwhile, the Germans were engaging in their heavy cream experiments in
Finland, where the results kept coming out like Swiss cheese...
				Rich Rosen 	ihnp4!pyuxd!rlr	

mj@myrias.UUCP (Michal Jaegermann) (10/19/85)

I am afraid that a lot of confusion comes from a simple mix-up (which took
quite a while for logicians to sort out :-) ). When somebody speaks
about Turing Machines, the Goedel Theorem and things of that sort, truth
and provability are understood >>within the confines of a given FORMAL
system<<. You can always give an answer to some "unanswerable" questions
if you get out and look from "outside" (meta-reasoning). In everyday use
of logic and truth we freely mix different meta-levels - which creates
a lot of interesting and often funny paradoxes. This probably indicates
that formal logic and Turing Machines are only quite simple MODELS of
our reasoning, and that a human brain is not a Turing Machine (this goes
far beyond mathematics, so I had better stop). If you find "Goedel,
Escher, Bach" too wordy and muddy, though funny and inspiring, and you
do not want to wade through monographs on formal mathematical logic,
then find the book by R. Smullyan entitled "What Is the Name of This
Book?" -- it has a lot of answers and questions related to the problem.
So what is that book really called?
				     Michal Jaegermann
				     Myrias Research Corporation
				     ....ihnp4!alberta!myrias!mj

rlr@pyuxd.UUCP (Rich Rosen) (10/20/85)

>>Your points seem to be:
>>(1) Humans can recognize consistency of certain formal systems, and
>>    machines lack this ability.
>>(2) There is something mysterious about this ability, and nondeterminism
>>    has something to do with it; therefore
>>(3) no Turing machine can be equivalent to a human mind.
>>You are confusing two issues: reasoning *within* a formal system, and
>>reasoning *about* a formal system.

> [I don't think that is what I am confused about.]  [TEDRICK]

If you understand what you're confused about, you're not confused about it. 

> [Yes, this is exactly the point. Exhibit the turing machine that
> is claimed to be equivalent to the human mind, and the human mind
> can reason about the system in ways impossible within the system.
> Thus we contradict the assumption that the machine was equivalent
> to the mind.]
> OK, we at least have a clear point of disagreement. I don't believe
> human beings are deterministic. I also don't accept the laws of
> physics as absolute. I accept them as an absolutely brilliant
> model but not as complete truth. I don't accept the notion that
> the human being is just a very complex machine. 

The only reasons for doing so would be that you either have some evidence
that this is not so, or you simply refuse to believe it because you don't
like that conclusion.  The first possibility (which I doubt is true) would
be reasonable.  The second (which is engaged in by a large number of people
in this very newsgroup) is fallacious.

> I originally asked whether anyone disputed my claim that the human
> mind is not equivalent to a turing machine. After all the negative
> response, I would like to change my question to:
> 
> *IS THERE ANYONE THAT AGREES WITH ME THAT THE HUMAN MIND IS PROVABLY
>  NOT EQUIVALENT TO A TURING MACHINE?*

I could care less about the exact type of machine that the human mind really
is, but I have no disagreement with the notion that the mind and brain can
be represented as some sort of machine.

To throw yet another bone into this mix, I will quote from the oft-misquoted
(at least here) John Searle, from his "Minds, Brains, and Programs":

	    I want to try and state some of the general philosophical points
	implicit in the argument.  For clarity I will try to do it in a
	question and answer fashion, and I begin with that old chestnut of
	a question:

		"Could a machine think?"

	    The answer is, obviously, yes.  We are precisely such machines.

		"Yes, but could an artifact, a man-made machine, think?"

	    Assuming it is possible to produce artificially a machine with
	a nervous system, neurons, axons, and dendrites, and all the rest
	of it, sufficiently like ours, again the answer to the question seems
	to be, obviously, yes.  If you can exactly duplicate the causes, you
	could duplicate the effects.  And indeed it might be possible to
	produce consciousness, intentionality, and all the rest of it using
	some other sorts of chemical principles than those human beings use.

			[ALL THIS, MIND YOU, FROM A "CRITIC" OF AI!]

		"OK, but could a digital computer think?"

	    If by "digital computer" we mean anything at all that has a level
	of description where it can be correctly described as the instantiation
	of a computer program, then again the answer is, of course, yes, since
	we are the instantiations of any number of computer programs, and we
	can think.

		"But could something think, understand, and so on *solely*
		 in virtue of being a computer with the right sort of program?
		 Could instantiating a program, the right program of course,
		 by itself be a sufficient condition of understanding?"

	    This I think is the right question to ask, though it is usually
	confused with one of the earlier questions, and the answer to it is no.

		"Why not?"

	    Because the formal symbol manipulations themselves don't have
	any intentionality...

I think at this point Searle destroys his own argument.  By saying that these
things have "no intentionality", he is denying the premise made by the person
asking the question, that we are talking about "the right program".  Moreover,
Hofstadter and Dennett both agreed (!!!!) that Searle's argument is flawed.
"He merely asserts that some systems have intentionality by virtue of their
'causal powers' and that some don't.  Sometimes it seems that the brain is
composed of 'the right stuff', but other times it seems to be something else.
It is whatever is convenient at the moment."  (Sound like any other conversers
in this newsgroup?)  "Minds exist in brains and may come to exist in programmed
machines.  If and when such machines come about, their causal powers will
derive not from the substances they are made of, *but* *from* *their* *design*
*and* *the* *programs* *that* *run* *in* *them*.  [ITALICS MINE]  And the way
we will know they have those causal powers is by talking to them and listening
carefully to what they have to say."  Readers of this newsgroup should
take note of how a non-presumptive position is built, and of how someone
quoted right and left in this newsgroup doesn't even agree halfheartedly with
the notions of those quoting him.
-- 
Anything's possible, but only a few things actually happen.
					Rich Rosen    pyuxd!rlr

laura@l5.uucp (Laura Creighton) (10/20/85)

In article <10702@ucbvax.ARPA> tedrick@ucbernie.UUCP (Tom Tedrick) writes:
>*IS THERE ANYONE THAT AGREES WITH ME THAT THE HUMAN MIND IS PROVABLY
> NOT EQUIVALENT TO A TURING MACHINE?*

Me.  But not for the reasons that you give.  Aristotle proposed the definition
``man is a rational animal''.  In recent years we have worked very hard on
the ``rational'' part but not very hard on the ``animal'' part.  I think that
the concept of ``living'' is very important to the concept of ``mind''.

This does not mean that it is impossible to construct a living turing machine,
but this is not where the efforts in AI have been spent so far.  I fear that
intelligence may be the easy part, and that it is AL (artificial life) which
is the tough one.

Laura Creighton
l5!laura

what's life to an immortal?

-- 
Laura Creighton		
sun!l5!laura		(that is ell-five, not fifteen)
l5!laura@lll-crg.arpa

matt@oddjob.UUCP (Matt Crawford) (10/22/85)

In article <10702@ucbvax.ARPA> tedrick@ucbernie.UUCP (Tom Tedrick) writes:
>
>*IS THERE ANYONE THAT AGREES WITH ME THAT THE HUMAN MIND IS PROVABLY
> NOT EQUIVALENT TO A TURING MACHINE?*

Sure, I agree with you.  A Turing machine has unlimited memory.
_____________________________________________________
Matt		University	crawford@anl-mcs.arpa
Crawford	of Chicago	ihnp4!oddjob!matt

dim@whuxlm.UUCP (McCooey David I) (10/25/85)

> In article <10702@ucbvax.ARPA> tedrick@ucbernie.UUCP (Tom Tedrick) writes:
> >
> >*IS THERE ANYONE THAT AGREES WITH ME THAT THE HUMAN MIND IS PROVABLY
> > NOT EQUIVALENT TO A TURING MACHINE?*
> 
> Sure, I agree with you.  A Turing machine has unlimited memory.
> _____________________________________________________
> Matt		University	crawford@anl-mcs.arpa
> Crawford	of Chicago	ihnp4!oddjob!matt

Matt's reply goes along with my line of thought.  Consider the situation
realistically:  The human mind has a finite number of neurons and therefore
a finite number of states.  So I propose that the human mind is equivalent
to a finite state machine, not a Turing machine.  (I agree with Tom, but
for the opposite reasons).  Note that my comparison does not belittle the
human mind at all.  Finite can still mean very, very large.  The operation
of a finite state machine with a very large number of states is, for humans,
indistinguishable from that of a Turing machine.
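
To put a rough number on "very, very large" (a back-of-the-envelope
sketch in Python; the neuron count and the on/off simplification are
both crude assumptions, not measurements):

    import math

    NEURONS = 10**11         # often-quoted order of magnitude
    STATES_PER_NEURON = 2    # treat each neuron as simply on or off

    # Total states = STATES_PER_NEURON ** NEURONS -- far too large to
    # print in full, so report its number of decimal digits instead.
    digits = NEURONS * math.log10(STATES_PER_NEURON)
    print("roughly 10**%.2e distinct global states" % digits)
    # roughly 10**3.01e+10 distinct global states

A number with some thirty billion digits is finite only in the most
technical sense; nobody could tell such a machine from one with an
unbounded tape.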

				Dave McCooey
				AT&T Bell Labs, Whippany, NJ
				ihnp4!whuxlm!dim or ...!whlmos!dim

cleary@calgary.UUCP (John Cleary) (10/26/85)

> > [Yes, this is exactly the point. Exhibit the Turing machine that
> > is claimed to be equivalent to the human mind, and the human mind
> > can reason about the system in ways impossible within the system.
> > Thus we contradict the assumption that the machine was equivalent
> > to the mind.]
This, I think, is a very crucial point in this discussion.  It is only true
IF we assume that the human mind that is doing the reasoning is not itself
part of the Turing machine being exhibited.  The problem is that the
physical boundary around a human is most unclear.  The wiggling of an electron
on Alpha Centauri might, via changes in gravitation, affect the firing of one
of my neurons and so alter my behaviour.  From this (extreme) example we have
to include the whole universe in the description of the human.  That is,
anything which can affect us (and is thus observable by us) must be included
in a complete description of our behaviour.  The set of all things observable
by us (or potentially observable by us) can validly be called the whole
universe.
Unfortunately the whole universe includes all entities that can observe us
and hence reason about us (remember Heisenberg, if it can observe you then
it can affect you).

The interesting thing about digital computers is that we confuse two things:
the actual physical machine and its abstract description.  The physical
machine, just like a human, needs the whole universe included in its
description.  The abstraction (what is described in the manuals) is an
approximation only.  It is probably unclear from the abstract description
what happens when a high energy gamma ray passes through the CPU chip.  So
I agree with those who say a digital computer AS DESCRIBED BY A FORMAL
SYSTEM cannot have the same status as a human.  However there is no reason
we know of at the moment why a physical system cannot; indeed, as the
description of the physical computer includes the whole universe and the
humans in it, it already has the same status as the human.

This then raises some fascinating questions:

	1) Church's thesis that all computers are equivalent to a Turing
	   machine.  This is actually a PHYSICAL law (like law of gravitation)
	   potentially subject to a physical experiment.  It is conceivable
	   for example that some of the peculiar effects of quantum mechanics
	   could allow calculations faster than any possible Turing machine.

	2) Is the entire universe a Turing machine?

	3) Is it conceivable that anything that is part of the universe
	   could verify or refute 2)?

I am also struck by the similarity to the conclusions of some philosophers
from the Eastern tradition that we are all intimately connected
with the whole universe.


> > I originally asked whether anyone disputed my claim that the human
> > mind is not equivalent to a turing machine. After all the negative
> > response, I would like to change my question to:
> > 
> > *IS THERE ANYONE THAT AGREES WITH ME THAT THE HUMAN MIND IS PROVABLY
> >  NOT EQUIVALENT TO A TURING MACHINE?*
See above.  I think this is a question for the physicists, and potentially
subject to physical experiment.
> 

> 		"OK, but could a digital computer think?"
> 
> 	    If by "digital computer" we mean anything at all that has a level
> 	of description where it can be correctly described as the instantiation
> 	of a computer program, then again the answer is, of course, yes, since
> 	we are the instantiations of any number of computer programs, and we
> 	can think.

No, I disagree: here he talks about the abstract machine.

> 
> 		"But could something think, understand, and so on *solely*
> 		 in virtue of being a computer with the right sort of program?
> 		 Could instantiating a program, the right program of course,
> 		 by itself be a sufficient condition of understanding?"
> 
> 	    This I think is the right question to ask, though it is usually
> 	confused with one of the earlier questions, and the answer to it is no.
> 
> 		"Why not?"
> 
> 	    Because the formal symbol manipulations themselves don't have
> 	any intentionality...
I agree.
> ...  If and when such machines come about, their causal powers will
> derive not from the substances they are made of, *but* *from* *their* *design*
> *and* *the* *programs* *that* *run* *in* *them*.  [ITALICS MINE]  And the way
> we will know they have those causal powers is by talking to them and listening
> carefully to what they have to say."

This is a fascinating argument, but incorrect I think.  Certainly in humans
much of their abilities come from their experience of the world, learning
and adaptation.  That is, much of their state and behaviour is a result of
their experience, not their genes.  I suspect any really interesting computer
will be similar.  Much of its behaviour will be a result not of its original
programming but of its subsequent experience of the world.  Unfortunately,
again, to describe the machines that result we must describe not only their
original programming but all their later possible experiences.  But they can
potentially be affected by anything in the universe.

The problem with the current state of computing, robotics and AI is that most
computers have little or no interaction with the real world.  They have no 
bodies.  Hence they can, to a very good approximation, be described by some
formal system.  Thus many people have a gut feeling that computers are
fundamentally different from humans.  In their guise as formal systems I
think this is indeed true.

I think there is also a practical lesson for AI here.  To get really 
interesting behaviour we need open machines which get a lot of experience of 
the real world.  Unfortunately we aren't going to be able to formalize or 
predict the result. But it will be interesting.

Sorry about the length of this, but the question seemed too fascinating to
leave alone.

John G. Cleary, Dept. Computer Science, The University of Calgary, 
2500 University Dr., N.W. Calgary, Alberta, CANADA T2N 1N4.
Ph. (403)220-6087
Usenet: ...{ubc-vision,ihnp4}!alberta!calgary!cleary
        ...nrl-css!calgary!cleary
CRNET (Canadian Research Net): cleary@calgary
ARPA:  cleary.calgary.ubc@csnet-relay

tedrick@ernie.BERKELEY.EDU (Tom Tedrick) (10/26/85)

Thanks very much for the responses about the mind-turing machine
problem. They were very interesting and educational. The most
interesting was from our distinguished mathematical colleague
from Amsterdam. I have the highest respect for the Amsterdam
mathematicians (having gone through some of the Lenstras' papers
and heard their talks, for example) so I will defer to his superior
knowledge, and only thank him for taking the time to reply.

I suspect some of the responses were from people not sufficiently
familiar with the subject to have an informed opinion, but most
were quite good. I didn't appreciate the responses that treated
the problem as a joke, or subjected me to personal ridicule.
For lack of time I am unable to respond to all the messages I received.

I should mention that I saw a film where Godel said something
to the effect that either mathematics was inconsistent, or
there was some mysterious, not formally explainable process
going on in the human mind. Anyway that was my understanding
of what he said ...

  -Tom

jwl@ucbvax.BERKELEY.EDU (James Wilbur Lewis) (10/27/85)

In article <859@whuxlm.UUCP> dim@whuxlm.UUCP (McCooey David I) writes:
>> In article <10702@ucbvax.ARPA> tedrick@ucbernie.UUCP (Tom Tedrick) writes:
>> >
>> >*IS THERE ANYONE THAT AGREES WITH ME THAT THE HUMAN MIND IS PROVABLY
>> > NOT EQUIVALENT TO A TURING MACHINE?*
>> 
>> Sure, I agree with you.  A Turing machine has unlimited memory.
>> _____________________________________________________
>> Matt		University	crawford@anl-mcs.arpa
>> Crawford	of Chicago	ihnp4!oddjob!matt
>
>Matt's reply goes along with my line of thought.  Consider the situation
>realistically:  The human mind has a finite number of neurons and therefore
>a finite number of states.  So I propose that the human mind is equivalent
>to a finite state machine, not a Turing machine.  (I agree with Tom, but
>for the opposite reasons).  Note that my comparison does not belittle the
>human mind at all.  Finite can still mean very, very large.  The operation
>of a finite state machine with a very large number of states is, for humans,
>indistinguishable from that of a Turing machine.

Not at all! I see two problems with your line of reasoning.  First, your
assertion that a finite number of neurons --> a finite state machine. This
assumes that neurons have discrete states; however, when you consider the
continuous, analog nature of activation thresholds, this argument breaks
down.

A second, *major* flaw is the notion that humans must rely on their brains 
alone for 'storage'.  Ever since the invention of writing, this hasn't been
true; literature can be viewed as a Turing machine tape for humans!

I stand by my claim that minds and Turing machines are equivalent.

-- Jim Lewis
   U.C. Berkeley
   ...!ucbvax!jwl      jwl@ucbernie.BERKELEY.EDU

gwyn@brl-tgr.ARPA (Doug Gwyn <gwyn>) (10/27/85)

> I stand by my claim that minds and Turing machines are equivalent.

Well, we'll never know, will we?  Turing machines are not
physically realizable.

PLEASE stop including "net.math" in postings on this topic.

ins_apmj@jhunix.UUCP (Patrick M Juola) (10/31/85)

In article <10642@ucbvax.ARPA> tedrick@ucbernie.UUCP (Tom Tedrick) writes:
>>Ie. maybe a Turing machine can simulate the brain, but ...
>
>OK, here is a question.
>
>My understanding is that Godel's incompleteness theorems prove
>(assuming the consistency of Arithmetic) that no Turing machine
>can possibly simulate the human mind.
>
>This is because for any particular Turing machine there are certain
>statements that the human mind can recognize as true (again with
>the consistency assumption), that the machine cannot recognize
>as true.
>
>Does anyone dispute this?
>
>   -Tom
>    tedrick@ucbernie.ARPA

Gack!!!

	Tom, leave the mathematics to the mathematicians! :-)  Your second
paragraph is true but does not imply the first.  Godel's Incompleteness 
Theorem states that for any effectively axiomatized formal system powerful
enough to contain arithmetic, either it is incomplete OR inconsistent.  So,
first of all, there
may be a Turing machine that will (correctly) recognize any true statement;
in fact, I will design one right now -- writeln ('this is a true statement.')
(send royalty checks to ins_apmj@jhunix :-)

	The problem, of course, is that this machine will say that ANY
statement is true, but Godel's theorem doesn't prevent this.

	Secondly, human minds (assuming they are powerful enough to do
arithmetic, which is a big assumption in a few cases :-) are equally subject
to Godel's theorem.  In other words, if there is a statement that the Turing
machine will not recognize as true but a human can, there is also one that
a T. machine WILL recognize and a human cannot.  Case in point:

	"Tom cannot consistently assert this statement."

	I think you will have no trouble recognizing this as a true statement,
but to state that this statement is true puts you in the same position as 
saying "Lawyers never tell the truth; I am a lawyer."

	I believe the philosopher Lucas espoused this argument, and Doug
Hofstadter trashed it quite thoroughly in _Godel, Escher, Bach_, which I
recommend if you can get through it.
 						Pat Juola
						JHU Math Dept.

--
Note : I am schizophrenic, and this was written by my other personality; I
       assume no responsibility for its content.