[net.philosophy] Humans and Turing Machines

ellis@spar.UUCP (Michael Ellis) (10/21/85)

>> I don't believe human beings are deterministic. I also don't accept the
>> laws of physics as absolute. I accept them as an absolutely brilliant
>> model but not as complete truth. I don't accept the notion that the
>> human being is just a very complex machine.
>
>These, however, are *assumptions*.  They are not *proven* by anybody I
>am aware of.

     Electrons, atoms, molecules, biochemical processes, etc., are not
     causally deterministic mechanisms, at least if the advances in
     knowledge about natural phenomena since 1930 are allowable as evidence.
        
     Therefore, the idea that "our behavior might be deterministic" is
     profoundly unscientific.
     
>By the way, I'd be interested in knowing *why* you "don't accept the
>notion that the human being is just a very complex machine." Do you also
>"not accept" the notion that "the human being is just a primate", or
>that "the human being is just a mammal"?  How about "insects are just
>complex machines"?  "Reptiles are just complex machines"?  "Mammals are
>just complex machines"?  "The (other) primates are just complex
>machines"?
>
>In other words, just what *is* "machine-like" and what is not, and why
>do you draw the dividing line where you do?  I hope you don't think I'm
>being nasty here, I'd really like to know.  I myself don't see any
>definite boundaries here to point to as the reason for definitely
>segregating humans from "machine-like things" (that is, things that
>"merely" follow the "laws of physics").

     The word `machine' is used with many meanings, of course (even
     Searle refers to the brain as a machine that causes mind).
     I do not use the word `machine' this way.

     To many people, machines are things like those our technology has
     produced -- collections of simple inert objects interconnected using
     simple causal relationships, hierarchically constructed so as to
     produce specific effects in response to associated inputs. The
     machine's goodness is usually related to how precisely it
     responds to control inputs. Energy efficiency, reliability, modularity
     of design, and so on, are other qualities a `good' machine should possess.

     To my knowledge, animals have some of these qualities. However, their
     designs incorporate many unusual features -- they are autonomous, they
     have minds, feelings, and desires, they anticipate events yet to occur,
     they rely on many nonhierarchical design principles (feedback loops,
     interactions between different functional levels), they exist `for
     themselves', they do not lend themselves to understanding in causal
     terms. They are profoundly indeterministic. They evolve.

>> *IS THERE ANYONE THAT AGREES WITH ME THAT THE HUMAN MIND IS PROVABLY
>>  NOT EQUIVALENT TO A TURING MACHINE?*
>
>It would help if you said what proof you are talking about.  If you mean
>"Is Godel's incompleteness theorem such a proof?", the answer is
>"definitely not".
>
>If anybody *does* agree with you that the human mind is *PROVABLY* "more
>powerful than" a general recursive formal system, I'd be interested in
>hearing what they think the proof is.  (In my opinion, God Himself is no
>more powerful than a general recursive formal system :-)
 
    I agree that Lucas's `Goedelization' argument is not very convincing
    in support of the notion that human rationality surpasses that of
    Turing machines. After all, formal logic is something at which
    our digital computers show great promise.

    Where human thinking differs from machine thinking is in its
    fuzzy, holistic, massively parallel characteristics. We not
    only analyze the world in ways that resemble the logical symbolic
    nature of the Turing machine, we also directly experience the world
    without any thinking whatsoever.
    
    Can mental processes really be characterized by a finite number of
    discrete values, transformed from one frame to the next in a recursive
    deterministic way? Who knows? In the past, science has seen us in terms
    of the technology of the age -- as pulleys and gears, then as analog
    electronics, then in terms of stimulus/response and reinforcers, and
    now Turing machines.
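    [An aside: the "finite number of discrete values, transformed from one
    frame to the next in a recursive deterministic way" picture is precisely
    the formal Turing machine. A minimal sketch in Python -- the one-rule
    table below is a made-up example, not anyone's model of mind -- shows
    how spare the formal model is:]

```python
# Minimal Turing machine simulator: a finite table of rules maps
# (state, symbol) -> (new symbol, head move, new state).
# The rule table below is a hypothetical example: a machine that
# flips 0s to 1s until it reads a blank, then halts.

def run_tm(rules, tape, state="start", halt="halt", blank="_", steps=1000):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    pos = 0
    for _ in range(steps):
        if state == halt:
            break
        symbol = tape.get(pos, blank)
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

rules = {
    ("start", "0"): ("1", "R", "start"),  # flip a 0, move right
    ("start", "1"): ("1", "R", "start"),  # leave a 1 alone, move right
    ("start", "_"): ("_", "R", "halt"),   # hit a blank: halt
}

print(run_tm(rules, "0010"))  # -> 1111_
```

    [Everything the formal model can do reduces to repeated application of
    one such lookup-and-move step; the open question is whether mind does.]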

    These different models have all helped science to progressively add to
    our knowledge, but I doubt that we have come anywhere close to a true
    understanding of ourselves. Basically, we do not have enough of a grasp
    of what mind is to rigorously compare its powers with those of a Turing
    machine.

    Personally, I do not believe that our current models yet possess enough
    descriptive power to explain the qualities of living consciousness. For
    one thing, any reductionistic model that depends on classical causal
    notions is already too weak -- such models cannot even explain the
    workings of simple electrons!

-michael

rlr@pyuxd.UUCP (Rich Rosen) (10/23/85)

>>> I don't believe human beings are deterministic. I also don't accept the
>>> laws of physics as absolute. I accept them as an absolutely brilliant
>>> model but not as complete truth. I don't accept the notion that the
>>> human being is just a very complex machine.  [TEDRICK???]

>>These, however, are *assumptions*.  They are not *proven* by anybody I
>>am aware of.  [?????]

>      Electrons, atoms, molecules, biochemical processes, etc., are not
>      causally deterministic mechanisms, at least if the advances in
>      knowledge about natural phenomena since 1930 are allowable as evidence.
>      Therefore, the idea that "our behavior might be deterministic" is
>      profoundly unscientific. [ELLIS]

Michael (as always) seems to have an odd idea about what is meant by
"profoundly unscientific".  Lately it seems ANYTHING at odds with his
pet beliefs (of course) fits into that category.  However, what Michael
tries to carry through here is, in fact, the real set of unfounded
assumptions.

>>By the way, I'd be interested in knowing *why* you "don't accept the
>>notion that the human being is just a very complex machine."  ...  [OR]
>>that "the human being is just a mammal"?  How about "insects are just
>>complex machines"? ...
>>In other words, just what *is* "machine-like" and what is not, and why
>>do you draw the dividing line where you do?  I myself don't see any ...
>>reason for segregating humans from "machine-like things" (that is, things
>>that "merely" follow the "laws of physics").

>      The word `machine' is used with many meanings, of course (even
>      Searle refers to the brain as a machine that causes mind).
>      I do not use the word `machine' this way.

"Machine" (unlike certain other terms in vogue in this newsgroup) is a word
with a variety of definitions.  It has come to mean either a human-made
artifact performing a mechanical function, or ANY entity mechanistically
operating in some ordered manner.  Obviously humans are not machines in the
former sense.
(Unless you insist that sexual reproduction allows us to "make" new human
beings, thus making each of us a "human-made artifact...")  Given that (with
this specific exception that I think we can all agree is NOT a common use of
"machine" when referring to human beings or their bodily organs) the former
definition obviously does not apply to the question at hand, we must be
talking about the other.  Unless you are making an arbitrary distinction which
says "a machine is anything that does all these things as long as it's not
a living thing, especially a human being".

>      To many people, machines are things like those our technology has
>      produced -- collections of simple inert objects interconnected using
>      simple causal relationships, hierarchically constructed so as to
>      produce specific effects in response to associated inputs. The
>      machine's goodness is usually related to how precisely it
>      responds to control inputs. Energy efficiency, reliability, modularity
>      of design, and so on, are other qualities a `good' machine should possess.

Again, Michael works from assumptions here.  No one has claimed that machines
MUST be "designed" to produce specific effects, or "hierarchically
constructed" for a purpose.  Nor do the qualities of "simple" or "inert"
necessarily hold in the definition.  Michael is introducing those words
(probably) to evoke a "cold" quality to the notion of machine which it
does not necessarily have (except in his own preconceptions and anti-technology
bias).

>      To my knowledge, animals have some of these qualities. However, their
>      designs incorporate many unusual features -- they are autonomous, they
>      have minds, feelings, and desires, they anticipate events yet to occur,
>      they rely on many nonhierarchical design principles (feedback loops,
>      interactions between different functional levels), they exist `for
>      themselves', they do not lend themselves to understanding in causal
>      terms. They are profoundly indeterministic. They evolve.

Two sets of things wrong with this paragraph.  The first is the bogus
distinction he makes when he asserts that "their designs incorporate many
unusual (??) features" which he lists above.  Where is it written that these
things do not apply to the definition of machine?  Does it say "machines
do not anticipate events", or "machines cannot use non-hierarchical design
principles"?  The second is the assertion that (because of all this?) they
"do not lend themselves to understanding in causal terms" (???) and "are
profoundly indeterministic" (where that comes from is beyond me).  Don't you
see that Michael is DELIBERATELY defining machine in such a way so that it
MUST exclude things he simply doesn't want to call machines, while at the
same time stating that machines are made up of "inert" (LIFEless???)
elements, so as to bestow a coldness to the notion?  If someone tried
to define human being in a similar way, stating that "humans are members of
the species homo sapiens who believe in Christian principles, have normal (?)
light skin...", I'm sure you would say "Whoa, pardner, hold on a minute!"

>     Where human thinking differs from machine thinking is in its
>     fuzzy, holistic, massively parallel characteristics. We not
>     only analyze the world in ways that resemble the logical symbolic
>     nature of the Turing machine, we also directly experience the world
>     without any thinking whatsoever.
    
It NOW sounds like you are defining "thinking" in much the same way that
you define "machine" above, so as to exclude a priori anything that (you
claim!) a "machine" cannot (?) do.  Hofstadter, in his rebuttal of Searle's
"Minds, Brains, and Programs" (found in "The Mind's I"), makes the point that
seems very obvious to me:  Who the hell says that such things as parallel
processing, fuzzy reasoning, etc. CANNOT (ipso facto) be incorporated in
a mechanistic entity of human design?  Searle himself says that all these
things have indeed been incorporated in a machine:  the human brain.  But
Ellis insists on NOT calling it a machine.  Let's make up a word, calling
a machine that's not a machine (by Ellis) by the name "globnitz".  Who is
to say that we cannot design a "globnitz" that will have all the necessary
characteristics to perform all these functions?
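[To put a point on it: "fuzzy reasoning" itself is routinely expressed as a
perfectly deterministic program.  A small sketch in Python, with a made-up
membership function (the 60-80 degree ramp is purely illustrative):]

```python
# Fuzzy reasoning implemented by an ordinary deterministic program:
# truth values are degrees in [0, 1] rather than strictly 0 or 1.
# The membership function below is a hypothetical example.

def warm(temp_f):
    """Degree to which a temperature counts as 'warm' (made-up ramp)."""
    if temp_f <= 60:
        return 0.0
    if temp_f >= 80:
        return 1.0
    return (temp_f - 60) / 20.0   # linear ramp between 60F and 80F

# Standard fuzzy connectives (Zadeh operators).
def fuzzy_and(a, b): return min(a, b)
def fuzzy_or(a, b):  return max(a, b)
def fuzzy_not(a):    return 1.0 - a

print(warm(70))                  # 0.5: half-way to "warm"
print(fuzzy_and(warm(70), 0.8))  # 0.5
```

[So a "globnitz" that reasons with degrees of truth is no harder to specify
than one that reasons with crisp true/false values.]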

>     Can mental processes really be characterized by a finite number of
>     discrete values, transformed from one frame to the next in a recursive
>     deterministic way? Who knows?

And, one could say, who cares?  Who says that's the ONLY methodology available
to us for the design of such machines (uh, excuse me, "globnitzes")?

> 	In the past, science has seen us in terms
>     of the technology of the age -- as pulleys and gears, then as analog
>     electronics, then in terms of stimulus/response and reinforcers, and
>     now Turing machines.

What has been always will be, is that your argument?  You are specifically
engaging in a ruse here, claiming a very limited definition of machine and
saying that our brains' processes could never fit into *that*, THUS we
can never design something that emulates the processes of the human brain.
But whether we can use the name machine is actually irrelevant!  The question
is can we design such a THING (globnitz)?  There is no reason to assume, as
you and others do, that we cannot.

>     Personally, I do not believe that our current models yet possess enough
>     descriptive power to explain the qualities of living consciousness. For
>     one thing, any reductionistic model that depends on classical causal
>     notions is already too weak -- such models cannot even explain the
>     workings of simple electrons!

Oh, give us a break, Michael.  This tired "conclusion first, hypothesis
second" dogma is redundant.  Your insistence that the human brain does not
fit into the scope of the rest of the universe (so as to make your desired
conclusions "reality") is presumptive in the extreme.
-- 
"to be nobody but yourself in a world which is doing its best night and day
 to make you like everybody else means to fight the hardest battle any human
 being can fight and never stop fighting."  - e. e. cummings
	Rich Rosen	ihnp4!pyuxd!rlr

franka@mmintl.UUCP (Frank Adams) (10/24/85)

In article <608@spar.UUCP> ellis@spar.UUCP (Michael Ellis) writes:
>     To many people, machines are things like those our technology has
>     produced -- collections of simple inert objects interconnected using
>     simple causal relationships, hierarchically constructed so as to
>     produce specific effects in response to associated inputs. The
>     machine's goodness is usually related to how precisely it
>     responds to control inputs.

I think you are falling victim to a common misconception here.  Simple inert
objects interconnected using simple causal relationships will only produce
simple machines.  No one (recently) has claimed that the human mind can be
simulated with a simple machine!

However, I would deny that a large computer running a complex program is
"interconnected using simple causal relationships".

Frank Adams                           ihpn4!philabs!pwa-b!mmintl!franka
Multimate International    52 Oakland Ave North    E. Hartford, CT 06108