[net.philosophy] Mind as Turing Machine

mangoe@umcp-cs.UUCP (Charley Wingate) (10/27/85)

There are a number of reasons why I doubt that the mind is in fact like a
Turing machine.

As James Lewis pointed out, neurons are essentially analog devices; they
typically respond to input *levels* rather than to discrete stimuli.  If
this holds up in general, it means that the state space of the mind is
essentially continuous.  The brain also has a lot of analog inputs, posing
similar problems with respect to the "tape".

At the neural level, one typically sees random behavior.  Increasing
stimuli tend to increase or decrease the rate or probability of firing, but
the action is distinctly unlike the deterministic switching of a computer.
State transitions would therefore seem to be probabilistic.
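
To make the contrast concrete, here is a toy simulation -- every parameter
in it is invented purely for illustration, not taken from any neural data --
of a deterministic gate next to a "neuron" whose probability of firing rises
smoothly with the input level:

    # Toy contrast: deterministic switching vs. probabilistic firing.
    # All numbers (threshold, slope) are made up for this example.
    import math
    import random

    def logic_gate(level):
        # Digital behavior: fires if and only if the level crosses a threshold.
        return level >= 0.5

    def stochastic_neuron(level, slope=8.0, threshold=0.5):
        # Firing probability follows a smooth logistic curve in the input
        # level, so identical inputs can produce different outcomes.
        p_fire = 1.0 / (1.0 + math.exp(-slope * (level - threshold)))
        return random.random() < p_fire

    level = 0.55
    print(logic_gate(level))                                   # always True
    print(sum(stochastic_neuron(level) for _ in range(1000)))  # roughly 600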

Lastly, it's certainly clear that we cannot now model even moderately small
portions of the mind through computers.  I think it is reasonable to ask
that those who wish to assert the Turing-machine-ness of the mind show some
method by which the mind can be translated into an equivalent Turing
machine, even if this translation is computationally infeasible (which is
indeed likely).  Without such an algorithm, I think there is reasonable
cause not to accept the hypothesis.

Charley Wingate

greg@hwcs.UUCP (Greg Michaelson) (10/29/85)

> There are a number of reasons why I doubt that the mind is in fact like a
> Turing machine.
> 
> Lastly, it's certainly clear that we cannot now model even moderately small
> portions of the mind through computers.  I think it is reasonable to ask
> that those who wish to assert the Turing-machine-ness of the mind show some
> method by which the mind can be translated into an equivalent Turing
> machine, even if this translation is computationally infeasible (which is
> indeed likely).  Without such an algorithm, I think there is reasonable
> cause not to accept the hypothesis.
> 
> Charley Wingate

We cannot now do X THEREFORE we cannot ever do X
 where X = build a heavier than air flying machine
         = transmute one substance into another
         = model brain behaviour with a computer etc etc etc

mangoe@umcp-cs.UUCP (Charley Wingate) (10/31/85)

In article <667@hwcs.UUCP> greg@hwcs.UUCP (Greg Michaelson) writes:

>> Lastly, it's certainly clear that we cannot now model even moderately small
>> portions of the mind through computers.  I think it is reasonable to ask
>> that those who wish to assert the Turing-machine-ness of the mind show some
>> method by which the mind can be translated into an equivalent Turing
>> machine, even if this translation is computationally infeasible (which is
>> indeed likely).  Without such an algorithm, I think there is reasonable
>> cause not to accept the hypothesis.

>We cannot now do X THEREFORE we cannot ever do X
> where X = build a heavier than air flying machine
>         = transmute one substance into another
>         = model brain behaviour with a computer etc etc etc

Well, the correct analogy in the first case is

  X = Build a flying machine with flapping wings

and in the second case

  X = Transmute a substance using alchemy

which fit well with the third

  X = Model the brain with a von Neumann machine


I'm not arguing that we can't model the brain with a computer.  I'm just
saying that the efforts of AI researchers tend to indicate that such
computers aren't likely to be like today's machines.  In principle, for
instance, we could build something which had lots of little chips, one for
each neuron.  It's also important to note that this is not merely a
technological question: it is also a statement about the nature of existing
natural technology.  One ordinarily expects such hypotheses to make
experimental predictions which are then put to the test.  Instead, this one
is believed in with a kind of religious fervor.

It's perfectly fine to pursue this hypothesis further.  But there's no
reason for anyone to believe it to be a truth.

Charley Wingate

cccjohn@ucdavis.UUCP (John Carlson) (11/01/85)

*** LINES FOR SALE: 50 CENTS WORTH FEEDS FAMILY OF FOUR ***

In article <1996@umcp-cs.UUCP> Charley Wingate writes:

> Lastly, it's certainly clear that we cannot now model even moderately small
> portions of the mind through computers.  I think it is reasonable to ask
> that those who wish to assert the Turing-machine-ness of the mind show some
> method by which the mind can be translated into an equivalent Turing
> machine, even if this translation is computationally infeasible (which is
> indeed likely).  Without such an algorithm, I think there is reasonable
> cause not to accept the hypothesis.

Later, in article <2031@umcp-cs.UUCP> he adds:
> 
> I'm not arguing that we can't model the brain with a computer.  I'm just
> saying that the efforts of AI researchers tend to indicate that such
> computers aren't likely to be like today's machines.  In principle, for
> instance, we could build something which had lots of little chips, one for
> each neuron.  It's also important to note that this is not merely a
> technological question: it is also a statement about the nature of existing
> natural technology.  One ordinarily expects such hypotheses to make
> experimental predictions which are then put to the test.  Instead, this one
> is believed in with a kind of religious fervor.
> 
> It's perfectly fine to pursue this hypothesis further.  But there's no
> reason for anyone to believe it to be a truth.

	1)  Assume you could design a Turing-like machine equivalent to
	    yourself.

	2)  Then you could comprehend all of this machine's actions,
	    because you would know all of its inputs and outputs.

	3)  Then you could comprehend all of your actions.

	4)  Knowing all of the machine's inputs and outputs is
	    equivalent to knowing the universe.

	5)  Then you could comprehend the whole universe.

	6)  I conclude that it would be easiest to model a human
	    being with an analog machine.

	7)  Try replacing "you" with "we", "yourself" with
	    "to a human", and "your" with "a human's".



Here are some questions:

	1)  Are all inputs and outputs equivalent to the universe?

	2)  Can we make something we will never comprehend, that is,
	    a higher intelligence?



John Carlson

lambert@boring.UUCP (11/02/85)

In article <212@ucdavis.UUCP> cccjohn@ucdavis.UUCP (John Carlson) asks:

>	2)  Can we make something we will never comprehend, that is,
>	    a higher intelligence?

Parents can make something they will never comprehend, namely their
children.  Now I understand why human intelligence is ever-increasing.

(Sorry, couldn't resist that.)  Seriously, the incomprehensibility of a
human-made formal system does not imply it is "higher".
-- 

     Lambert Meertens
     ...!{seismo,okstate,garfield,decvax,philabs}!lambert@mcvax.UUCP
     CWI (Centre for Mathematics and Computer Science), Amsterdam

rlr@pyuxd.UUCP (Rich Rosen) (11/02/85)

>>We cannot now do X THEREFORE we cannot ever do X
>> where X = build a heavier than air flying machine
>>         = transmute one substance into another
>>         = model brain behaviour with a computer etc etc etc

> Well, the correct analogy in the first case is
>   X = Build a flying machine with flapping wings
> and in the second case
>   X = Transmute a substance using alchemy
> which fit well with the third
>   X = Model the brain with a von Neumann machine
> 
> I'm not arguing that we can't model the brain with a computer. [WINGATE]

The argument is with those who insist that we cannot model the brain
with a machine at all.
-- 
"Mrs. Peel, we're needed..."			Rich Rosen 	ihnp4!pyuxd!rlr	

jbuck@epicen.UUCP (Joe Buck) (11/03/85)

> From: cccjohn@ucdavis.UUCP (John Carlson)
> 	1)  Assume you could design a Turing-like machine equivalent to
> 	    yourself.
> 	2)  Then you could comprehend all of this machine's actions,
> 	    because you would know all of its inputs and outputs.
> 	3)  Then you could comprehend all of your actions.

Statement 2), even when applied to a much simpler system than a person
(such as a theory of the natural numbers 0, 1, 2, ...), is what Gödel
disproved.  That is, even though we write down exactly what the rules of
arithmetic are, there are infinitely many statements whose truth we cannot
determine.  This is a common fallacy made by people who argue against
machine intelligence: that knowing the inputs and the rules of a machine,
you understand it completely and it can never surprise you.  This just isn't
so.
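
A concrete example (my own, standard in the folklore, and not part of
Gödel's proof itself): the Collatz rule.  The whole "machine" is a few
lines long and its rules are completely known, yet whether every starting
value eventually reaches 1 is still an open problem.

    # The Collatz rule: complete knowledge of the rules, surprising behavior.
    def collatz_steps(n):
        steps = 0
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
        return steps

    print(collatz_steps(27))  # 111 -- try predicting that from the rules alone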

If you personally don't suffer from this limitation Gödel discovered (as
some people like to argue), I have a few computer programs I'd like to
have you debug. :-)
-- 
Joe Buck				|  Entropic Processing, Inc.
UUCP: {ucbvax,ihnp4}!dual!epicen!jbuck  |  10011 N. Foothill Blvd.
ARPA: dual!epicen!jbuck@BERKELEY.ARPA   |  Cupertino, CA 95014

greg@hwcs.UUCP (Greg Michaelson) (11/04/85)

> In article <667@hwcs.UUCP> greg@hwcs.UUCP (Greg Michaelson) writes:
> 
> >> Lastly, it's certainly clear that we cannot now model even moderately small
> >> portions of the mind through computers.  I think it is reasonable to ask
> >> that those who wish to assert the Turing-machine-ness of the mind show some
> >> method by which the mind can be translated into an equivalent Turing
> >> machine, even if this translation is computationally infeasible (which is
> >> indeed likely).  Without such an algorithm, I think there is reasonable
> >> cause not to accept the hypothesis.
> 
> >We cannot now do X THEREFORE we cannot ever do X
> > where X = build a heavier than air flying machine
> >         = transmute one substance into another
> >         = model brain behaviour with a computer etc etc etc
> 
> Well, the correct analogy in the first case is
>   X = Build a flying machine with flapping wings

Have you not seen the flying elastic-powered plastic pigeons with flapping
wings?

> and in the second case
>   X = Transmute a substance using alchemy
> which fit well with the third
>   X = Model the brain with a von Neumann machine

So von Neumann technology = alchemy? Using current chemical/physical theory
it can be proved that alchemical techniques cannot transmute substances.  Can
you provide an equivalent proof that von Neumann machines cannot be used to
model the (admittedly vast) finite state machine inside human skulls?

The form of the argument is fallacious.  I put it in schematic form to try
to make this apparent.

> I'm not arguing that we can't model the brain with a computer.  I'm just
> saying that the efforts of AI researchers tend to indicate that such
> computers aren't likely to be like today's machines.  In principle, for
> instance, we could build something which had lots of little chips, one for
> each neuron.  It's also important to note that this is not merely a
> technological question: it is also a statement about the nature of existing
> natural technology. 

Why should 'natural technology' have any relevance for technology in general?

> One ordinarily expects such hypotheses to make
> experimental predictions which are then put to the test.  Instead, this one
> is believed in with a kind of religious fervor.

It is actually religion which is affronted by the suggestion that humans are
no more than protoplasmic automata.  Just because people in AI make loony
claims does not mean that the computer simulation of human behaviour is
impossible.

> It's perfectly fine to pursue this hypothesis further.  But there's no
> reason for anyone to believe it to be a truth.

How about some hypothesis to show that it can't be done?
Men on the moon? Nonsense! Travel underwater? Balderdash! Destroy a city at
the push of a button? Gad sir!

mangoe@umcp-cs.UUCP (Charley Wingate) (11/06/85)

In article <2012@pyuxd.UUCP> rlr@pyuxd.UUCP (Rich Rosen) writes:

>> I'm not arguing that we can't model the brain with a computer. [WINGATE]

>The argument is with those who insist that we cannot model the brain
>with a machine at all.

Is it?  Or isn't it really an argument over the kinds of machines which
offer the hope of success?

Charley Wingate   umcp-cs!mangoe

mangoe@umcp-cs.UUCP (Charley Wingate) (11/07/85)

In article <677@hwcs.UUCP> greg@hwcs.UUCP (Greg Michaelson) writes:

>> Well, the correct analogy in the first case is
>>   X = Build a flying machine with flapping wings

>Have you not seen the flying elastic-powered plastic pigeons with flapping
>wings?

Certainly, and those existed back before the Wright Bros. did their thing.
No sign of man-sized versions, though.  Anyway...

>> and in the second case
>>   X = Transmute a substance using alchemy
>> which fit well with the third
>>   X = Model the brain with a von Neumann machine

>So von Neumann technology = alchemy? Using current chemical/physical theory
>it can be proved that alchemical techniques cannot transmute substances.  Can
>you provide an equivalent proof that von Neumann machines cannot be used to
>model the (admittedly vast) finite state machine inside human skulls?

My point here was not that von Neumann machines CAN'T do it-- it's that
there's a strong possibility that the von Neumann architecture is simply the
wrong mindset from which to approach the problem, much as flapping wings and
alchemy were to their problems.  Too often the voice I hear from the AI-ists
is "V.N. (or Parallel, or whatever-your-favorite-variation) is the only way
we know to attack the problem, so we will assume that it is the correct
way."  The notion that the mind is a great state machine is, I would
contend, dangerously close to that sort of thinking.  It's conveniently
unfalsifiable, it's patently unmodelable as it stands (2**(10**10)
states!?!), and thus allows you to work indefinitely on the problem without
the inconvenience of being put to the test.  What I don't hear these people
saying, though, is "What are we going to do if it turns out NOT to be like a
giant state machine?"
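
To put a rough number on "patently unmodelable" (assuming, generously, a
mere 10**10 neurons, each just on or off -- a simplification made only for
this arithmetic):

    # How big is 2**(10**10)?  Count the decimal digits of the state count.
    import math

    digits = int(10**10 * math.log10(2)) + 1
    print(digits)  # 3010299957: about three billion digits just to WRITE
                   # down the number of states, never mind enumerating them.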

One of my professors the other night made the claim that everyone should be
a programmer, because that's the only way they are going to get what they
want done on a computer.  He persisted in an analogy between computer
programming and writing.  My personal opinion is that this is going to
achieve the same results as we commonly see with programmers writing
manuals; they supposedly know how to write, but they aren't really competent
to write effectively on any large scale.  But this is a side issue.  My
sociological comment on this is that it illustrates the sort of messianic
light which one commonly sees in the eyes of computer scientists these days.
Programming will change everyone's way of life.  AI will give us new
electronic brains.  It's in some respects similar to the situation at the
beginning of serious investigation into heavier-than-air manned flight;
plenty of people
thought it was possible, but almost without exception they were wrong about
how it would be brought to pass.

Charley Wingate

stark@sbcs.UUCP (Eugene Stark) (11/08/85)

> > Well, the correct analogy in the first case is
> >   X = Build a flying machine with flapping wings
> 
> Have you not seen the flying elastic-powered plastic pigeons with flapping
> wings?
> 

I recall recently seeing an article concerning a life-size (I can't remember
exactly, but I am reasonably certain >10ft wingspan) mechanical Pterodactyl
model, already demonstrated or to be demonstrated soon.  The mechanical
Pterodactyl propels itself in flight by *flapping its wings*.  If I remember
correctly, the model was developed by Paul MacCready & company (of Gossamer
{Condor, Albatross, ...} fame).

							Gene Stark

breuel@h-sc1.UUCP (thomas breuel) (11/09/85)

Below are the answers to two questions that 'biep@klipper.UUCP'
asked. Altogether, you can make many comparisons between the
brain and Turing machines, but such comparisons
will not tell you much about either theoretical or practical
limitations of the human brain.

						Thomas.

----------

** Why is time complexity not a useful measure for comparing
   a Turing machine with a real-life architecture?

Turing machines are very nice devices for theoretical considerations.
In a sense, they give the most believable and strict measure of
computational complexity.  For real-life architectures, the theoretical
benefits of a Turing machine are unimportant.  You can't, for example,
do accesses to data on a Turing machine in less than O(n), whereas
in real life, even on a serial architecture, you can do them
in essentially constant time.
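
A minimal sketch of the gap (my illustration; the sizes are arbitrary):
a Turing machine must walk its head cell by cell, while a random-access
machine indexes memory directly.

    # Simulated tape access: cost grows with the distance the head travels.
    def tape_read(tape, head, target):
        moves = 0
        while head != target:
            head += 1 if target > head else -1
            moves += 1
        return tape[head], moves

    tape = list(range(1000))
    print(tape_read(tape, head=0, target=999))  # (999, 999): O(n) head moves
    print(tape[999])                            # 999: one constant-time index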

** Can you define 'Turing equivalent'?

Something is Turing equivalent if it can simulate a Turing machine.
Since a Turing machine does not have a limit on the amount of
information that it can store, anything that can simulate a Turing
machine must likewise have no limit on the amount of information
that it can store.  The human mind probably has such a limit (judging
from its architecture).  Therefore, the human mind is probably not
Turing equivalent.
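
For concreteness, a minimal simulator in this spirit (the two-state machine
below is made up for the example).  A dictionary stands in for the unbounded
tape -- except that it is, of course, bounded by the memory of whatever runs
it, which is exactly the limit at issue.

    from collections import defaultdict

    def run_tm(rules, tape, state="A", head=0, max_steps=100):
        # The tape as a dict: cells default to 0 and are created on demand,
        # so the tape is "unbounded" -- up to the host machine's memory.
        cells = defaultdict(int, enumerate(tape))
        for _ in range(max_steps):
            if state == "HALT":
                break
            symbol, move, state = rules[(state, cells[head])]
            cells[head] = symbol
            head += move
        return [cells[i] for i in sorted(cells)]

    # Example machine: flip 1s to 0s moving right; on the first 0, write 1
    # and halt.
    rules = {
        ("A", 1): (0, +1, "A"),
        ("A", 0): (1, 0, "HALT"),
    }
    print(run_tm(rules, [1, 1, 1, 0]))  # [0, 0, 0, 1]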