YLIKOSKI@FINFUN.BITNET (06/29/88)
Date: Sat, 25 Jun 88 21:16 O
From: <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject: replicating the brain with a Turing machine
To: AILIST@AI.AI.MIT.EDU

In AIList Digest V7 #29, agate!garnet!weemba@presto.ig.com (Obnoxious
Math Grad Student) writes:

>In article <517@dcl-csvax.comp.lancs.ac.uk>, simon@comp (Simon Brooke) writes:
>>[...]
>>If all this is so, then it is possible to exactly reproduce the workings
>>of a human brain in a [Turing machine].
>
>Your argument was pretty slipshod.  I for one do not believe the above
>is even possible in principle.

Why?  You must, or at least should, have a basis for that opinion.

One possibility I can think of is the dualist position: we have a
spirit, but we do not know how to make a machine with one.

Any other Dualists out there?

Andy Ylikoski
briscoe-duke@YALE.ARPA (Duke Briscoe) (07/03/88)
From: Duke Briscoe <briscoe-duke@YALE.ARPA>
Date: Fri, 1 Jul 88 09:55 EDT
Subject: Re: replicating the brain with a Turing machine
To: AIList@AI.AI.MIT.EDU

>Date: Wed, 29 Jun 88 9:26:50 PDT
>From: jlevy.pa@Xerox.COM
>Subject: Re: AIList Digest V7 #46 replicating the brain with a
>         Turing machine
>
>Andy Ylikoski asks why you can't replicate the brain's exact functions
>with a Turing machine.  First off, the brain is not a single machine but
>a whole bunch of them.  Therefore "replacing it with a Turing machine"
>wouldn't get you there.

I think this is not a valid point, because a single Turing machine (TM)
can simulate the actions of a group of parallel TMs.

>Turing machines have an inherent limitation in that they are not
>reactive, i.e. they are unable to react to the environment directly.  On
>the other hand, the brain is in direct communication with a number of
>input devices (eyes, ears, nose, touch-sense, etc.), all of which are
>sending data at the same time.

TMs are usually used only as a theoretical tool.  If you were actually
going to implement one, you could give it a multi-track input tape, with
one track carrying an alphabet that represents sensory input sampled at
an appropriate rate.  Issues of real-time response are discussed below.

>An interesting question is whether the brain's software suffers from the
>Church-Rosser problem which is present in functional languages -
>basically, you cannot, in a functional language, see that a certain
>source of input is empty and later detect input on it.  It seems that
>this is not so, since we are able to close our eyes and later open them,
>seeing again.

In a functional program to simulate a brain, you are assuming that
closing your eyes equates to closing an input stream, while in fact the
optic nerves continue sending information even when the eyes are closed.
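The claim that one sequential TM can simulate a group of parallel TMs rests on interleaving: a single interpreter advances each simulated machine by one step per round. A minimal sketch of that interleaving idea, using a toy machine encoding of my own (dictionaries of transitions) rather than any formal construction from the posts above:

```python
# A single sequential interpreter that interleaves the steps of several
# independent Turing machines in round-robin order, illustrating why
# "many machines" adds no computational power over one.  The machine
# encoding here is an assumption of mine, chosen for brevity.

BLANK = '_'

def step(m):
    """Advance one machine by a single step; return False once halted."""
    if m['state'] == 'halt':
        return False
    symbol = m['tape'].get(m['head'], BLANK)
    new_state, write, move = m['delta'][(m['state'], symbol)]
    m['tape'][m['head']] = write
    m['head'] += 1 if move == 'R' else -1
    m['state'] = new_state
    return True

def run_interleaved(machines, max_rounds=1000):
    """One sequential loop simulating all machines 'in parallel'."""
    for _ in range(max_rounds):
        progressed = [step(m) for m in machines]  # one step each per round
        if not any(progressed):
            break
    return machines

# Two toy machines: each writes three 1s on its own tape and halts.
def make_writer():
    delta = {('q0', BLANK): ('q1', '1', 'R'),
             ('q1', BLANK): ('q2', '1', 'R'),
             ('q2', BLANK): ('halt', '1', 'R')}
    return {'state': 'q0', 'head': 0, 'tape': {}, 'delta': delta}

if __name__ == '__main__':
    for m in run_interleaved([make_writer(), make_writer()]):
        print(''.join(m['tape'][i] for i in sorted(m['tape'])))
```

The real construction would encode all the simulated tapes on one tape of the simulating machine, but the round-robin loop is the essential point.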
Even though I have just argued that the points above are invalid, I am
still not sure that brain functions can be theoretically modelled by a
TM.  TMs operate in discrete steps, while material objects act in
continuous dimensions of time and space (as far as we know; otherwise
perhaps the universe itself is a giant, parallel, Turing-equivalent
computer).  Assuming reality is continuous, a TM model might closely
approximate something material for some period of time, but would
eventually diverge from it.

There is also the problem that any physical implementation of a TM
would suffer from unavoidable bit errors, which would invalidate its
exact correspondence to the abstract TM.  However, physical
implementations of computers, even ones built from non-organic
materials, should still in theory be capable of the same computing
powers as organic brains.  There just seem to be limitations in using a
restricted TM model to prove things about the functions brains can
compute.

Maybe an expanded TM model is needed which takes the physical
properties of space-time into account.  Or perhaps space-time is
discrete at some level we have not yet detected, in which case the
current plain TM model would be adequate.  After all, electric charges
seem to be discrete.

-------
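The divergence point can be shown numerically. The following is a toy illustration of my own (forward-Euler integration of the continuous system dx/dt = x, with step sizes chosen arbitrarily), not anything from the posts above: a discrete-step simulation tracks the continuous solution closely at first, and a finer step tracks it longer, but neither is ever exact.

```python
# Discrete-step simulation of the continuous system dx/dt = x, whose
# exact solution is x(t) = x0 * e^t.  The step sizes h are arbitrary
# choices for illustration; the point is only that discretization
# error is always present and shrinks, but never vanishes, as h -> 0.
import math

def euler(x0, h, n_steps):
    """Forward-Euler: n_steps discrete updates of size h."""
    x = x0
    for _ in range(n_steps):
        x += h * x  # one discrete approximation of continuous growth
    return x

exact = math.exp(5.0)            # continuous solution at t = 5
coarse = euler(1.0, 0.1, 50)     # 50 steps of 0.1  -> t = 5
fine = euler(1.0, 0.001, 5000)   # 5000 steps of 0.001 -> t = 5

print(abs(exact - coarse) / exact)  # relative error, roughly 0.21
print(abs(exact - fine) / exact)    # much smaller, but still nonzero
```

A real physical system is of course far messier than dx/dt = x, but the moral is the same: a discrete model approximates a continuous one only up to an error that accumulates over time.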