[comp.ai.philosophy] Introspection

mpdevine@watdragon.waterloo.edu (Michel P. Devine) (02/05/91)

In article <3035@yarra-glen.aaii.oz.au> dnk@yarra-glen.aaii.oz.au (David Kinny) writes:
>What is it that leads you to believe that introspection is one of
>the greatest sources of information about our cognitive processes ?
>I would regard it as a most unreliable source of information about the
>true nature of those processes.  

The objective study of aspects of cognition, while important and possibly 
beneficial, differs substantially from experience itself, which is by necessity 
purely subjective.  The attempt to separate the observed phenomena from
the observer works well in domains where pure deterministic reasoning is
applicable, but my experience is that people and even animals are 
infuriatingly unpredictable, except at the coarsest level of functioning
(for example, living beings run away from danger), and therefore the 
black box or machine model of life may be inappropriate.

I should preface my comments on this important topic with a little background.
I once held with the AI clan with the fanaticism of the hardcore believer,
a "faith" born of despair at the failure of philosophy and religion to
enlighten me on the true nature of the universe.  Viewing the world as a 
Great Machine is comfortable for the cognoscenti and leads to a feeling
of vast power and (usually) superiority over less illuminated and gullible
masses.  My fall from grace came about with my attempt to make music 
intelligible to computers.  After consulting scores of so-called music
experts, it became increasingly clear that, in contradiction with their claims, 
they were most certainly *not* in the possession of clear-cut rules for music
appreciation.  The resulting angst built over the span of years, a gnawing
suspicion, then a fear, finally terror, that I was WRONG in a basic way. 
I should be careful to point out that I do not repudiate the machine
model in toto, but that I consider it incomplete.  It is true that it is 
possible to assign specialized functions to parts of the brain, but it is 
pure speculation to conclude that we are purely mechanistic.  The destruction 
of my most dearly held beliefs has been cataclysmic but necessary (I should 
note that the process is still ongoing).

Perhaps the collapse of the hierarchy of so-called "Levels of Being" is 
at fault; I think the major weakness of AI research is essentially an 
optimistic and enthusiastic over-simplification of nature.  
I will very briefly present a simple breakdown of the aforementioned LoB: 

1. Material 
2. Life
3. Consciousness
4. Self-awareness

Most "classical" philosophical systems make use of some such structure to
explain nature.  Plants are typically thought to reside at level 2, animals
at level 3 and humans at 4, although it may be argued that dolphins may
also qualify (how would anyone verify or discount this?).  Self-awareness
amounts to separating the program from the programmer: we can change 
our opinions, our beliefs at any time.

The levels represent qualitative differences in thinking ability or 
"intelligence".  For example, the ability of plants to orient their 
photoreceptors with the sun seems to be the action of some kind of
thought, although not necessarily consciousness.  Animals seem to be free
of most neuroses by virtue of their lack of self-consciousness, living
entirely in the present, but without the ability to forecast the effect
of their actions.  My point is that the only reality to which we have direct
access is our own, and even then from a very limited vantage point, namely
we may come to know ourselves, but we shall never truly know anyone else.
It is our predilection as scientists to search for all-encompassing
paradigms, explanations and simplifications.  Assuming that we are nothing 
more than ambulatory formal systems leads to logical contradiction as pointed
out in a previous posting.

It seems therefore only rational to conclude that our basic assumption is 
incorrect, or at least dubious.  So, what is the truth?  I don't know, but
I am not satisfied with the purely mechanical answer.  It does not address
the most mysterious aspects of intelligence, namely intuition, creativity
and so on, and therefore it is naive, and in fact degrading.  Who really
wants to believe that he is no more than a complex automaton?  No AI
researcher truly applies such reasoning to himself, his family or co-workers
which thereby indicates a split between belief and action that is most telling.

>It seems to me that introspection produces subjective and unreliable
>information about a tiny subset of our higher level cognitive processes,
>If we could understand the workings of a lizard mind it would be a
>major step towards understanding human cognitive processes.
>

I don't understand how you can relate the lizard mind to the human mind.
How can I know how lizards think, without becoming a lizard?  Certainly,
I can dissect lizards, make a list of parts and try to assemble one
from my kit, but is it possible that I have thereby misplaced the crucial 
ingredient "life"?  On a related note, isn't it interesting that our medical
experts pursue the understanding of life solely from studying dead material?
Perhaps that is why most hospitals are rotten places to go when one is sick...

I think that introspection provides subjective and crucial information about
a process most of us are almost studiously ignoring.  Introspection is the 
*only* way to know for certain what is really going on in your brain, 
whether it agrees with any given theory or not.  
Most of our activity is below our level of awareness, but it is possible 
(I am tempted to say "essential") to become *more* conscious, to acquire
more control.  

I should stress that I have barely skimmed the surface of objections to the 
AI viewpoint regarding its ultimate aim.  There exist many books that one
can consult, many techniques with which to experiment.  I recommend E.F.
Schumacher's "A Guide for the Perplexed" as a good starting point, since it
fleshes out my argument and transcends it. Ken Wilber's books are also
useful. 

Michel Devine
-- 
--------------------------------------------------------------------------------
mpdevine@watdragon.waterloo.{edu|csn}            (519) 884-7123 Michel P. Devine
mpdevine@watdragon.uwaterloo.ca				   CS Dept., U. Waterloo
{uunet|utzoo|decvax|utai}!watmath!watdragon!mpdevine       Waterloo, Ont. N2L3G1

erich@eecs.cs.pdx.edu (Erich Stefan Boleyn) (02/12/91)

mpdevine@watdragon.waterloo.edu (Michel P. Devine) writes:

>I should preface my comments on this important topic with a little background.
>I once held with the AI clan with the fanaticism of the hardcore believer,
>a "faith" born of despair at the failure of philosophy and religion to
>enlighten me on the true nature of the universe.  Viewing the world as a 
>Great Machine is comfortable for the cognoscenti and leads to a feeling
>of vast power and (usually) superiority over less illuminated and gullible
>masses.  My fall from grace came about with my attempt to make music 
>intelligible to computers.  After consulting scores of so-called music
>experts, it became increasingly clear that, in contradiction with their claims,
>they were most certainly *not* in the possession of clear-cut rules for music
>appreciation.  The resulting angst built over the span of years, a gnawing
>suspicion, then a fear, finally terror, that I was WRONG in a basic way. 
>I should be careful to point out that I do not repudiate the machine
>model in toto, but that I consider it incomplete.  It is true that it is 
>possible to assign specialized functions to parts of the brain, but it is 
>pure speculation to conclude that we are purely mechanistic.  The destruction 
>of my most dearly held beliefs has been cataclysmic but necessary (I should 
>note that the process is still ongoing).

   You seem quite bitter about the experience.  Why must a mechanistic
universe have no good explanation of emotions and intuition?  Just because
we don't understand the mechanisms does not mean that such an explanation does not exist.

   When I hear explanations that there must be "something more" to the
universe than a mechanism in operation, many seem to also hold an
undercurrent of revulsion to attributing all of the beauty of the world to
"just a machine".  Why must a mechanistic view be ugly?  The term "machine"
has long been synonymous with "unemotional" and "brittle", but it need
not be that way.  My view of the world is (like many other people's)
deeply related to the aesthetics involved.  It has been established that
some of the most efficient dynamically adaptive algorithms for complex
demands are quite simple.  Even neural networks (extremely primitive
compared to the real thing, mechanistic or not ;-) show some really
interesting adaptive capabilities.  My view of a mechanistic world is
extremely *rich*, full of sensuality and imagery of incredible complexity,
with feelings, life, and all the other amenities...  It could even allow
for a God if done right.
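
   To make "simple but adaptive" concrete, here is a throwaway sketch of my
own (in Python, purely illustrative, not anyone's published algorithm): a
single "neuron" that learns logical OR by doing nothing cleverer than nudging
three numbers up or down.

    import random

    def train_perceptron(samples, epochs=50, lr=0.1):
        # samples: list of ((x1, x2), target) with target 0 or 1
        w = [random.uniform(-1, 1) for _ in range(3)]   # two weights plus a bias
        for _ in range(epochs):
            for (x1, x2), target in samples:
                out = 1 if w[0]*x1 + w[1]*x2 + w[2] > 0 else 0
                err = target - out                      # -1, 0 or +1
                w[0] += lr * err * x1                   # this is the whole "learning rule"
                w[1] += lr * err * x2
                w[2] += lr * err
        return w

    # Four examples of logical OR are enough for it to settle on workable weights.
    or_samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    print(train_perceptron(or_samples))

Three additions and a comparison per example, and yet the thing adapts; scale
that intuition up and the "machine" stops looking brittle to me.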

   The point of science is more to find consistent generalized models than
to find some ephemeral "truth".  It is always a danger to call a model,
however accurate, the "truth" about the situation.

   *My* point is that, although you could be correct, of what value is
it for science as a whole (or even a subdiscipline like Cognitive Science)
to disregard its models totally...  science's tenet is to discover
empirical relationships.  Now, admittedly, there is a strong debate about
whether cognitive science fits the bill, so to speak, but can it hurt to
try?

>       ...I think the major weakness of AI research is essentially an 
>optimistic and enthusiastic over-simplification of nature.  

   I would agree, but some of us *are* trying to address the problem.

   Cognitive Science started changing the questions asked and the methods
and knowledge used to address these questions...  and the newly emerging
field of Artificial Life asks different questions completely (I made
a posting about 4-6 months back about the problem of assumptions and
bad questions in the field of Artificial Intelligence).

>The levels represent qualitative differences in thinking ability or 
>"intelligence".  For example, the ability of plants to orient their 
>photoreceptors with the sun seems to be the action of some kind of
>thought, although not necessarily consciousness.

   Some extremely simple algorithms for self-adaptive and self-configuring
systems seem to be able to do this quite well.
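
   For instance (a toy of my own, in Python; the numbers and the two-sensor
setup are completely made up, not a model of any real plant), a dumb
hill-climbing loop that just turns toward whichever of two "photoreceptors"
reports the stronger light will track the source:

    def track_light(light_angle, steps=60, turn=0.1):
        heading = 0.0
        for _ in range(steps):
            # Two "photoreceptors", one offset a little to each side of the
            # current heading; each reports how closely it points at the light.
            left  = 1.0 / (1.0 + (light_angle - (heading + 0.2)) ** 2)
            right = 1.0 / (1.0 + (light_angle - (heading - 0.2)) ** 2)
            heading += turn if left > right else -turn  # turn toward the brighter side
        return heading

    print(track_light(1.3))   # ends up hovering within a step or two of 1.3

No planning, no representation of "sun" anywhere in it, and yet it orients.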

>                                               ...Animals seem to be free
>of most neuroses by virtue of their lack of self-consciousness, living
>entirely in the present, but without the ability to forecast the effect
>of their actions.

   This is still being worked on, I must admit ;-).

>                ...My point is that the only reality to which we have direct
>access is our own, and even then from a very limited vantage point, namely
>we may come to know ourselves, but we shall never truly know anyone else.

   Yeah?  So what.  Can you truly know what it's like to be physically
female (or even socially, that was weird enough)?  Believe me, I've thought
about this quite a bit...

   I'm told that many aspects of what I've discussed with some female
friends of mine are quite close (one even gave me a very funny look ;-).

>It is our predilection as scientists to search for all-encompassing
>paradigms, explanations and simplifications.  Assuming that we are nothing 
>more than ambulatory formal systems leads to logical contradiction as pointed
>out in a previous posting.

   This is not necessarily the case.  The current theory claims that our
"logic" resides on a different level than the formal system.  It may well
be that the Godel statement that will clog us is a set of patterns.  (It
is well known in medical circles that certain frequencies of strobing
light can send epileptics into convulsions, and I think that you could
engineer the right patterns or stimulus to knock a healthy person out too).
The right kind of shock to one's system can even kill us.  One could then
think of a Godel string as a "shock to the system", since how do we know
what form they would take?

>It seems therefore only rational to conclude that our basic assumption is 
>incorrect, or at least dubious.

   Dubious?  Agreed...  I think the current model has problems too, but that
does not mean it has no use...

>                              ...So, what is the truth?  I don't know, but
>I am not satisfied with the purely mechanical answer.  It does not address
>the most mysterious aspects of intelligence, namely intuition, creativity
>and so on, and therefore it is naive, and in fact degrading.

   How does it not address intuition and creativity?  There are some very
interesting models of creativity being worked on right now (I heard of
one being worked on by Hofstadter called a "slipnet"...  most interesting).
I don't know about this being automatically naive...  there have been
suggestions that questions about "intelligence" and "consciousness" are
naive in themselves (I admit it, I am one of the suggesters ;-).

   Degrading?  This sounds a bit like elitism...  why are we better than
the rocks and stars?  We're more interesting to ourselves, I'll grant you,
and we can do neat things, but they are beautiful too.

>                                                           ...Who really
>wants to believe that he is no more than a complex automaton?  No AI
>researcher truly applies such reasoning to himself, his family or co-workers
>which thereby indicates a split between belief and action that is most telling.

   I'm not sure what I believe...  but I fit in the model too...  many
people I know call me "intense" because I find everyone so fascinating,
and am constantly curious about them.  This is also the reason why
I have a tremendous fear of death.  I am going out on a limb here, but
in trying to be what I feel as an honest scientist, I feel one should
think critically about everything, and am therefore an agnostic.  (Atheism
in my view would be an assumption too)

>                 ...On a related note, isn't it interesting that our medical
>experts pursue the understanding of life solely from studying dead material?

   It is an unfortunate carryover from the past...

>Perhaps that is why most hospitals are rotten places to go when one is sick...

   Personally I think that most human technology is very slapstick...  with
little elegance...  and medicine is near the top of my list (make that
barbaric).

>I think that introspection provides subjective and crucial information about
>a process most of us are almost studiously ignoring.  Introspection is the 
>*only* way to know for certain what is really going on in your brain, 
>whether it agrees with any given theory or not.  

   Yes, but so many people have fundamentally disagreed about how the
human mind operates from introspection...  can we rely on their
interpretations for real information that is not just usable in a social
context?  I agree that introspection is useful, but one must be careful,
it may well be that our internal states that we perceive don't well
correspond to what's going on (assuming a mechanistic model, of course ;-),
as it seems a dangerous assumption that we can...  Look how far introspection
alone got us in the last 4000 years (and that's just the time during recorded
history), and it would be naive to assume that the people of those times were
stupid, or even that much less intelligent than ourselves.

   Erich

             "I haven't lost my mind; I know exactly where it is."
     / --  Erich Stefan Boleyn  -- \       --=> *Mad Genius wanna-be* <=--
    { Honorary Grad. Student (Math) }--> Internet E-mail: <erich@cs.pdx.edu>
     \  Portland State University  /  >%WARNING: INTERESTED AND EXCITABLE%<

erich@eecs.cs.pdx.edu (Erich Stefan Boleyn) (02/12/91)

   Oops!  I made a mistake on the part about being female...  (missed
reviewing it before posting, and misinterpreted his statement to boot...
what can I say, I'm embarrassed ;-).

erich@eecs.cs.pdx.edu (Erich Stefan Boleyn) writes:

>mpdevine@watdragon.waterloo.edu (Michel P. Devine) writes:

>>                ...My point is that the only reality to which we have direct
>>access is our own, and even then from a very limited vantage point, namely
>>we may come to know ourselves, but we shall never truly know anyone else.

   OK, I agree that we may only know ourselves directly.  One theory of
communication is that we can only hint at what lies underneath, and that
is all "communication" is.

   How well do we even know ourselves, though?  Sometimes it takes someone
who is not cluttered by our own internal pressures to tell us things that
we didn't consciously recognize...  and how complete is a model if one
purposely (although not necessarily intentionally) ignores certain often
quite significant aspects?  And I think that nobody has *no* illusions;
we simply cannot be that well informed.

   This does not imply that no correspondence exists at all, though, just that
it is tenuous.  There *does* seem to be some correspondence, and in
some cases the ability to think abstractly about what it's like to be
another being (like my example of thinking about what it would be like to
be physically female without ever being so myself, which, IMHO, seems to
have worked OK for a first approximation).

   Building a model of someone's mind does not necessitate knowing the
internal states by introspection, though, unless one is checking it for validity
along that route.  Some, like myself, may argue that this could even
cause confusion.

   Erich

             "I haven't lost my mind; I know exactly where it is."
     / --  Erich Stefan Boleyn  -- \       --=> *Mad Genius wanna-be* <=--
    { Honorary Grad. Student (Math) }--> Internet E-mail: <erich@cs.pdx.edu>
     \  Portland State University  /  >%WARNING: INTERESTED AND EXCITABLE%<

mpdevine@watdragon.waterloo.edu (Michel P. Devine) (02/14/91)

In article <1574@pdxgate.UUCP> erich@eecs.cs.pdx.edu (Erich Stefan Boleyn) writes:
>Why must a mechanistic universe have no good explanation of emotions 
>and intuition?  

A very good question.  I think the problem lies in assigning meanings.  I 
don't want to give a full exposé on this complex matter, but I will briefly
outline my point.  A "sign" is something that has no meaning in and of
itself, but that refers to something else.  For example, our mathematical
language consists entirely of signs; every part is arbitrary and ordered
according to agreed upon rules.  So, it is not possible to refer to something
which has no sign, or for which no sign can be constructed.  In that sense,
our modeling tools, based on math, are necessarily a "closed system".  
The assignment of the signs to something outside the system (for example,
the intuitive idea of the number 2) is a mysterious process happening
within the mind of the observer, and therefore hard to analyse, let alone
detect.  If the universe is mechanistic, then there must be some description
constructed out of signs that captures it, in all its glory.
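
A trivial illustration of what I mean by a closed system of signs (a toy
rewriting game I have just made up, written in Python, standing in for
"mathematics"; nothing about it is standard):

    # Three rewrite rules that shuffle the signs 'a' and 'b' around.  The
    # derivation is perfectly rule-governed, yet nothing in the system says
    # what 'a' or 'b' *mean*; that assignment happens only in the observer.
    rules = [("ab", "ba"), ("aa", "b"), ("bbb", "a")]

    def derive(string, steps=5):
        history = [string]
        for _ in range(steps):
            for old, new in rules:
                if old in string:
                    string = string.replace(old, new, 1)   # apply the first rule that fits
                    break
            history.append(string)
        return history

    print(derive("aabab"))

Every line of the derivation is legitimate, and every line is meaningless
until something outside the game decides what the marks stand for.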

On the other hand, there are "symbols" which are both signs (pointers to)
and are themselves greater than signs.  In psychological terms, the symbols 
have meaning but the signs do not.  Symbols are things like "father", "water", 
"pain", "mathematician" which are not purely conceptual but much richer 
in content and association.  The problem with the mechanistic view, as I see it,
is that it precludes anything but an arbitrary assignment to the axioms
of nature, i.e., there are no intrinsic meanings since everything is
a denotation.  But what does it denote?  That is where the importance of
establishing a science of symbols comes in.

>   When I hear explanations that there must be "something more" to the
>universe than a mechanism in operation, many seem to also hold an
>undercurrent of revulsion to attributing all of the beauty of the world to
>"just a machine".  

I agree.  I too consider machines beautiful, but perhaps the perception
of beauty is due to something in me that is inherently non-mechanical. 

>   My view of a mechanistic world is
>extremely *rich*, full of sensuality and imagery of incredible complexity,
>with feelings, life, and all the other amenities...  It could even allow
>for a God if done right.

I don't understand that last statement.  Any God would still have to be
around, and if the universe is mechanical, why do we need it/Him?  And
where is he hiding?  I like your statement "if done right", because it
is familiar to me...although not satisfying since I prefer experience 
to argument.  

>   *My* point is that, although you could be correct, of what value is
>it for science as a whole (or even a subdiscipline like Cognitive Science)
>to disregard its models totally...  science's tenet is to discover
>empirical relationships.  Now, admittedly, there is a strong debate about
>whether cognitive science fits the bill, so to speak, but can it hurt to
>try?
>

Right. One cannot ditch all models because they are necessary.  However,
there *is* a scientific discipline that can answer our questions, but it
isn't CS.  According to Wilber (author of many  fascinating books) a
scientific fact is established by the following procedure:

1. Injunction.     A recipe or procedure is provided.  Following the 
		recipe should result in 
2. Experience.     The person looks at the result of following the 
		instructions, interprets, analyzes and forms a new
		personal world view.
3. Validation.     One consults a group of peers who have acquired 
		the knowledge to verify the truth.  Facts are based
		on agreement with established authority.

So, a mathematical theorem is a procedure by which I can acquire the same
mind-state as the author and thereby verify his claim; similarly for
the physical sciences.  With this very broad definition of the scientific
paradigm, one can attack the consciousness problem.  

Note that all three steps are essential.  To paraphrase Wilber, a person who 
has not acquired the skills necessary to make an informed statement ought to
be ignored.  This may sound harsh, and even non-conformist in an age where
the nation gobbles up the word of the inexpert and uninformed on a daily
basis (I'm thinking of actors and "personalities" on talk shows), but it seems
only reasonable.  By this definition of science, there are many sciences of
the mind that are extremely sophisticated, have a large body of theoretical
knowledge *and* groups of experts for validation.  It is entirely possible
that the Tibetans, the Indian Gurus and Zen Masters (not to mention the
Christian "mystics") have more to say about the mind than CS.  

I admit readily that I don't "buy" the mystic's viewpoint.  However, if I
insist on being scientific, then I *must* carefully consider their arguments,
try out their exercises and find out for myself, whether this is the prevalent
"scientific" (read, conventional) tactic or not.  Besides, I never understood
why a nation like Tibet never developed technology; perhaps they were
concentrating on something more "organic"?

>
>>The levels represent qualitative differences in thinking ability or 
>>"intelligence".  For example, the ability of plants to orient their 
>>photoreceptors with the sun seems to be the action of some kind of
>>thought, although not necessarily consciousness.
>
>   Some extremely simple algorithms for self-adaptive and self-configuring
>systems seem to be able to do this quite well.
>

Yes, but the pattern (algorithm) does not do justice to the plant.  We still
can't build plants, even though we can build systems that behave somewhat
like plants.

>The right kind of shock to one's system can even kill us.  One could then
>think of a Godel string as a "shock to the system", since how do we know
>what form they would take?
>

Sorry, I don't understand these statements relating Godel strings to 
our systems at all.

>
>   Degrading?  This sounds a bit like elitism...  why are we better than
>the rocks and stars?  We're more interesting to ourselves, I'll grant you,
>and we can do neat things, but they are beautiful too.
>

Absolutely. We are *both* better (I can do things they can't) and not better
(we're all the same).

>  I am going out on a limb here, but
>in trying to be what I feel as an honest scientist, I feel one should
>think critically about everything, and am therefore an agnostic.  (Atheism
>in my view would be an assumption too)

I disagree; agnosticism is a way of saying that I'm not willing to take 
a stand.  Most agnostics I know are actually atheists who, for social
reasons, prefer not to be singled out.  (I am not criticising since I
have spent a long time as an agnostic, and I may change yet, but I'm not
sure :-).  

>
>   Yes, but so many people have fundamentally disagreed about how the
>human mind operates from introspection...  can we rely on their
>interpretations for real information that is not just usable in a social
>context?  

No, let us not rely on anyone else.  Introspection is taken up for *personal*
discovery.  If something generic surfaces, we shall celebrate but chances
are that the result of introspection is to increase the disparity between
us and the people around us, as we gain more understanding and clarity.
Then, it is almost impossible to communicate directly; but that is not new.
 
Michel
-- 
--------------------------------------------------------------------------------
mpdevine@watdragon.waterloo.{edu|csn}            (519) 884-7123 Michel P. Devine
mpdevine@watdragon.uwaterloo.ca				   CS Dept., U. Waterloo
{uunet|utzoo|decvax|utai}!watmath!watdragon!mpdevine       Waterloo, Ont. N2L3G1

dailey@frith.uucp (Chris Dailey) (02/14/91)

In article <1574@pdxgate.UUCP> erich@eecs.cs.pdx.edu (Erich Stefan Boleyn) writes:
>mpdevine@watdragon.waterloo.edu (Michel P. Devine) writes:
[... much neat material by both authors deleted...]
>>                 ...On a related note, isn't it interesting that our medical
>>experts pursue the understanding of life solely from studying dead material?
>   It is an unfortunate carryover from the past...

I hope you gentlemen are not suggesting human vivisection ...  (Actually,
most any other type of vivisection would disgust me, as well.)

>>Perhaps that is why most hospitals are rotten places to go when one is sick...
>   Personally I think that most human technology is very slapstick...  with
>little elegance...  and medicine is near the top of my list (make that
>barbaric).

Artificial intelligence is at the top of mine.  (Please note humor.)

>>I think that introspection provides subjective and crucial information about
>>a process most of us are almost studiously ignoring.  Introspection is the 
>>*only* way to know for certain what is really going on in your brain, 
>>whether it agrees with any given theory or not.  
>   Yes, but so many people have fundamentally disagreed about how the
>human mind operates from introspection...  can we rely on their
>interpretations for real information that is not just usable in a social
>context?  I agree that introspection is useful, but one must be careful,
>it may well be that our internal states that we perceive don't well
>correspond to what's going on (assuming a mechanistic model, of course ;-),

...and may actually be contrary to what's going on.

>   Erich
>             "I haven't lost my mind; I know exactly where it is."

Appropriate for all of us, I'd say.
--
Chris Dailey   dailey@(frith.egr|cps).msu.edu
    __  __  ___       | "A line in the sand." -- The Detroit News
 __/  \/  \/ __:>-    |
 \__/\__/\__/         | "Allein in der sand." -- me

GONLERA@YaleVM.YCC.Yale.Edu (02/17/91)

    Correction: In my previous post I meant to say "And although I do
    NOT believe that there is a God, neither do I believe that you can
    whip one up using mechanistic equations, etc, etc."
 
  Leroy Gonzalez
  gonlera@yalevm

stephens@latcs1.oz.au (Philip J Stephens) (02/18/91)

  Before I begin, I would just like to mention that I'm quite
uninformed about the current state of AI investigation in regards to
models of consciousness, so some of my questions may be hum-drum to
some.  But hey, you've got to start somewhere!  (And it will be
short...)

Michel P. Devine writes:

>I should be careful to point out that I do not repudiate the machine
>model in toto, but that I consider it incomplete.  It is true that it is 
>possible to assign specialized functions to parts of the brain, but it is 
>pure speculation to conclude that we are purely mechanistic.

  With the theories of Quantum Mechanics and Chaos being so successful
of late (not that I have read enough about them), I am curious to know
whether or not the brain is as susceptible to unpredictability as, for
instance, the weather.  Do neurons always behave in a deterministic
fashion, or do they exhibit random fluctuations?  Could a neural
network amplify these variations in behaviour like a ball bouncing
amongst an array of pegs, never producing the same result from the
same initial conditions?  Just how much is known about living neural
networks, and is it a fair assumption to say that they are purely
mechanical in function?
  I quite often wonder about the rationale behind searching for a
deterministic model of the brain, when so many natural phenomena are
chaotic and inherently unpredictable in nature.  Obviously there are
some quite dramatic philosophical implications that would be raised if
the brain were found to be governed by the laws of Chaos, but I am
often surprised to find that the majority of people find these implications
unsavory rather than potentially revolutionary.
  With the scientific models of the world gradually leaning towards an
acknowledgement of the unpredictable nature of the universe, why do
people still cling so dramatically to mechanistic models of
consciousness?  [I think I already know some of the reasons, but I
want to hear the views of others first].

</\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\></\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\>
<  Philip J. Stephens                ><   "Many views yield the truth."        >
<  Hons. student, Computer Science   ><   "Therefore, be not alone."           >
<  La Trobe University, Melbourne    ><   - Prime Song of the viggies, from    >
<  AUSTRALIA                         ><   THE ENIGMA SCORE by Sheri S Tepper   >
<\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/><\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/>

erich@eecs.cs.pdx.edu (Erich Stefan Boleyn) (02/19/91)

stephens@latcs1.oz.au (Philip J Stephens) writes:


>Michel P. Devine writes:

>>I should be careful to point out that I do not repudiate the machine
>>model in toto, but that I consider it incomplete.  It is true that it is 
>>possible to assign specialized functions to parts of the brain, but it is 
>>pure speculation to conclude that we are purely mechanistic.

>  With the theories of Quantum Mechanics and Chaos being so successful
>of late (not that I have read enough about them), I am curious to know
>whether or not the brain is as susceptible to unpredictability as, for
>instance, the weather.  Do neurons always behave in a deterministic
>fashion, or do they exhibit random fluctuations?

   From what I have read of research on the matter, there seems to be
quite strong evidence that they behave deterministically by probabilistic
rules.  The properties of neuron firing come from the actions of a large
number of protein gates that let ions through with a certain probability
based on the presence of other proteins or magnetic fields (depending on
which type of gate we are speaking of, there are a large number of types
present).  The overall effect of all of these probabilities seems to be
a fairly deterministic firing ability, but if you want to model it that
closely, it reduces to quantum mechanical problems.
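
   A toy way to see how "deterministic out of probabilistic" can happen (my
own made-up numbers in Python, nothing like real channel kinetics): give a
cell thousands of gates that each open with some probability, and ask whether
the fraction open clears a firing threshold.

    import random

    def fires(n_gates=10000, p_open=0.3, threshold=0.25):
        # Each gate opens independently with probability p_open; the cell
        # "fires" if the fraction of open gates exceeds the threshold.
        open_count = sum(1 for _ in range(n_gates) if random.random() < p_open)
        return open_count / n_gates > threshold

    print([fires() for _ in range(10)])   # the same answer on essentially every trial

The individual gates are as random as you like, but the aggregate behaviour
is, for all practical purposes, deterministic.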

>                                                  Could a neural
>network amplify these variations in behaviour like a ball bouncing
>amongst an array of pegs, never producing the same result from the
>same initial conditions?

   Yes, there has been some work that shows very chaotic results.  The
most interesting results I have heard of are measuring certain patterns
in the human brain (sorry, I can't remember the reference, but I think
that it was on a "NOVA" program sometime in the last year or two).
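
   The usual toy demonstration of that kind of amplification (the textbook
logistic map, which has nothing to do with neurons, but it shows the
principle) is to run the same rule from two almost identical starting points:

    def logistic(x, r=3.9, steps=40):
        # Iterate x -> r*x*(1-x); for r near 4 the map is chaotic.
        for _ in range(steps):
            x = r * x * (1 - x)
        return x

    print(logistic(0.500000))
    print(logistic(0.500001))   # differs only in the sixth decimal place, yet...

After forty steps the two trajectories bear no resemblance to each other,
even though every step is completely deterministic.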

>                          Just how much is known about living neural
>networks, and is it a fair assumption to say that they are purely
>mechanical in function?

   Whoa.  It seems that you are mixing "chaotic" with "random".  They are
not the same phenomena.  Chaos is fundamentally a study of deterministic
systems...  systems that appear random on the surface, but have well-defined
relationships running them.  Chaotic systems are quite mechanistic, but
they don't seem to follow our intuitive idea of predictability.

>  I quite often wonder about the rationale behind searching for a
>deterministic model of the brain, when so many natural phenomena are
>chaotic and inherently unpredictable in nature.  Obviously there are
>some quite dramatic philosophical implications that would be raised if
>the brain were found to be governed by the laws of Chaos, but I am
>often surprised to find that the majority of people find these implications
>unsavory rather than potentially revolutionary.

   If you are speaking of the brain being inherently *random*, I agree that
it would be dramatic.  As for it being chaotic, I think that many people
already believe this.  The arguments about the random/nonrandom nature of
the underlying mechanisms in the brain rage as we speak ;-).  Some people
imply that the "random" parts need to be truly random for the brain to
work as a true intelligence, so that even the stochastic laws on top wouldn't
give rise to intelligence.

   I don't find the idea of randomness so much unsavory as counter-intuitive.
It would be quite interesting, though...  it twists my mind.  Part of the
problem might be that we cling to the macro-scale interpretation of waves,
for instance...  but that's another discussion ;-).

>  With the scientific models of the world gradually leaning towards an
>acknowledgement of the unpredictable nature of the universe, why do
>people still cling so dramatically to mechanistic models of
>consciousness?  [I think I already know some of the reasons, but I
>want to hear the views of others first].

   There is a *lot* of evidence to imply determinism, at least on the
macro-level...  and neurons appear to function on the macro-level, unless you
insist that the quantum fluctuations are still present.  If they are truly
random anyway, they so nearly cancel out that it is almost not worth the
effort, and on top of that the stimulus that we give it would introduce
its own "fluctuations", so to speak, as stimulus from the real world has
irregularities in it.  There are certain arguments that I am not addressing,
but I don't think it is the function of science to give up before it tries.

   Erich

             "I haven't lost my mind; I know exactly where it is."
     / --  Erich Stefan Boleyn  -- \       --=> *Mad Genius wanna-be* <=--
    { Honorary Grad. Student (Math) }--> Internet E-mail: <erich@cs.pdx.edu>
     \  Portland State University  /  >%WARNING: INTERESTED AND EXCITABLE%<