[comp.ai.philosophy] Memory

erich@eecs.cs.pdx.edu (Erich Stefan Boleyn) (11/06/90)

lev@suned0.nswses.navy.mil (Lloyd E Vancil) writes:

>In article <1990Oct27.070636.4144@wam.umd.edu> reh@wam.umd.edu (Richard E.
>	Huddleston) writes:
>>If we can't define consciousness (not that I'm so sure of that), we can at
>>least study it by its leavings: memories.  Anything that can _remember_ is
>>in some way conscious.  Perhaps the problem with defining consciousness 
>>is similar to defining life; it doesn't have just one form or one aspect.

>Too simple.  My computer remembers, better than I do, and I'm 99.999% sure it's
>not conscious.  I think your second comment was closer to the mark...

   You're defining memory as something too simple, and on a level our "minds"
have no access to.  The information states contained in molecular structures in
our bodies are also *very* precise, and hold a better record than "we" do, but
we have no access to them.  Simple addressable memory would be of little use to
an intelligent entity; at the absolute least, a content-addressable memory
(plus other addressability requirements) would be needed.  Our memories are
dynamically connected with our behaviors and methods of thought, but how much
is that true of our computers (which I'd call very primitive from the
intelligence-producing point of view)?  A computer without proper software and
interconnectivity among its "memories" (which I doubt more and more could be
achieved with the state of technology we have now) no more has "memory" in the
intelligent sense than a book does.
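   To make the distinction concrete, here is a minimal sketch in Python (not
in the original post; the data and names are invented for illustration).
Address-based memory gives you a value only if you already hold the exact
address; content-addressable memory lets a fragment of the content itself act
as the retrieval cue, closer to association:

```python
# Simple addressable memory: retrieval requires knowing the exact address.
ram = {0x10: "the cat sat on the mat"}

def read(address):
    """Return the value at an exact address, or None if absent."""
    return ram.get(address)

# Content-addressable memory: retrieval by a fragment of the content.
memories = [
    "the cat sat on the mat",
    "a dog barked at the mailman",
    "rain fell all afternoon",
]

def recall(cue):
    """Return every stored memory containing the cue fragment."""
    return [m for m in memories if cue in m]

print(read(0x10))     # works only with the exact address
print(read(0x11))     # one address off: nothing comes back
print(recall("cat"))  # a partial cue retrieves the whole memory
```

A real associative memory would of course tolerate noisy or inexact cues as
well; substring matching is only the simplest stand-in for the idea.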

   Erich

     /    Erich Stefan Boleyn     Internet E-mail: <erich@cs.pdx.edu>    \
>--={   Portland State University      Honorary Graduate Student (Math)   }=--<
     \   College of Liberal Arts & Sciences      *Mad Genius wanna-be*   /
           "I haven't lost my mind; I know exactly where I left it."