[comp.ai.philosophy] forwarded post

cpshelley@violet.waterloo.edu (cameron shelley) (11/06/90)

The following is a post forwarded by me for Stephen Miller at NASA
whose system seems to have posting trouble.  Please direct responses
to him at the address below...


From smiller@aio.jsc.nasa.gov  Mon Nov  5 11:48:14 1990
Received: from aio.jsc.nasa.gov by violet.waterloo.edu with SMTP
	id <AA18512>; Mon, 5 Nov 90 11:48:14 EST
Received: by aio (5.57/Ultrix2.4-C)
	id AA07923; Mon, 5 Nov 90 10:46:29 CST
Date: Mon, 5 Nov 90 10:46:29 CST
From: smiller@aio.jsc.nasa.gov (Stephen Miller)
Message-Id: <9011051646.AA07923@aio>
To: cpshelley@violet
Status: R

To: eos!shelby!agate!apple!usc!wuarchive!uunet!ogicse!pdxgate!eecs!erich
Subject: Re: Split from AI/CogSci...  misc. comments
Newsgroups: comp.ai.philosophy
In-Reply-To: <497@pdxgate.UUCP>
Organization: NASA JSC Houston, TX
Cc: 
Bcc: 

In article <497@pdxgate.UUCP> you write:
>
>   After a little more thought on the topic of the last article, It comes
>to me that several comments that I made states part of my argument more
>eloquently.
>
>   (I had noted how our concepts differentiate as we learn a topic, and
>given an example of my own concept differentiation in "AI")
>
>   Our own preoccupation with the messy concepts of "conciousness", and even
>"intelligence" are a testament to our own naivete in a way.  We are learning
>more and more that although some principle issues that get people interested
>in a field are interesting, they end up becoming sort of moot questions as
>the field matures and *better* questions are dicovered.  I think that this
>consideration would be valid here as well in the argument to give "AI" its
>own language.
>
>   Later,
>	Erich
>
>     /    Erich Stefan Boleyn     Internet E-mail: <erich@cs.pdx.edu>    \
>>--={   Portland State University      Honorary Graduate Student (Math)   }=--<
>     \   College of Liberal Arts & Sciences      *Mad Genius wanna-be*   /
>           "I haven't lost my mind; I know exactly where I left it."


my response:
    let's take an elementary spelling course:  original
	(there were some others, too.)
	(it causes loss of credibility...)
    content:
	i like it; however, do you think there is an emotional/instinctive
element in perceiving consciousness in another entity?  certainly our
need (emotional, instinctive, whatever type) has something to do w/it.
	i for one think a lot will come to light when the first generation
of babies grows up w/intelligent-acting computer/robot systems freely 
available.  that is, when these systems are in the nurturing environment
such as at home and at day school.  will the children grow up considering
"robbie" to be a companion and playmate, a significant other persona?  or
will the children say "oh, it's just the computer", and not become attached
to it as they do to their human friends and playmates?
	...will there be an emotive element?  if there is, will the children
think the systems conscious?
	i think it is a tremendously complex question, this one about 
machine consciousness, and a tremendously important one.  it has bearing
on all kinds of ethical and moral issues, not to mention many different
branches of scientific inquiry.  like all inquiry, and revolutionary 
scientific (or other) thought and ideas, it will take both completely 
NEW ways of looking at things (witness Heisenberg, Einstein, Darwin,...)
and more than one person brave enough to think these new things/ways, 
and (lastly) time.  revolutionary ideas neither form nor become accepted
overnight.  
	i maintain what we are witnessing and contributing to is a completely
new way of conceiving (for western man, at least) of ourselves, our machines
and technologies, and the other things in our universe; i.e. other
intelligences, including not only computer ones but also nonhuman ones such
as ocean-going mammals, other life forms on earth, and extra-terrestrials.
	i think it is a fundamental revolution in the way we conceive of 
ourselves and our place in the universe.  all the evidence points to this
conclusion.  we are consistently encountering multiple levels of reality
in particle physics, cognition, astronomy, computer science, bioengineering.
the same thing happens if one reads cultural philosophies from a variety
of nonwestern and western cultures.
	...enough for now.  i welcome comments and responses.
.steve.


--
      Cameron Shelley        | "Fidelity, n.  A virtue peculiar to those 
cpshelley@violet.waterloo.edu|  who are about to be betrayed."
    Davis Centre Rm 2136     |  
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

erich@eecs.cs.pdx.edu (Erich Stefan Boleyn) (11/06/90)

cpshelley@violet.waterloo.edu (cameron shelley) writes:

>The following is a post forwarded by me for Stephen Miller at NASA
>whose system seems to have posting trouble.  Please direct responses
>to him at the address below...

>In article <497@pdxgate.UUCP> you write:  (Erich Boleyn)
[stuff deleted]

>my response:
>    let's take an elementary spelling course:  original
>	(there were some others, too.)

   (Sorry, when I get excited, my ability to communicate degrades quickly,
usually starting with my spelling ;-)

>	(it causes loss of credibility...)

   Bummer...  but true.  It is a testament to the need for nicely formed
arguments, rather than just valid content, in any field.  (sigh)

>    content:
>	i like it; however, do you think there is an emotional/instinctive
>element in perceiving consciousness in another entity?  certainly our
>need (emotional, instinctive, whatever type) has something to do w/it.

   I am absolutely sure it does.  There was an excellent posting made not
long ago (a week or two (?)) which discussed this point.  I think it has to
do both with our innate social needs/wants, and with our own conceptual
structure that we try to fit the behavior of the devices into (or, for that
matter, look at how many people who work and/or live closely with animals
anthropomorphize them to some extent...).

>	i for one think a lot will come to light when the first generation
>of babies grows up w/intelligent-acting computer/robot systems freely 
>available.  that is, when these systems are in the nurturing environment
>such as at home and at day school.  will the children grow up considering
>"robbie" to be a companion and playmate, a significant other persona?  or
>will the children say "oh, it's just the computer", and not become attached
>to it as they do to their human friends and playmates?
>	...will there be an emotive element?  if there is, will the children
>think the systems conscious?

   I think the question will have an interesting answer...  but probably
not one that you listed.

>	i think it is a tremendously complex question, this one about 
>machine consciousness, and a tremendously important one.  it has bearing
>on all kinds of ethical and moral issues, not to mention many different
>branches of scientific inquiry.  like all inquiry, and revolutionary 
>scientific (or other) thought and ideas, it will take both completely 
>NEW ways of looking at things (witness Heisenberg, Einstein, Darwin,...)
>and more than one person brave enough to think these new things/ways, 
>and (lastly) time.  revolutionary ideas neither form nor become accepted
>overnight.  
>	i maintain what we are witnessing and contributing to is a completely
>new way of conceiving (for western man, at least) of ourselves, our machines
>and technologies, and the other things in our universe; i.e. other
>intelligences, including not only computer ones but also nonhuman ones such
>as ocean-going mammals, other life forms on earth, and extra-terrestrials.
>	i think it is a fundamental revolution in the way we conceive of 
>ourselves and our place in the universe.  all the evidence points to this
>conclusion.  we are consistently encountering multiple levels of reality
>in particle physics, cognition, astronomy, computer science, bioengineering.
>the same thing happens if one reads cultural philosophies from a variety
>of nonwestern and western cultures.

   I somewhat agree with what you say, but there are some points that puzzle
me in the implications.  Up until recently, our philosophical foundations have
gone fairly untouched (the argument of Newtonian vs. Quantum mechanics
notwithstanding), i.e. we still use the same philosophical concepts that have
been used for a *long* time, with little change.  Even before they were
formalized, these concepts were used in a loose way in social interaction and
classification of other humans.  I think the boundaries of "AI" have been
encroaching on the "final fortress" of the old concepts, and it is looking
like that may fall, and with it the revolution you speak of will take place.

   What I am curious about is what form some of these new concepts may take.
I am also curious about how "fundamental" they are.  We use the somewhat
naive terms "consciousness" and "intelligence" in our social discourse, but
we seem so wedded to them; is there a reason for it?  It very well could be
that our whole society contributes to this in a way.  I sometimes wonder if
our emotional states could be as much a hindrance to the progress of knowledge
as anything else...  *plus* they ground us in a specific framework.

   Look at the response of the fundamentalist Christian movement to the
progress in science.  It wasn't until the beginning of the 20th century that
they even existed.  To me, it looks like a fear response to the pressure of
newer scientists who were claiming more and more that religion was
unnecessary.  The mysticism movements on the surface appear similar to some
of this, although different in their own way.  It appears to be a combination
of new evidence and fear of being called "just a machine" (at least on the
surface...  I admittedly have little experience with most of them), and a
lack of understanding of the materialistic sciences, in some cases, I'm sure.

   I wonder how much our own innate concepts given by culture can cloud
understanding of some things...

   Oh, well, just a few thoughts.

   Erich

     /    Erich Stefan Boleyn     Internet E-mail: <erich@cs.pdx.edu>    \
>--={   Portland State University      Honorary Graduate Student (Math)   }=--<
     \   College of Liberal Arts & Sciences      *Mad Genius wanna-be*   /
           "I haven't lost my mind; I know exactly where I left it."

smiller@aio.jsc.nasa.gov (Stephen Miller) (11/09/90)

as to the (sigh) need for form before content, this is "not necessarily
so"!  it's just that natural language is our (your-mine) medium of 
expression and communication here.  in order to communicate effectively,
we need to follow the rules.  rules of grammar and syntax are some of
those.  by not knowing (or following) the communication rules, one
arouses preliminary questions in some receivers' minds.  (at least in mine.)
    well, it was no big thing, just a minor aggravation.

now to the content (which i think we both agree is the important thing!):
    thank you for your comments.  i do, however, think the changes i'm
talking about  ARE fundamental.  as witness, your comment "has our
emotive base slowed down our scientific understanding?"  [paraphrased.
if i got it wrong, just correct me.]  my point exactly is this:  that we
(western man) have been trying to ignore this aspect of reality, along
with others (such as consciousness) for the last 2000 years or more.
(that is, within the physical and "rigorous" sciences.)  this is because
these areas of life and reality do not fit well into empirical frames,
and because these are the HARD questions.  we are getting close(r) to the 
real tough ones now, the ones philosophy has asked since the greeks and 
before.  (the ones like "what are we?"  "do i exist?" "what is matter
(the universe)?" and "where did i (we) come from?")  our physical sciences
(and life sciences) and now mathematical-comp.sci. and so on have 
progressed to the point where we are confronting these very basic
("fundamental") questions and issues. 
    we are reaching the point where it is no longer viable to think of 
ourselves as supreme, different, or separate.  other mammalian intelligence
shows we should be cautious in thinking we are the smartest; cognitive
science/AI/our discussion subject is evidence that we are confronting the
question of whether we are different or not (through defining boundaries,
i.e. what things are intelligent, have consciousness, and so on); and
the old findings of modern physics show that we are not really
separable from our world/environment.
    thank you for agreeing with me that we are in a revolution of 
thought here, now.  i think it is evolution.  of human consciousness!

.steve.

erich@eecs.cs.pdx.edu (Erich Stefan Boleyn) (11/09/90)

smiller@aio.jsc.nasa.gov (Stephen Miller) writes:


>    thank you for your comments.  i do, however, think the changes i'm
>talking about  ARE fundamental.

   But a better question might be fundamental with respect to what?
I agree that we (as a social group) are changing, and that the new
concepts are starting to take on forms unlike any before.  We
could see fundamental change in the social condition yet still overlook how
our biological basis cements certain features.

>                                 as witness, your comment "has our
>emotive base slowed down our scientific understanding?"  [paraphrased.
>if i got it wrong, just correct me.]  my point exactly is this:  that we
>(western man) have been trying to ignore this aspect of reality, along
>with others (such as consciousness) for the last 2000 years or more.
>(that is, within the physical and "rigorous" sciences.)  this is because
>these areas of life and reality do not fit well into empirical frames,
>and because these are the HARD questions.

   What I really meant was that our emotive base may well be fundamentally
inefficient for certain things.  Look at how it is causing problems for our
society at the population densities that we have now.  I don't think it
has been entirely free of problems in the sciences, however much we chose
to ignore it.  I question how much we could change with this kind of
fundamental structure involved (maybe several levels removed).

   But isn't this ignoring the (very likely) possibility that those questions
could be fundamentally *bad* ones to ask?  One of my points in the original
posting on this thread was that the very *words* consciousness and intelligence
can be confusing.  They evolved from a social usage that was useful for
dealing with humans, but I am becoming more and more convinced that they
not only apply badly to the rest of the world, but may be a naive set of
concepts when pursued generally.  That's the question I'm asking.

   What if we do find that intentionality is a less efficient method of
representation and/or question asking than some other method (to be
discovered, maybe)?  Can we leave enough of the intentional philosophy
behind to be useful?  I wonder...

>                                           we are getting close(r) to the 
>real tough ones now, the ones philosophy has asked since the greeks and 
>before.  (the ones like "what are we?"  "do i exist?" "what is matter
>(the universe)?" and "where did i (we) come from?")  our physical sciences
>(and life sciences) and now mathematical-comp.sci. and so on have 
>progressed to the point where we are confronting these very basic
>("fundamental") questions and issues. 

   I think that the questions too represent a problem (specifically, human
intentionality).  This is an efficient system that we have for dealing with
the universe, but we have no basis for expecting the universe to play our
game when it comes to this.  Science has long been a process where the
first problems in a field were set aside and realized to be bad questions,
or inappropriately conceived.  Reading this group alone suggests the
problems with using these "human-only" (or maybe mammal-only) concepts for
too much more.  There have been heated arguments where people were
practically representing the same side, not to mention so many uses of
the words "consciousness" and "intelligence" that I don't even like to
use them outside of quotation marks any more (I use them like quotes from
books, i.e. to refer to the whole idea of "machines" and the "mind").

>    we are reaching the point where it is no longer viable...

   Yes, I think that a lot of us are already thinking of it in a similar
way.  But that's not the end of it...

>    thank you for agreeing with me that we are in a revolution of 
>thought here, now.  i think it is evolution.  of human consciousness!
				   ^^^^^^^^^

   I think that the *real* evolutionary step happened many thousands of
years ago, by giving the potential for this kind of social evolution.  I
also tend to think we are approaching a kind of critical mass where
*extremely* rapid changes (or even phase transitions, if you like) are
going to occur.

   Erich

             "I haven't lost my mind; I know exactly where it is."
     / --  Erich Stefan Boleyn  -- \       --=> *Mad Genius wanna-be* <=--
    { Honorary Grad. Student (Math) }--> Internet E-mail: <erich@cs.pdx.edu>
     \  Portland State University  /  >%WARNING: INTERESTED AND EXCITABLE%<