[comp.ai.philosophy] More Split

erich@eecs.cs.pdx.edu (Erich Stefan Boleyn) (11/03/90)

mcdermott-drew@cs.yale.edu (Drew McDermott) writes:

>   >I think that this phenomenon is interestingly similar to
>   >the kind of suspension of disbelief which occurs when we are 'taken in'
>   >by a good film, play, book, etc.  We know that the characters are not
>   >real, but can feel, to some extent, as if they were.  I'd put quite a
>   >lot of money on the proposition that this will be the main way people
>   >will interact with computers ....
>Interesting observation; no doubt correct.

   I think that this is what happens in *all* cases, including with people;
we just interpret it differently by social convention and "programming".  And
the "programming" causes a slightly different type of 'taken in' behavior
in each case.

>But I still maintain that the fuss about "testing for consciousness"
>is misguided...

   I agree.

>one will evaporate.  What will emerge is a good understanding of how
>to manipulate different aspects of consciousness.  So, if you want a
>robot with, say, qualia but no free will, you can have it.

   Are you using "aspects of consciousness" for lack of a better term?

>Let me hasten to add that this scenario is not inevitable.  It *could*
>turn out that a much more radical revision of our conceptual framework
...[deleted]...
>that we never get a satisfactory theory of consciousness, and it

   I would maintain that "consciousness" is a bad question to ask about in
the first place, as you mention later.

>   >I'd also like to humbly suggest that future generations of AI workers
>   >will look back with amusement and bewilderment at such arguments as to
>   >whether a machine could be conscious, much as we do at the medieval
...[deleted]...
>I agree, but for somewhat different reasons.

   I think we should start trying that now.  Think of the original social
uses of the words "conscious" and "intelligent", or how you use them in normal
social situations now.  Then ask yourself what *real* information they are
meant to convey...  I don't think it is whether a certain neural and/or
algorithmic structure can do similar classes of calculations/mappings/whatever.
I think we are *already* fooling ourselves a bit by using these words.
Hence my push to develop a "language of AI/CogSci/(new name?)".

   So far cleverness and brute force have succeeded, but look at the large
controversies going on: are they just rival theories?  I don't think so...
I think they are also rival basic conceptual frameworks, clashing on a deeper
level than the arguments in most fields (say, physics, for example).

   Erich

     /    Erich Stefan Boleyn     Internet E-mail: <erich@cs.pdx.edu>    \
>--={   Portland State University      Honorary Graduate Student (Math)   }=--<
     \   College of Liberal Arts & Sciences      *Mad Genius wanna-be*   /
           "I haven't lost my mind; I know exactly where I left it."