[comp.ai.philosophy] Cognitive viruses

markh@csd4.csd.uwm.edu (Mark William Hopkins) (04/19/91)

In article <1991Apr18.120150.10001@santra.uucp> krista@niksula.hut.fi (Krista Hannele Lagus) writes:
>Yeah, I've been trying this self-awareness....and I can be at most 3 
>times aware of being aware of myself...

Beware of going too deep into self-awareness.  If a certain twist takes place,
the most pernicious cognitive virus will occur that will cause temporary
insanity (until the brain self-adjusts).

I don't want to give any more details than that...

shafto@ils.nwu.edu (Eric Shafto) (04/20/91)

In article <11144@uwm.edu>, markh@csd4.csd.uwm.edu (Mark William Hopkins) writes:

> In article <1991Apr18.120150.10001@santra.uucp> krista@niksula.hut.fi (Krista Hannele Lagus) writes:
> >Yeah, I've been trying this self-awareness....and I can be at most 3 
> >times aware of being aware of myself...
> 
> Beware of going too deep into self-awareness.  If a certain twist takes place,
> the most pernicious cognitive virus will occur that will cause temporary
> insanity (until the brain self-adjusts).
> 
> I don't want to give any more details than that...


Wait, I don't get it.  All you have to do is be aware of yourself
being aware of yourself (OK, got that).  Now, I can see myself doing
that, so I must be aware of that; that's three.  Now, I'm obviously
aware of this, so that would beeeeheeheeheehee.  BLADAYADAYADAYADABOO!
SKEEBLEDY-BLEEBLEDY BPTHTH. AAAHAHAHAHAHAHAHAHAHAHAHA!

What was I saying?
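In programming terms, the spiral above is recursion without a base case. A toy sketch (Python; the names `aware` and `aware_unbounded` are invented here purely for illustration) of Krista's three-levels limit versus Hopkins's runaway virus:

```python
def aware(level, limit=3):
    # Each call is one more level of "being aware of being aware".
    # The limit stands in for the brain "self-adjusting" and bailing out.
    if level >= limit:
        return level
    return aware(level + 1, limit)

def aware_unbounded(level):
    # No base case: pure self-reference -- the "cognitive virus".
    return aware_unbounded(level + 1)

print(aware(0))  # stops gracefully at three levels

try:
    aware_unbounded(0)
except RecursionError:
    print("temporary insanity: the stack overflows until the interpreter self-adjusts")
```

The moral, such as it is: a mind (or interpreter) that reflects on itself needs some cutoff, or the reflection consumes all available stack.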

-- 

***************************************************************************
*Eric Shafto             * Sometimes, I think we are alone.  Sometimes I  *
*Institute for the       * think we are not.  In either case, the thought *
*    Learning Sciences   * is quite staggering.                           *
*Northwestern University *     -- R. Buckminster Fuller                   *
***************************************************************************

jet@karazm.math.uh.edu ("J. Eric Townsend") (04/24/91)

In article <11144@uwm.edu> markh@csd4.csd.uwm.edu (Mark William Hopkins) writes:
>In article <1991Apr18.120150.10001@santra.uucp> krista@niksula.hut.fi (Krista Hannele Lagus) writes:
>>Yeah, I've been trying this self-awareness....and I can be at most 3 
>>times aware of being aware of myself...
>
>Beware of going too deep into self-awareness.  If a certain twist takes place,
>the most pernicious cognitive virus will occur that will cause temporary
>insanity (until the brain self-adjusts).
>
>I don't want to give any more details than that...


Read _Synners_, by Pat Cadigan.  Lots of AI virus fun. 'nuff said.



--
J. Eric Townsend - jet@uh.edu - bitnet: jet@UHOU - vox: (713) 749-2120
Skate UNIX or bleed, boyo...
(UNIX is a trademark of Unix Systems Laboratories).

krista@wonderwoman.hut.fi (Krista Hannele Lagus) (04/25/91)

I wrote:
>Yeah, I've been trying this self-awareness....and I can be at most 3 
>times aware of being aware of myself...

My point (which I'm sure was not very obvious) was to question
our goals and motives in determining whether computers can be
self-conscious/aware.   Are we just always looking for a
discriminating factor that shows the difference between human
consciousness and that of a computer, so that we can say that
computers are not self-aware, or not intelligent, or whichever
attribute is in question?   For there will always be *differences*
between computers and humans, I would suspect.  It seems to me that
those who oppose the self-awareness (or anything else) of computers
often require from computers everything that is required of human
intelligence, plus all the infinite-awareness stuff etc.  Also, we
will always come up with things that the computer "can't do, and thus
it can't be this and that; it is worse than people in some respect".
How about defining once and for all what we expect from awareness and
sticking with it?

The motive for this, I think, is that people have such weak egos that
they cannot accept that something man-made could be above them in any
"important" respect.

selmer@hpcuhc.cup.hp.com (Steve Elmer) (05/02/91)

Has anybody ever invented a Turing test for self-awareness? 

Adding an element of the double-blind test would also increase my confidence
in the results.  If none of the subjects (including the computer(s)) were
aware of the nature of the test, I would think the results could be considered
reliable.

Once such a test was established and the rules were made static, perhaps we
could at least measure our progress.

(Just a thought :)
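For what it's worth, the double-blind setup above can be sketched as a tiny harness (hypothetical Python; `double_blind_trial`, the subject functions, and the `judge` are stand-ins invented here, not any real test): subjects answer under shuffled anonymous labels, so the judge never knows which respondent is the machine.

```python
import random

def double_blind_trial(subjects, judge, question):
    # subjects: dict mapping true identity -> respond(question) function.
    # Labels are shuffled, so the judge sees only 'A', 'B', ...
    order = list(subjects.items())
    random.shuffle(order)
    labels = {chr(ord('A') + i): respond for i, (_, respond) in enumerate(order)}
    answers = {label: respond(question) for label, respond in labels.items()}
    # The judge maps each answer to a verdict, blind to the labels' origin.
    return {label: judge(answer) for label, answer in answers.items()}

# Purely illustrative subjects and a naive judge:
human = lambda q: "I think so, though I keep losing count of the levels."
machine = lambda q: "Yes."
judge = lambda answer: "machine" if len(answer) < 20 else "human"

verdicts = double_blind_trial({"human": human, "machine": machine},
                              judge, "Are you aware of being aware?")
print(verdicts)
```

Of course, the hard part Elmer is pointing at is not the blinding but deciding what answer pattern would count as evidence of self-awareness in the first place.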

G.Joly@cs.ucl.ac.uk (Gordon Joly) (05/10/91)

Steve Elmer writes:
 > Has anybody ever invented a Turing test for self-awareness? 
 > 
 > Adding an element of the double-blind test would also increase my confidence
 > in the results.  If none of the subjects (including the computer(s)) were
 > aware of the nature of the test, I would think the results could be considered
 > reliable.
 > 
 > Once such a test was established and the rules were made static, perhaps we
 > could at least measure our progress.
 > 
 > (Just a thought :)

Somebody else said...

|Now,  supposing a system  has been built which  "passes" the test. Why
|not take  the process  one stage  further?  Why not  try to design  an
|intelligent system which can decide whether *it* is talking to a machine
|or not?

Is that the same?
Gordon.
____

Gordon Joly                                       +44 71 387 7050 ext 3716
Internet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

   "I didn't do it. Nobody saw me do it. You can't prove anything!"