[comp.ai] Therapy for Carbon-Based Neural Networks and Silicon-Based Machines

jwi@lzfme.att.com (Jim Winer @ AT&T, Middletown, NJ) (06/20/89)

Jim Winer writes:

>  > It would be interesting to put an artificial intelligence into
>  > abreactive crisis. I have no idea how this would be done, or even
>  > what it would mean, given the state of the art -- but it would be
>  > interesting.

Barry W. Kort snaps back:

> I think there was a Star Trek episode in which Kirk gave the machine
> its "Goedel Sentence", and the machine, realizing the error of its
> ways, turned itself off.

Jim Winer continues:

In an abreactive crisis with a human being, you can actually watch
as the face musculature returns expression to an earlier age and the
voice itself changes in pitch as the words and phraseology go
backwards in time to relive a childhood experience. It can be
disturbing to watch a 35-year-old adult *become* a seven-year-old child
with an unlined face and awkward body language -- but the point is
that it happens -- the adult *becomes* the child again in all ways
except physical size.

In an eclectic therapy situation, the abreaction may be triggered by
bioenergetic or gestalt or other means with the express purpose of
causing the "child" to relive an experience *with a different
outcome.* A typical situation might involve an abused child who is
allowed to relive such an experience and then learn that the
father/abuser (or mother/abuser) *then* is not *all men (women)
now.* (Difficulty forming non-abusive relationships with persons of
the same sex as the abuser is a common symptom.)

Forcing an abreactive state for a machine intelligence would involve
simulating an earlier surround in which there is a highly charged
emotional state. For sequential computing devices, this seems
unlikely. For a neural net this might correspond to a period of
intense negative feedback. Thus, in a complex net, changing a strong
early pattern of response might have interesting effects on the
response to later patterns (if not totally destructive). I wonder
whether the resulting behavior changes would resemble those produced
by putting a human through an abreactive crisis.
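
To make the neural net case a little more concrete, here is a toy
sketch in Python. Everything in it -- the net size, the random
patterns, the "crisis" step of driving the early pattern hard toward
its opposite outcome -- is invented for illustration; it is not a
model of abreaction, just the shape of the experiment:

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One layer, 8 inputs -> 4 outputs, small random initial weights.
W = rng.normal(scale=0.1, size=(4, 8))

def train(patterns, targets, epochs, lr):
    global W
    for _ in range(epochs):
        for x, t in zip(patterns, targets):
            y = sigmoid(W @ x)
            W += lr * np.outer((t - y) * y * (1 - y), x)  # delta rule

def later_error():
    ys = sigmoid(later_x @ W.T)
    return float(np.mean((later_t - ys) ** 2))

early_x = rng.integers(0, 2, size=(3, 8)).astype(float)
early_t = rng.integers(0, 2, size=(3, 4)).astype(float)
later_x = rng.integers(0, 2, size=(5, 8)).astype(float)
later_t = rng.integers(0, 2, size=(5, 4)).astype(float)

train(early_x, early_t, epochs=2000, lr=0.5)  # strong early pattern
train(later_x, later_t, epochs=200, lr=0.1)   # later learning on top

before = later_error()
# The "crisis": relive the early pattern, driven hard toward the
# opposite outcome (inverted targets, high learning rate).
train(early_x, 1.0 - early_t, epochs=2000, lr=0.5)
print("later-pattern error before:", before, "after:", later_error())

One would expect the later-pattern error to rise after the forced
relearning; how badly presumably depends on how entangled the early
and later responses are in the weights.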

Jim Winer ..!lzfme!jwi 

May you live in interesting times.
        Pax Probiscus!
        Sturgeon's Law (Revised again): 98.89%
        of everything is peanut butter.
        Rarely able to send an email reply successfully.
        The opinions expressed here are not necessarily  
Those persons who advocate censorship offend my religion.

mbb@cbnewsh.ATT.COM (martin.b.brilliant) (06/21/89)

From article <1420@lzfme.att.com>, by jwi@lzfme.att.com (Jim Winer @ AT&T, Middletown, NJ):
> .....
> Forcing an abreactive state for a machine intelligence would involve
> simulating an earlier surround in which there is a highly charged
> emotional state. For sequential computing devices, this seems
> unlikely. For a neural net this might correspond to a period of
> intense negative feedback.....

It shouldn't be that hard.  The therapeutic situation Jim described is
basically what happens when a learning system makes an invalid
generalization.  Suppose it mistakenly concludes that a certain set of
conditions is too dangerous to allow at any time.  Sure, if it is a
simple system you can go in and make a manual adjustment (and a
man-made system should be built to allow that), but suppose for some
reason you couldn't.  You would have to force the system to go back
and re-evaluate that supposedly dangerous situation.
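
In toy form (Python; the rule format and the re-exposure step are
made up for the example), the invalid generalization and its forced
re-evaluation might look like this:

class Learner:
    def __init__(self):
        self.dangerous = set()           # condition-sets believed unsafe

    def experience(self, conditions, harmful):
        key = frozenset(conditions)
        if harmful:
            self.dangerous.add(key)      # one bad episode -> blanket rule
        else:
            self.dangerous.discard(key)  # re-evaluation can retract it

    def allows(self, conditions):
        return frozenset(conditions) not in self.dangerous

system = Learner()
system.experience({"dark", "alone"}, harmful=True)
print(system.allows({"dark", "alone"}))   # False: refuses ever after

# Forcing re-evaluation: re-expose the system to the same conditions
# with a benign outcome instead of reaching in to edit the rule.
system.experience({"dark", "alone"}, harmful=False)
print(system.allows({"dark", "alone"}))   # True again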

> ...... Thus, in a complex net, changing a strong
> early pattern of response might have interesting effects on the
> response to later patterns (if not totally destructive). I wonder
> whether the resulting behavior changes would resemble those produced
> by putting a human through an abreactive crisis.

Yep, I think that's the right question.  I'm suggesting that the
machine would resist therapy at first.  If it consented to review its
assumptions, it might have to unlearn and relearn everything it learned
after it made the incorrect generalization.  It might behave rather
immaturely for a while.  I'm not thinking necessarily of a net, but
more of an expert system that learns from its Q and A's.  To stay sane,
such a system would have to remember when and how it learned what it
thinks it knows.
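
A sketch of what that bookkeeping might look like (Python; the data
structures and names are mine, purely illustrative): each belief
carries a timestamp and the beliefs that support it, so retracting
the bad generalization sweeps away everything built on it while the
raw experience survives to be relearned from.

from dataclasses import dataclass, field

@dataclass
class Belief:
    fact: str
    learned_at: int      # logical clock tick: "when it learned"
    support: list = field(default_factory=list)  # "how it learned"

kb = {}
clock = 0

def learn(fact, support=()):
    global clock
    clock += 1
    kb[fact] = Belief(fact, clock, [kb[s] for s in support])

def retract(fact):
    # Drop a belief and, transitively, everything resting on it.
    doomed = {fact}
    changed = True
    while changed:
        changed = False
        for b in kb.values():
            if b.fact not in doomed and any(s.fact in doomed
                                            for s in b.support):
                doomed.add(b.fact)
                changed = True
    for f in doomed:
        kb.pop(f, None)
    return doomed

learn("X happened once")
learn("X is always dangerous", support=["X happened once"])
learn("avoid anything like X", support=["X is always dangerous"])

print(retract("X is always dangerous"))
# The generalization and the rule built on it are gone;
# "X happened once" remains.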

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201) 949-1858
Holmdel, NJ 07733	att!hounx!marty1 or marty1@hounx.ATT.COM

Disclaimer: Opinions stated herein are mine unless and until my employer
	    explicitly claims them; then I lose all rights to them.

dhw@itivax.iti.org (David H. West) (06/22/89)

In article <1617@cbnewsh.ATT.COM> mbb@cbnewsh.ATT.COM (martin.b.brilliant) writes:
>more of an expert system that learns from its Q and A's.  To stay sane,
>such a system would have to remember when and how it learned what it
>thinks it knows.

This is a well-studied problem under the name of "truth
maintenance", and several usable algorithms exist to solve it.