[sci.virtual-worlds] personal simulations

fhapgood@world.std.com (Fred Hapgood) (08/21/90)

                 
19th-century photography is stills; 20th-century is film or video.
A 21st-century 'photograph' would be a highly realistic computer
animation that captured the essence of the person but could be
inserted into new (animated) situations.

The animation would be generated by voice, movement, appearance,
and discourse synthesizers all running in parallel.  Each would be
tuned to the style characteristic of the subject in that domain:
his pitch and tones, phonemic pronunciation, physiognomy, gait,
posture, body language, and conversational habits.  I grant you
some of the software problems here are non-trivial, but nothing
in that list is technically _impossible_, right?
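The parallel-synthesizer idea can be sketched in code.  This is a toy
illustration only -- every class, field, and function name below is a
hypothetical stand-in, not an existing system; the real discourse and
voice models are exactly the hard parts the post concedes:

```python
from dataclasses import dataclass, field

# Hypothetical per-domain "style profile" for one subject.
@dataclass
class StyleProfile:
    name: str
    pitch: float                      # average vocal pitch, Hz
    favorite_phrases: list[str] = field(default_factory=list)

class VoiceSynth:
    """Renders text in the subject's characteristic voice (stubbed)."""
    def __init__(self, profile: StyleProfile):
        self.profile = profile
    def render(self, text: str) -> str:
        return f"audio(pitch={self.profile.pitch}): {text}"

class DiscourseSynth:
    """Decides *what* the subject would say (stubbed as an echo)."""
    def __init__(self, profile: StyleProfile):
        self.profile = profile
    def respond(self, prompt: str) -> str:
        # A real model of conversational habits is the open problem;
        # here we just return a characteristic phrase.
        return self.profile.favorite_phrases[0]

def animate(profile: StyleProfile, prompt: str) -> dict:
    # Run the domain synthesizers (conceptually in parallel) and
    # combine their outputs into one "frame" of the animation.
    speech = DiscourseSynth(profile).respond(prompt)
    audio = VoiceSynth(profile).render(speech)
    return {"speech": speech, "audio": audio}

fred = StyleProfile(name="Fred", pitch=110.0,
                    favorite_phrases=["The mind boggles."])
frame = animate(fred, "What do you think of VR?")
```

The point of the sketch is only the decomposition: each synthesizer
owns one domain of the subject's style, and the animation is their
combined output.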

Once you had this animation you would be able to 'test' it by
placing it in different situations, different animated contexts
and seeing how it would act and react.  Eventually, after many
years of development, these reactions would become more and more
true-to-life.  The animation in the computer would seem the
psychic mirror-image of the physical original.  This would be the
21st century photograph.

Some questions: will these become real ghosts, keepsakes of the
dearly departed?  Will there be people who see them as gateways
to eternal life, eternal youth?  Will they become the center of a
world religion??  Will the Mormons announce a project to develop
and store animations of all their members?

Will there be an issue of misusing animations?  Of programming
your boss to be gang-raped by street thugs?  Or of
living a dream life as one of the talented, famous, rich, and
beautiful?  Will people get off on watching themselves make
passionate love with the most desirable sex objects on the
planet?

What kind of relationship will spring up between people and their
dataganger?  Will the original and the copy grow apart as they
process their different experiences?  Or will the virtual-reality
technologies allow people to join their dataselves behind the
screen, so that the two will have experiences in common?

Will the dataganger become the interface of choice for all
computer functions (if I don't hear from myself I won't believe
it)?

The mind boggles.

cphoenix@csli.Stanford.EDU (Chris Phoenix) (08/24/90)

In article <1990Aug21.120733.2752@world.std.com> fhapgood@world.std.com (Fred Hapgood) writes:

>The animation would be generated by voice, movement, appearance,
>and discourse synthesizers all running in parallel.  Each would be
>tuned to the style characteristic of the subject in that domain:
>his pitch and tones, phonemic pronunciation, physiognomy, gait,
>posture, body language, and conversational habits.  I grant you
>some of the software problems here are non-trivial, but nothing
>in that list is technically _impossible_, right?

It sounds like you're talking about something that can pass the Turing test.
Maybe not here, but the rest of the article leans further and further in 
this direction.  It's a neat idea, and I'm sure people will try to do it,
but the part about conversational habits alone may be impossible.

>Will there be an issue of misusing animations?  Of programming
>your boss to be gang-raped by street thugs?  Or of
>living a dream life as one of the talented, famous, rich, and
>beautiful?  Will people get off on watching themselves make
>passionate love with the most desirable sex objects on the
>planet?

There's a definite problem here.  You're talking about programming
simulations not only to act like you in familiar settings, but to act
like your boss in totally unexpected ones.  Take the issue of making
love.  Either:

 1) you'd have to have a good enough model of human physiology and
    mentation that the computer could decide how to act when presented
    with such an unusual setting;

 2) you'd have to let the computer watch you making love (in which
    case the simulations you let other people see would probably be
    *very* limited); or

 3) if people are similar enough, you could have one "making-love"
    paradigm that could be plugged into any simulation.

Note that 3) differs from 1) in that you can "hard-code" such a
paradigm rather than making the computer extract it from its modelling
of humans--but it would be very impractical to hard-code paradigms for
each situation, for each person, so people would have to be similar
enough that one paradigm produced a believable simulation no matter
which person it was run for.  IMHO, 2) is the only way it will work.
But if you want anything better than a computerized video recorder--
if you want the computer to extrapolate new behavior, which is what I
think you're saying--then you'll solve half the problems in AI along
the way.  A corollary of the previous sentence: if half the problems
in AI can't be solved, what you're proposing is impossible.

-- 
Chris Phoenix               |  "I've spent the last nine years structuring my
cphoenix@csli.Stanford.EDU  |      life so that this couldn't happen."
...And I only kiss your shadow, I cannot see your hand, you're a stranger 
now unto me, lost in the dangling conversation, and the superficial sighs...