[sci.virtual-worlds] VR in art - medium or instrument?

brucec%phoebus.labs.tek.com@RELAY.CS.NET (Bruce Cohen;;50-662;LP=A;) (11/28/90)

My answer to the question is both; I asked it because I've only seen
discussion in this group of VR as performance.  This was the one
disappointment in the posting from Banff a while ago on their VR in the
arts program.

Just to make sure we all agree on the terms, by "medium" I mean that the
artist creates and maintains a virtual reality as a work of art which
others can enter as observers or participants.  This is the same category
of art as interactive fiction.

There's another use of VR which has just been alluded to: using VR
techniques as a set of tools, an instrument so to speak, to help create art
in other media.  This is conceptually similar to using a computer to create
a drawing or a sculpture (or a book or a song ...).  VR gets into the act
as a set of interface technologies which could greatly enhance the ability
of an artist.

Some examples:

Sculpture - I would love to be able to put on a pair of gloves and a
    set of goggles and sculpt marble (or light or water or clouds or ...)
    with my bare hands the way I can sculpt clay.  You could even sculpt
    moving pieces by moving them and marking positions as keyframes, which
    leads to ...

Animation - anyone out there ever do clay-figure animation (Will Vinton
    calls it "Claymation" (tm, I think))?  It's extremely tedious to do, and
    good clay with nice colors that won't melt under the lights is
    expensive.  Instead, sculpt your figures (maybe not with your bare
    hands; there's no reason you can't have virtual sculpting tools), and move
    the figure around, pushing the virtual shutter release when you have it
    positioned for the next frame.
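(For what it's worth, the "virtual shutter release" reduces to a very small
data structure.  The sketch below is hypothetical -- `VirtualFigure` and
`press_shutter` are made-up names, not from any real glove toolkit: each press
snapshots the figure's control points as a keyframe, and playback blends
linearly between them.)

```python
from dataclasses import dataclass, field

@dataclass
class Keyframe:
    frame: int        # frame number in the final animation
    points: list      # snapshot of (x, y, z) control points

@dataclass
class VirtualFigure:
    points: list                                 # mutable control points, moved by glove input
    keyframes: list = field(default_factory=list)

    def press_shutter(self, frame):
        """Record the figure's current pose as the next keyframe."""
        self.keyframes.append(Keyframe(frame, [tuple(p) for p in self.points]))

def interpolate(figure, frame):
    """Linearly blend the two keyframes bracketing `frame` -- the software
    equivalent of clay moving smoothly between hand-set positions."""
    ks = figure.keyframes
    for a, b in zip(ks, ks[1:]):
        if a.frame <= frame <= b.frame:
            t = (frame - a.frame) / (b.frame - a.frame)
            return [tuple(pa[i] + t * (pb[i] - pa[i]) for i in range(3))
                    for pa, pb in zip(a.points, b.points)]
    return ks[-1].points if ks else None
```

A figure posed at frame 0, re-posed at frame 10, then sampled at frame 5 comes
back halfway between the two poses.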

Music - I think it was Rick Jacoby who mentioned a virtual theremin in one
    of his postings.  There's no reason that a virtual instrument has to
    copy the interface of a physical one, or that it must sound like
    a physical instrument.  You know, there were some neat instruments in
    the old Dr. Seuss books.  I wonder if I could simulate one of them?
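(A virtual theremin really only needs a mapping from tracked hand position to
pitch and loudness; everything else -- antennas, Dr. Seuss horns -- is
cosmetic.  A minimal sketch, with made-up ranges; the exponential pitch map is
just one plausible choice:)

```python
def theremin_voice(right_hand_x, left_hand_y, f_min=65.4, f_max=2093.0):
    """Map tracked hand positions (normalized 0..1) to pitch and loudness,
    the way a physical theremin maps hand-to-antenna distance.  Pitch is
    mapped exponentially so equal hand motion gives equal musical intervals;
    loudness is a simple linear gain."""
    x = min(max(right_hand_x, 0.0), 1.0)
    y = min(max(left_hand_y, 0.0), 1.0)
    frequency = f_min * (f_max / f_min) ** x   # Hz, exponential sweep
    amplitude = y                              # 0 = silent, 1 = full volume
    return frequency, amplitude
```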

Dance - This, of course, has already been done in Videospace.  There must
    be endless fascinating variations which combine elements of dance,
    puppetry, and visual art by using the movements of the dancers to
    create images.  Imagine Mummenschanz or Imago using such technology.

There must be many more ways of using VR as an instrument.  Anyone else
want to toss some out?
--
------------------------------------------------------------------------
Speaker-to-managers, aka
Bruce Cohen, Computer Research Lab        email: brucec@tekchips.labs.tek.com
Tektronix Laboratories, Tektronix, Inc.                phone: (503)627-5241
M/S 50-662, P.O. Box 500, Beaverton, OR  97077

mg@munnari.oz.au (Mike Gigante) (11/29/90)

brucec%phoebus.labs.tek.com@RELAY.CS.NET (Bruce Cohen;;50-662;LP=A;) writes:


>Some examples:

>Sculpture - I would love to be able to put on a pair of gloves and a
>    set of goggles and sculpt marble (or light or water or clouds or ...)
>    with my bare hands the way I can sculpt clay.  You could even sculpt
>    moving pieces by moving them and marking positions as keyframes, which
>    leads to ...

This is our current plan (as soon as we finish acquiring the gear).

Robert Owen is a sculptor here who has experimented with conventional
CG modellers in our lab. Conventional modelling tools are still
very clumsy compared to physical ones (i.e. using your hands) for a large
class of models.

With a pair of gloves, eyephones and lots of software, we hope to make
a really neat equivalent to clay modelling in VR space.

Ask me again in six months how it is working out...

Mike Gigante, RMIT Australia
mg@godzilla.cgl.rmit.oz.au

brucec%phoebus.labs.tek.com@RELAY.CS.NET (Bruce Cohen;;50-662;LP=A;) (11/30/90)

In article <11890@milton.u.washington.edu> mg@munnari.oz.au (Mike Gigante) writes:
> 
> Robert Owen is a sculptor here who has experimented with conventional
> CG modellers in our lab. Conventional modelling tools are still
> very clumsy compared to physical (i.e. using your hands) for a large
> class of models.

Bet your life they're clumsy!

> 
> With a pair of gloves, eyephones and lots of software, we hope to make
> a really neat equivilent to clay modelling in VR space.

I'm salivating already.  That's almost everything I want.  The last little
item is something I forgot to mention in my original posting: haptic
feedback (including what we've been calling force-feedback in postings in
this group) in the gloves.  You can sculpt without feedback (at least
I think you can; can't say I've tried it), but I bet it feels like trying
to mold air with wooden paddles.  Mike, do you have plans to investigate
feedback?  The basic support in the original Dataglove, piezo-vibrators on
the fingers, might be enough to make a big difference, even if you can't
distinguish textures.

mg@munnari.oz.au (Mike Gigante) (11/30/90)

brucec%phoebus.labs.tek.com@RELAY.CS.NET (Bruce Cohen;;50-662;LP=A;) writes:



>In article <11890@milton.u.washington.edu> mg@munnari.oz.au (Mike Gigante) writes:
>> 
>> Robert Owen is a sculptor here who has experimented with conventional
>> CG modellers in our lab. Conventional modelling tools are still
>> very clumsy compared to physical (i.e. using your hands) for a large
>> class of models.

>Bet your life they're clumsy!

Yes, I have been using such things for quite a while, and even before I
knew about VR technology I wanted to just "reach in and grab the
damn thing".

>> 
>> With a pair of gloves, eyephones and lots of software, we hope to make
>> a really neat equivilent to clay modelling in VR space.

>I'm salivating already.  That's almost everything I want.  The last little
>item is something I forgot to mention in my original posting: haptic
>feedback (including what we've been calling force-feedback in postings in
>this group) in the gloves.  You can sculpt without feedback (at least
>I think you can; can't say I've tried it), but I bet it feels like trying
>to mold air with wooden paddles.  Mike, do you have plans to investigate
>feedback?  The basic support in the original Dataglove, piezo-vibrators on
>the fingers, might be enough to make a big difference, even if you can't
>distinguish textures.

This of course is the big question. I have thought of surrogate methods
for feedback. One of them is a superposition of a regular grid on the
working space. Any time part of the object or hand passes through
any of the cell walls, a projection of the wall is superimposed on the
object/hand. (Is this clear? It is sort of like passing your hand through
a `force-field' as in sf movies.) If these cell boundaries are of
different colours, you can at least tell proximity. It doesn't help as
much as physical feedback would, but I don't know much about that area.
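(The proximity part of this is tiny in code.  A sketch -- the cell size and
the cubic-grid assumption are arbitrary: given a hand point, find the nearest
cell wall; the returned distance could drive how strongly the `force-field'
highlight is drawn.)

```python
def nearest_wall(point, cell=1.0):
    """For a point in a work volume partitioned into axis-aligned cubic
    cells of side `cell`, return (axis, wall_coordinate, distance) for the
    nearest cell wall.  The closer the hand is to crossing a wall, the
    stronger the projected highlight should be."""
    best = None
    for axis, coord in enumerate(point):
        lower = (coord // cell) * cell    # wall just below this coordinate
        upper = lower + cell              # wall just above
        for wall in (lower, upper):
            d = abs(coord - wall)
            if best is None or d < best[2]:
                best = (axis, wall, d)
    return best
```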

One of the other possibilities is little nodules that can be inflated/raised
(or whatever) when you `touch' some VR object. I saw a mouse at SIGGRAPH
that had something like that.

UNC's system of active force feedback doesn't seem quite so relevant in
the sculpting case. I dunno, maybe we need an active `straight-jacket' that
you wear. Using inverse kinematics, you could constrain the hand position
yet still allow elbow movement, etc. In fact I like this idea...


Mike Gigante, RMIT Australia
mg@godzilla.cgl.rmit.oz.au

brucec%phoebus.labs.tek.com@RELAY.CS.NET (Bruce Cohen;;50-662;LP=A;) (12/01/90)

In article <12012@milton.u.washington.edu> mg@munnari.oz.au (Mike Gigante) writes:
> This of course is the big question. I have thought of surrogate methods
> for feedback. One of them is a superposition of a regular grid on the
> working space. Any time part of the object or hand passes through
> any of the cell walls, a projection of the wall is superimposed on the
> object/hand. (Is this clear? It is sort of like passing your hand through
> a `force-field' as in sf movies.) If these cell boundaries are of
> different colours, you can at least tell proximity. It doesn't help as
> much as physical feedback would, but I don't know much about that area.

Your description's clear enough.  That's one of the techniques we came up
with here at Tek when we were trying to develop better visual feedback for
the monocular 3D display we sold on our graphics workstation.  The problem
there was trying to see where the cursor is in 3-space as you drive it
around with your 6 DOF input device.  With stereo it's not too hard, but
monocular display requires additional depth cues and/or intersection cues.  The
wall-intersection idea works OK as long as the visual aspect of the wall's
projection on the hand doesn't interfere with the object being manipulated.
So the wall can't be too opaque (or it has to be a relatively
coarse mesh), and its texture can't beat destructively with the object's
texture.

The difficulty with any of those techniques is, of course, that they're
visual, and sculpting is a kinesthetic activity (if you doubt that, give
almost anyone a blob of clay and see what she does with it; most people
roll it out in their hands like a sausage for the tactile sensation of it).

> 
> One of the other possibilities is little nodules that can be inflated/raised
> (or whatever) when you `touch' some VR object. I saw a mouse at siggraph
> that had something like that.
> 

I've been playing around with a lot of ideas for tactile feedback; there
don't seem to be any technologies which are really well-suited to the job,
either because they require lots of apparatus surrounding the user, like
the NASA hand, which would probably put a lot of people off, or because
they require manufacturing techniques which are currently only one-off,
like making a glove with several hundred rapidly inflatable/deflatable
pockets on the fingertips and palm.

As far as I can see there are three different types of feedback, with
different uses and requirements:

    1) Position feedback - used to prevent movement through a volume in
       which there is a virtual object.  You could consider it an extreme
       case of type 2, but so far all the implementations I've been able to
       dream up require separate mechanisms.  One way to fake this is to
       have real objects taking up the same space as the virtual ones, and
       just use the VR interface to change the aspect of the objects (like
       the phoney Ming vase I mentioned in a previous posting).  It's a
       limited fake, though.

    2) Force feedback - used to feed back resistance to movement, either
       from viscosity in the medium (e.g., moving around underwater),
       resistance of an object to being moved from inertia or friction, or
       resistance of a material to being deformed (this is the primary use
       in sculpting).  This category is actually two areas: global, which
       involves the whole body's interaction with the medium, and local,
       which could just involve the hands and the virtual objects they're
       holding.

    3) Tactile feedback - used to provide small-scale information on
       surface texture.  This is the really nasty one, because it requires
       lots of effectors.   Unfortunately for this discussion, it's also
       useful in sculpting, because you do want to be able to sense and
       control the finish of an object.  High quality rendering with
       specular reflection and good texture reproduction would help in
       seeing the finish.
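(The first two categories are at least cheap to compute, however hard they are
to actuate.  A sketch of the force computations only -- the constants are
invented, and a real device would need far more than this: a viscous drag for
moving through the medium, and a spring-like push-back for deforming it.)

```python
def viscous_resistance(velocity, damping=2.0):
    """Type-2 feedback at its simplest: a drag force opposing hand motion,
    proportional to speed (F = -c * v), as if sculpting underwater.
    `damping` is a made-up material constant, not a real device parameter."""
    return tuple(-damping * v for v in velocity)

def deformation_resistance(penetration_depth, stiffness=50.0,
                           normal=(0.0, 0.0, 1.0)):
    """Resistance of a material to being deformed: a spring-like force
    pushing back along the surface normal, growing with how far the tool
    has pressed into the virtual clay."""
    depth = max(penetration_depth, 0.0)   # no force when not in contact
    return tuple(stiffness * depth * n for n in normal)
```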

> UNC's system of active force feedback doesn't seem quite so relevant in
> the sculpting case. I dunno, maybe we need an active `straight-jacket' that
> you wear. Using inverse kinematics, you could constrain the hand position
> yet still allow elbow movement, etc. In fact I like this idea...

Yep, that's the way to solve the global problem for force, *if* people are
willing to put on a straight-jacket.  To handle position, you still have to
anchor the straight-jacket in absolute coordinates.
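(The geometry such a controller would have to enforce is simple to sketch,
even if the actuation isn't.  A sphere stands in for an arbitrary virtual
solid, and `constrain_hand` is a made-up name: if the tracked hand would
penetrate the object, push it back to the nearest surface point.)

```python
import math

def constrain_hand(hand, center=(0.0, 0.0, 0.0), radius=1.0):
    """Position constraint against a virtual solid sphere.  An active
    exoskeleton would apply this correction physically, in absolute
    coordinates; here it is just the geometry."""
    dx = [h - c for h, c in zip(hand, center)]
    dist = math.sqrt(sum(d * d for d in dx))
    if dist >= radius or dist == 0.0:
        return tuple(hand)                 # outside (or degenerate): unchanged
    scale = radius / dist                  # project onto the surface
    return tuple(c + d * scale for c, d in zip(center, dx))
```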
