[sci.virtual-worlds] Using bare hands

gourdol@imag.imag.fr (Gourdol Arnaud) (10/11/90)

Using data gloves or a data suit is fine, but it's not a very lightweight
process: you have to put the gloves on, there are cables around you...
I am working in a research group where we are experimenting with
multi-modal interfaces. One of the modes of communication we are
investigating is communicating with our bare hands.
To do this, two cameras are mounted on the computer screen, and
the images are analyzed to build a skeleton of the user's hands.
Those hands can then be displayed on screen and, of course, follow
the movements of the user. The idea, of course, is to control
the computer screen by directly manipulating its content.
We are also testing simultaneous use of speech to issue commands.
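
To make this concrete, here is a minimal sketch (in Python; all names
are illustrative, not our actual code) of the per-frame skeleton data
and of the stereo step that turns a joint seen in both camera images
into a 3-D point:

# Sketch only: per-frame hand skeleton plus a simplified two-camera
# triangulation, assuming parallel cameras and ignoring the
# principal-point offset.

from dataclasses import dataclass
from typing import Dict, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class HandSkeleton:
    """One hand in one video frame: a 3-D position per named joint."""
    joints: Dict[str, Point3D]   # e.g. "wrist", "thumb_tip", "index_tip", ...

def triangulate(left_px: Tuple[float, float],
                right_px: Tuple[float, float],
                baseline_cm: float, focal_px: float) -> Point3D:
    """Depth follows from the horizontal disparity between the two views."""
    disparity = max(left_px[0] - right_px[0], 1e-6)
    z = focal_px * baseline_cm / disparity
    x = left_px[0] * z / focal_px
    y = left_px[1] * z / focal_px
    return (x, y, z)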

Would any of you have ideas on how to use this particular
interface to do things that can't be done with datagloves?

brucec%phoebus.labs.tek.com@RELAY.CS.NET (Bruce Cohen;;50-662;LP=A;) (10/16/90)

The camera hand-tracker has some advantages over gloves other than the
obvious one of convenience to the user:

    It doesn't need to be calibrated to produce useful flexure information
    for the fingers; the light-fiber sensors on the Dataglove (tm?) do have
    to be calibrated at least every time you put them on if you want any
    degree of precision in the measurements (see the calibration sketch
    after this list).

    If the cameras can be automatically swivelled to follow the user, the
    operating volume is much larger than for the Polhemus sensor in the
    Dataglove.

    You can get information on the position and orientation of the arms.
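
As a sketch of what that per-wearing calibration amounts to (assuming
the flex sensors are roughly linear over the range of interest; the
numbers are made up), in Python:

# Rough sketch of a per-wearing flex-sensor calibration.  The camera
# tracker recovers joint angles from geometry and skips this step.

def make_calibration(raw_flat, raw_fist, angle_flat=0.0, angle_fist=90.0):
    """Build a raw-reading -> joint-angle map from two reference poses:
    hand held flat (0 degrees) and clenched into a fist (90 degrees)."""
    scale = (angle_fist - angle_flat) / (raw_fist - raw_flat)
    return lambda raw: angle_flat + scale * (raw - raw_flat)

# Example: calibrate one finger joint, then convert a live reading.
index_angle = make_calibration(raw_flat=0.12, raw_fist=0.87)
print(index_angle(0.50))   # roughly 45 degrees of flexion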

One thing that you could do with the cameras is wave your hands as if
conducting an orchestra, or playing an instrument.
---------------------------------------------------------------------------
Speaker-to-managers, aka
Bruce Cohen, Computer Research Lab        email: brucec@tekcrl.labs.tek.com
Tektronix Laboratories, Tektronix, Inc.                phone: (503)627-5241
M/S 50-662, P.O. Box 500, Beaverton, OR  97077
    

stacey@ria.ccs.uwo.ca (Deb Stacey [SDE]) (10/17/90)

I am posting this for a friend:

-------------------------------------------------------------

Have you investigated David Rokeby's "A Very Nervous System", or 
Vincent J. Vincent/Frank MacDougall's  "Mandala System"?

The former uses cameras to track motion; it was originally designed to
be used with dancers.  It has since been adapted to provide an interface
for physically handicapped people playing musical instruments.

The latter uses cameras to capture the form of the "user" - the user may then
interact directly with screen objects.

Both of these systems were demonstrated at CHI '90's "interactive experience".
There is a Canadian Broadcasting Corp.  show about the "Very Nervous System"
players (I believe they played with Liberace in Vegas at one point...).

Leslie Daigle.
leslie@snowhite.cis.uoguelph.ca

-- 
Deb Stacey, Systems Design Engineering, Univ of Waterloo, Waterloo, Ont, CANADA 
  CSNET  : stacey%watdcsu@waterloo.csnet                                     
  ARPA   : stacey%watdcsu%waterloo.csnet@csnet-relay.arpa
  USENET : utzoo!watmath!watdcsu!stacey                                     

lishka@uwslh.slh.wisc.edu (a.k.a. Chri) (10/20/90)

gourdol@imag.imag.fr (Gourdol Arnaud) writes:

>I am working in a research group where we are experimenting with
>multi-modal interfaces. One of the modes of communication we are
>investigating is communicating with our bare hands.
>The idea, of course, is to control
>the computer screen by directly manipulating its content.
>We are also testing simultaneous use of speech to issue commands.

>Would any of you have ideas on how to use this particular
>interface to do things that can't be done with datagloves?

This is a very interesting area, one that I hadn't ever thought about.
Please keep us informed of your progress.

One thing that comes to mind right away is that it might be easier to
detect multi-hand actions with your system than with two datagloves.
For instance, it might be easier to correlate hand positions relative
to each other with your computer-vision system; e.g., interpreting
sign language, where some of the signs use both hands, might be
simpler.  Another possibility is using the whole arm rather than just
the hand (arm movements are not as complex as hand movements, so
this might be an easy extension of the system you have now), e.g.
detecting wing-flapping motions as "flying" (mild ;-) "I just flew in
from virtual New York, and boy, are my arms ever tired!").
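
As one sketch (in Python) of what "correlating hand positions relative
to each other" might look like once both hands come out of the same
camera pair -- the joint names and the crude "crossed hands" test are
just illustrations, not anything your system necessarily does:

# Sketch: once both hands are in one camera coordinate frame, two-hand
# relationships are simple geometry.  A pair of datagloves would need
# two Polhemus sensors registered to each other to get the same numbers.

import math

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def two_hand_features(left, right):
    """left/right: dicts of joint name -> (x, y, z) position."""
    return {
        "wrist_separation": dist(left["wrist"], right["wrist"]),
        "index_tips_apart": dist(left["index_tip"], right["index_tip"]),
        # crude test for crossed hands: right wrist lies left of left wrist
        "hands_crossed": right["wrist"][0] < left["wrist"][0],
    }

# Example: a two-hand sign where the index fingertips nearly touch.
left_hand = {"wrist": (-10.0, 0.0, 40.0), "index_tip": (-1.0, 5.0, 35.0)}
right_hand = {"wrist": (10.0, 0.0, 40.0), "index_tip": (0.5, 5.0, 35.0)}
print(two_hand_features(left_hand, right_hand))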

On the other hand, a computer vision system would likely not be able
to measure force, whereas a modified dataglove could.  I like the idea
of being free of wires and "exoskeletons" when moving in a VR.

This is all just educated guessing, of course.

                                                .oO Chris Oo.
-- 
Christopher Lishka 608-262-4485  "Dad, don't give in to mob mentality!"
Wisconsin State Lab. of Hygiene                                -- Bart Simpson
   lishka@uwslh.slh.wisc.edu     "I'm not, Son.  I'm jumping on the bandwagon."
   uunet!uwvax!uwslh!lishka                                    -- Homer Simpson

hughes@volcano.Berkeley.EDU (Eric Hughes) (11/08/90)

In article <9319@milton.u.washington.edu> gourdol@imag.imag.fr
(Gourdol Arnaud) asks what camera sensing accomplishes better
than datagloves.

The dataglove I have used (one of VPL's) cannot detect the following
gestures, all of which cameras could.

1) fingertip touches, e.g. make an OK sign (see the contact sketch
after this list).

2) more generally, contact of any sort.

3) the articulation of the bones in the palm (could somebody
supply the name?) in their motion from the wrist.  As a special
case, there is the position of the base thumb knuckle relative
to the back of the hand.

4) quick hand motions.  I would love to be able to draw letters in
midair and have them appear in the space; generalize to Chinese
characters.  Another gesture I would like to have is "wiggle your
pinky."  The sampling limitation of the Polhemus is part of the
problem (see the sampling sketch after this list); a quick 90-degree
turn and back may not take.  The rest may or may not be "a simple
matter of software."  I don't know.

5) the other angular degree of freedom (the side-to-side spread of the
fingers) in the articulation of the finger from the palm.  Currently
only one angle is sensed.  An example of a gesture is the
"whoop-de-do" motion of a straight forefinger tracing out a circle
with the tip.
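
For case (1) the camera version is almost trivial once per-joint 3-D
positions exist.  A sketch in Python (the joint names and the 1.5 cm
threshold are guesses, not anything measured):

# Sketch of case (1): with 3-D joint positions from the cameras,
# fingertip contact is a distance test.  Flex sensors alone report
# bend, not contact.

import math

def touching(tip_a, tip_b, threshold_cm=1.5):
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(tip_a, tip_b)))
    return d < threshold_cm

def is_ok_sign(hand):
    """hand: dict of joint name -> (x, y, z), in centimetres."""
    return touching(hand["thumb_tip"], hand["index_tip"])

And a back-of-the-envelope version of the sampling problem in case
(4); the 60 Hz rate is an assumption, not the Polhemus spec:

def samples_during(gesture_ms, rate_hz=60.0):
    """How many sensor samples land inside a gesture of this duration."""
    return gesture_ms / 1000.0 * rate_hz

# A snap 90-degree twist and back taking ~30 ms spans at most a couple
# of samples, so the turn may simply never show up in the data stream.
print(samples_during(30.0))   # ~1.8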


Here is a litmus test of the power of any hand sensor: can it reliably
detect the sign language alphabet at the rate of four characters per
second?

Eric Hughes
hughes@ocf.berkeley.edu