[sci.virtual-worlds] U-Force!!??!!??

hlr@well.sf.ca.us (Howard Rheingold) (04/07/90)

philbo@chaos.cs.brandeis.edu (Phil Gross) writes:


>Without more than visual and aural feedback, VR is *never* going to be
>as good as the real thing.  U-F seems to be just a miniature motion
>detector, or a 3-d version of those old touch screens.  Do you
>remember those?  They had a slightly recessed screen, and lots of
>holes where beams of indeterminate nature were sent parallel to the
>screen a few millimeters away.  A 3-d infrared box that can do this is
>no big deal; the important thing is that they can afford to market it
>so cheaply.

>I disagree that this infrared technology has much use in a virtual
>reality/cyberspace setting.  One of the biggest advances in the last
>few years in VR research is the use of the VPL data-glove, a glove that
>not only senses motion, but can simulate textures as well...
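
(An aside on the beam-grid screens Phil mentions: the whole trick reduces
to noticing which X and Y beams a finger interrupts. The little sketch
below is my own toy reconstruction; the beam counts, spacing, and names
are made up for illustration, not taken from any actual product.)

# Two banks of infrared emitter/detector pairs run just above the glass,
# one per axis; a finger is located by which beams it breaks.
def locate_touch(blocked_x, blocked_y, beam_spacing_mm=5.0):
    """Return (x, y) in mm, or None if no beam on either axis is broken.

    blocked_x, blocked_y: one boolean per beam, True if interrupted."""
    xs = [i for i, hit in enumerate(blocked_x) if hit]
    ys = [i for i, hit in enumerate(blocked_y) if hit]
    if not xs or not ys:
        return None
    # A fingertip usually breaks a couple of adjacent beams; use the centroid.
    return (sum(xs) / len(xs) * beam_spacing_mm,
            sum(ys) / len(ys) * beam_spacing_mm)

# Example: beams 3 and 4 broken on the X axis, beam 7 on the Y axis.
print(locate_touch([False]*3 + [True, True] + [False]*5,
                   [False]*7 + [True] + [False]*2))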



I visited VR research facilities at ATR, MITI, Fujitsu, and NTT
recently. People at ATR and NTT are aiming at "wireless" VR. They
showed me some demos of a system that involves video, machine vision
and neural network software. A video camera tracks the user. Pattern
detection software attempts to recognize the user's gestures. Neural
networks help the system learn the user's gestural repertoire. They
have a way to go. Does anybody who is knowledgeable about machine
vision and neural networks have an opinion about how well such a
system might work?
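
To make the question concrete, here is a toy version of the pipeline as
I understood it: video frames -> hand-feature extraction -> a small
neural net trained on one user's gestures. The feature choice, network
shape, and synthetic "gestures" below are all my own guesses, not
anything ATR or NTT showed me.

import numpy as np

rng = np.random.default_rng(0)

def extract_features(frame):
    """Stand-in for the machine-vision stage: reduce a grey-level frame
    to a short vector (centroid and spread of the brightest blob)."""
    ys, xs = np.nonzero(frame > 0.5 * frame.max())
    if len(xs) == 0:
        return np.zeros(4)
    return np.array([xs.mean(), ys.mean(), xs.std(), ys.std()]) / frame.shape[0]

def train(frames, labels, classes=3, hidden=8, epochs=1000, lr=0.5):
    """One-hidden-layer net, softmax output, plain gradient descent."""
    X = np.array([extract_features(f) for f in frames])
    Y = np.eye(classes)[labels]
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    W2 = rng.normal(0.0, 0.5, (hidden, classes))
    for _ in range(epochs):
        H = np.tanh(X @ W1)
        Z = H @ W2
        P = np.exp(Z - Z.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)
        G2 = H.T @ (P - Y) / len(X)
        G1 = X.T @ (((P - Y) @ W2.T) * (1 - H**2)) / len(X)
        W1 -= lr * G1
        W2 -= lr * G2
    return W1, W2

def classify(frame, W1, W2):
    return int(np.argmax(np.tanh(extract_features(frame) @ W1) @ W2))

# Fake video: three "gestures" are just bright blobs in different regions.
def fake_frame(gesture):
    f = rng.random((64, 64)) * 0.2
    r, c = [(8, 8), (8, 44), (44, 26)][gesture]
    f[r:r+12, c:c+12] += 1.0
    return f

frames = [fake_frame(g) for g in range(3) for _ in range(20)]
labels = [g for g in range(3) for _ in range(20)]
W1, W2 = train(frames, labels)
print(classify(fake_frame(1), W1, W2))   # expected: 1

The real problem, of course, is that live video of a moving hand is far
messier than bright blobs on a dark background, which is presumably why
they still have a way to go.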

I've played with the dataglove a fair amount, and unless they have
come up with something startling and new, the glove does NOT simulate
textures. Margaret Minsky at the MIT Media Lab and UNC has been using a
force-reflective joystick with her own software that does a remarkable
job of simulating texture. But it is a joystick, not a glove.
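
My guess at the basic trick behind texture from a force-reflective
joystick: treat the texture as a virtual height field and push back on
the stick with a lateral force proportional to the slope under the
cursor, so dragging across the bumps feels bumpy. The height field,
gain, and numbers below are invented for illustration; I don't know the
details of Minsky's software.

import math

def texture_height(x, y, spacing=4.0, height=1.0):
    """A simple sinusoidal "sandpaper" height field, in mm."""
    return height * math.sin(2 * math.pi * x / spacing) * math.cos(2 * math.pi * y / spacing)

def reflected_force(x, y, gain=2.0, eps=1e-3):
    """Lateral force fed back through the joystick motors: -gain * slope."""
    dhdx = (texture_height(x + eps, y) - texture_height(x - eps, y)) / (2 * eps)
    dhdy = (texture_height(x, y + eps) - texture_height(x, y - eps)) / (2 * eps)
    return (-gain * dhdx, -gain * dhdy)

# Dragging the cursor in a straight line across the virtual sandpaper:
for step in range(5):
    x = step * 0.5
    fx, fy = reflected_force(x, 0.0)
    print("x=%4.1f mm   force = (%+.2f, %+.2f)" % (x, fx, fy))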