[sci.virtual-worlds] facial gestures in VR

dbs@speech2.cs.cmu.edu (David Sanner) (11/03/90)

  hey there,

    recently, folks have been posting about various devices for increased 
  data input.  it seems to me that humans use an awful lot of 
  information from each other's physical bodies, e.g. body language.  the 
  data suit/gloves/eye-trackers cover most of the useful areas, but one major
  source of information/muscle grouping goes untapped: the face!  
    body language, and in particular facial gestures, seem to convey an 
  awful lot of "extra" information that can help us define the context of
  conversation, for example.  i propose then, the DATAMASK(tm)(of course ;-)
  this acts like a data glove, but is able to translate the activation levels 
  of facial muscles into a stream of data.
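
    something like this, maybe -- a rough python sketch of the kind of
  stream the DATAMASK could emit (the muscle list and read_sensor() are
  made up here, just to show the flavor of the data, not a real device API):

      import random, time

      # a few facial muscle groups a mask might instrument
      MUSCLES = ["frontalis", "corrugator", "orbicularis_oculi",
                 "zygomaticus_major", "orbicularis_oris"]

      def read_sensor(muscle):
          # stand-in for a real mask sensor; returns an activation level 0.0-1.0
          return random.random()

      def sample_frame(timestamp):
          # one frame of the stream: a timestamp plus per-muscle activations
          return {"t": timestamp,
                  "activations": {m: read_sensor(m) for m in MUSCLES}}

      for i in range(3):                 # emit a few frames, data-glove style
          print(sample_frame(time.time()))
          time.sleep(0.1)
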
    any thoughts?

   ... dave sanner

  dbs@speech2.cs.cmu.edu 

sobiloff@acc.stolaf.edu (Chrome Cboy) (11/05/90)

In article <10521@milton.u.washington.edu> dbs@speech2.cs.cmu.edu (David Sanner)
 writes:
>    body language, and in particular facial gestures, seem to convey an 
>  awful lot of "extra" information that can help us define the context of
>  conversation, for example.  i propose then, the DATAMASK(tm)(of course ;-)
>  this acts like a data glove, but is able to translate the activation levels 
>  of facial muscles into a stream of data.
>    any thoughts?

The Media Lab played with this concept a number of years ago (1979 was when
the project received funding from DARPA). They had a conference room with
plastic faces standing in for individuals who were at remote sites. The faces
were actually masks molded over video tubes, each displaying a picture of the
remote individual. Thus, individuals at remote sites had a camera pointed at
their faces, which transmitted their facial gestures to the other sites, and
they were surrounded by the plastic faces of the other people they were
conferencing with. Nicholas Negroponte claims that the effect was incredibly
realistic, but that the business world thought it was too frivolous.

It also seems that the rock group, "Talking Heads," took their name from
this project, and that the cover art on their first album was done by
students at MIT who were working on the Talking Faces project and who
had demo'd the system for the band.
--
                                                        ______________
_______________________________________________________/ Chrome C'Boy \_________
| "One of the biggest obstacles to the future of computing is C. C is the last |
| attempt of the high priesthood to control the computing business. It's like  |
| the scribes and the Pharisees who did not want the masses to learn how to    |
| read and write."                        -Jerry Pournelle                     |

rnm@uunet.UU.NET (Robert Marsanyi) (11/08/90)

re: facial data capture - it seems to me this information is mostly useful
for conveying expression to another human being visually.  Hence, just
transmitting an image of someone's face will do; no muscle-sensing or other
contortive stuff required.
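
(a rough sketch of what I mean, in python with OpenCV -- cv2 is real, but
send_frame() is just a stand-in for whatever transport carries the image:
grab a frame of the user's face, compress it, ship it off.)

    import cv2

    def send_frame(jpeg_bytes):
        # stand-in for the network transport to the far end
        print("would send %d bytes" % len(jpeg_bytes))

    cap = cv2.VideoCapture(0)                  # default camera, aimed at the face
    ok, frame = cap.read()                     # one frame
    if ok:
        ok, buf = cv2.imencode(".jpg", frame)  # compress for transmission
        if ok:
            send_frame(buf.tobytes())
    cap.release()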

--rbt