[sci.virtual-worlds] tried to post:bounced:thought I'd try this address...

wave@media-lab.media.mit.edu (Michael B. Johnson) (11/07/90)

I tried to post this to sci.virtual-worlds, but it bounced, so I
thought I'd try this address.  Hope it gets through...


To: mit-eddie!sci-virtual-worlds
Path: media-lab!wave
From: wave@media-lab.MEDIA.MIT.EDU (Michael B. Johnson)
Newsgroups: sci.virtual-worlds
Subject: Re: facial gestures in VR
Message-Id: <3915@media-lab.MEDIA.MIT.EDU>
Date: 5 Nov 90 11:31:56 GMT
References: <10521@milton.u.washington.edu> <10591@milton.u.washington.edu>
Reply-To: wave@media-lab.media.mit.edu (Michael B. Johnson)
Organization: MIT Media Lab, Cambridge MA
Lines: 66


(Chrome Cboy) writes:
>>
>>(David Sanner) writes:
>>>    body language, and in particular facial gestures, seem to convey an 
>>>  awful lot of "extra" information that can help us define the context of
>>>  conversation, for example.  i propose then, the DATAMASK(tm)(of course ;-)
>>>  this acts like a data glove, but is able to translate the activation levels
>>>  of facial muscles into a stream of data.
>>>    any thoughts?
>>
>>The Media Lab played with this concept a number of years ago (1979 was when
>>the project received funding from DARPA). They had a conference room with
>>plastic faces for individuals that were at remote sites. The faces were
>>actually molded faces of a video tube which displayed a picture of a remote
>>individual. 

Actually, the faces were just plaster casts of individuals' faces, and the
video was just projected on the white busts.  Very low tech - no need for
esoteric warpable video tubes...  I think they actually might even have just
sent the deltas, i.e. what changed from frame to frame, and that was why the
whole thing was so cool: the communication bandwidth necessary to do it was
so incredibly small.
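
Just to make the bandwidth point concrete, here is a tiny numpy sketch of the
delta idea.  It is purely illustrative, with made-up names and a made-up
threshold; I'm not claiming this is how the 1979 system actually worked:

    # Purely illustrative: for a mostly static face, only a handful of pixels
    # change between frames, so sending just the changed pixels stays cheap.
    import numpy as np

    def encode_delta(prev, cur, threshold=8):
        """Return (indices, values) for pixels that changed noticeably."""
        diff = np.abs(cur.astype(np.int16) - prev.astype(np.int16))
        changed = np.flatnonzero(diff > threshold)
        return changed, cur.ravel()[changed]

    def decode_delta(prev, changed, values):
        """Rebuild the current frame by patching changed pixels into the previous one."""
        frame = prev.ravel().copy()
        frame[changed] = values
        return frame.reshape(prev.shape)

    rng = np.random.default_rng(0)
    prev = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
    cur = prev.copy()
    cur[60:70, 55:75] += 40                          # a small moving region, e.g. the mouth
    idx, vals = encode_delta(prev, cur)
    print(idx.size, "of", cur.size, "pixels sent")   # a tiny fraction of the frame
    assert np.array_equal(decode_delta(prev, idx, vals), cur)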

>>Thus, individuals at remote sites had a camera pointed at
>>their face which transmitted their facial gestures to remote sites, and were
>>surrounded by plastic faces of the other individuals they were conferencing
>>with. Nicholas Negroponte claims that the effect was incredibly realistic,
>>but that business thought it was too frivolous.
>>
>>It also seems that the rock group, "Talking Heads," took their name from
>>this project, and that the cover art on their first album was done by
>>students at MIT who were working on the Talking Faces project and who
>>had demo'd the system for the band.

I don't believe Talking Heads took their name from this, but I could be wrong.
Walter Bender, who is still a research scientist here, did work on the cover
art for the "Remain in Light" album (see Computer Images: credit for Walter).
[Was "Remain in Light" their first album?]

The apocryphal story goes that Nicholas liked the video Talking Heads 
project, and whenever he would glance at the computer time signup sheet, he
would be pleased that "Talking Heads" had signed up for so much time.  It
wasn't till later that he found out that the "Talking Heads" project that was
signing up for all the time had nothing to do with the video Talking Heads
project he was thinking of...


On a more current note, I posted a note related to the original thread of this
discussion a few months ago.  Some guys in the Vision & Modeling group here at
the Lab can take a depth image of a face and generate a deformable superquadric
representation of that face.  Couple that with the real-time texture mapping
of today's high-end Stardent or SGI VGX box, and you could do virtual Talking
Heads by digitizing a person against a blue screen as they spoke, masking out
the matte, and texture mapping it onto the superquad!  Pretty cool...
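
To give a rough feel for the pieces involved, here is a throwaway numpy sketch
of a superquadric surface and a crude blue-screen matte.  This is my own
illustration, not the Vision & Modeling group's code; the parameterization,
names, and threshold are assumptions, and a real setup would fit the
superquadric to the depth image and let the graphics hardware do the texture
mapping:

    import numpy as np

    def superquadric(a=(1.0, 1.3, 1.6), eps=(0.9, 0.9), n=64):
        """Sample a superquadric surface (the deformable blob standing in for
        the head) on an n x n grid; (eta, omega) double as texture coords."""
        def fexp(base, e):
            return np.sign(base) * np.abs(base) ** e
        eta = np.linspace(-np.pi / 2, np.pi / 2, n)      # latitude
        omega = np.linspace(-np.pi, np.pi, n)            # longitude
        eta, omega = np.meshgrid(eta, omega)
        x = a[0] * fexp(np.cos(eta), eps[0]) * fexp(np.cos(omega), eps[1])
        y = a[1] * fexp(np.cos(eta), eps[0]) * fexp(np.sin(omega), eps[1])
        z = a[2] * fexp(np.sin(eta), eps[0])
        return x, y, z, eta, omega

    def blue_screen_matte(frame_rgb, blue_factor=1.3):
        """Crude chroma key: call a pixel background if blue dominates red and green."""
        r, g, b = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]
        return ~((b > blue_factor * r) & (b > blue_factor * g))   # True where the face is

    x, y, z, u, v = superquadric()
    frame = np.random.rand(480, 640, 3)                  # stand-in for one digitized frame
    matte = blue_screen_matte(frame)
    print(x.shape, float(matte.mean()))                  # surface grid size, foreground fraction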

>>--
>>                                                        ______________
>>_______________________________________________________/ Chrome C'Boy \_________


-- 

-->  Michael B. Johnson
-->  MIT Media Lab      --  Computer Graphics & Animation Group
-->  (617) 253-0663     --  wave@media-lab.media.mit.edu