[comp.graphics] A thought on facial representations

sun@me.utoronto.ca (Andy Sun Anu-guest) (11/04/90)

Hi,

I was reading a magazine article (Computer Graphics World) about
simulating a human face. The process used was to digitize a real
face into polygons and manipulate those entities to create facial
expressions. I wonder if anyone out there has tried using boundary
surface representations or NURBS-based surfaces to model a human face
and then changing the expression by varying the curve parameters. A
crazier thought: if one can use fractals to simulate topography and
waves, is it possible to use fractal techniques to describe an entire
human face?
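
(To make concrete what I mean by "varying the curve parameters", here is a
rough C sketch with everything in it invented for illustration: a single
bicubic Bezier patch stands in for a NURBS face patch, and the "expression"
is changed simply by interpolating between two sets of control points, one
digitized neutral and one for a smile.)

/* Rough sketch: morph one bicubic Bezier patch (say, a cheek) between
 * a "neutral" and a "smile" set of control points.  The control point
 * values below are invented purely for illustration.
 */
#include <stdio.h>

typedef struct { double x, y, z; } Point;

/* Cubic Bernstein basis functions */
static double bern(int i, double t)
{
    double s = 1.0 - t;
    switch (i) {
    case 0:  return s * s * s;
    case 1:  return 3.0 * t * s * s;
    case 2:  return 3.0 * t * t * s;
    default: return t * t * t;
    }
}

/* Evaluate a bicubic Bezier patch at parameters (u, v) */
static Point patch_eval(Point cp[4][4], double u, double v)
{
    Point p = { 0.0, 0.0, 0.0 };
    int i, j;
    for (i = 0; i < 4; i++)
        for (j = 0; j < 4; j++) {
            double b = bern(i, u) * bern(j, v);
            p.x += b * cp[i][j].x;
            p.y += b * cp[i][j].y;
            p.z += b * cp[i][j].z;
        }
    return p;
}

int main(void)
{
    Point neutral[4][4], smile[4][4], blended[4][4];
    double w = 0.5;   /* expression weight: 0 = neutral, 1 = full smile */
    int i, j;

    /* Invented control nets: a flat patch vs. one bulged at the centre */
    for (i = 0; i < 4; i++)
        for (j = 0; j < 4; j++) {
            neutral[i][j].x = smile[i][j].x = (double)i;
            neutral[i][j].y = smile[i][j].y = (double)j;
            neutral[i][j].z = 0.0;
            smile[i][j].z = (i == 1 || i == 2) && (j == 1 || j == 2) ? 1.0 : 0.0;
        }

    /* "Changing the expression" is just interpolating control points */
    for (i = 0; i < 4; i++)
        for (j = 0; j < 4; j++) {
            blended[i][j].x = neutral[i][j].x;
            blended[i][j].y = neutral[i][j].y;
            blended[i][j].z = (1.0 - w) * neutral[i][j].z + w * smile[i][j].z;
        }

    {
        Point p = patch_eval(blended, 0.5, 0.5);
        printf("patch centre at w=%.2f: (%.3f, %.3f, %.3f)\n", w, p.x, p.y, p.z);
    }
    return 0;
}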

BTW, this is not my field of research (I am into robotic sensors), but I
am interested in computer graphics. I cannot see any immediate practical
applications of the above (maybe letting plastic surgeons preview their
patients' final appearance, facial animation, and making molds of faces as
in Mission: Impossible). I am just curious whether it is conceptually
possible and whether anyone has explored this idea yet.

Andy

_______________________________________________________________________________
Andy Sun                            | Internet: sun@me.utoronto.ca
University of Toronto, Canada       | UUCP    : ...!utai!me!sun
Dept. of Mechanical Engineering     | BITNET  : sun@me.utoronto.BITNET

musgrave-forest@cs.yale.edu (F. Ken Musgrave) (11/04/90)

  We all know how imprudent it is to say that something will never happen,
but it is unlikely that fractals will be found useful in facial modelling.
The essential feature of a fractal model is self-similarity over a range of
scales.  It would be surprising if this were found to apply to aspects of
modelling the human face...
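
  (For readers who haven't met the term, the construction below, a
deliberately minimal one-dimensional midpoint-displacement sketch in C with
invented parameters, repeats the same random perturbation at every
subdivision level, just at half the scale and reduced amplitude each time.
That recurrence across scales is the self-similarity I mean; a face's
structure at the scale of a nose is nothing like its structure at the scale
of a pore.)

/* Minimal illustration of self-similarity: one-dimensional midpoint
 * displacement.  Each subdivision adds detail that is, statistically,
 * a scaled-down copy of the detail at the level above.
 */
#include <stdio.h>
#include <stdlib.h>

#define LEVELS 4
#define N ((1 << LEVELS) + 1)   /* 17 samples */

int main(void)
{
    double h[N];
    double amp = 1.0;       /* displacement amplitude at the coarsest level   */
    double rough = 0.5;     /* amplitude falls by this factor each level down */
    int step, i;

    h[0] = 0.0;
    h[N - 1] = 0.0;
    srand(1);

    for (step = N - 1; step > 1; step /= 2) {
        for (i = step / 2; i < N; i += step) {
            double r = ((double)rand() / RAND_MAX - 0.5) * 2.0 * amp;
            h[i] = 0.5 * (h[i - step / 2] + h[i + step / 2]) + r;
        }
        amp *= rough;       /* same construction, smaller scale: self-similar */
    }

    for (i = 0; i < N; i++)
        printf("%2d %+.3f\n", i, h[i]);
    return 0;
}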

  Fractal geometry is a powerful language for describing Nature, but it is 
certainly not the Last Word.  We're currently in the heat of finding out 
just what it is, and isn't, good for.

				Hey, Like, Prove Me Wrong, Please!
								 Ken
-- 
Ken Musgrave			musgrave-forest@yale.edu
Yale U Depts of CS and Math	(203) 432-4016
Box 2155 Yale Station		"But Mr. Natural! is there any future?!?"
New Haven, CT 06520		"Not yet."

turk@media-lab.media.mit.edu (Matthew Turk) (11/07/90)

>  .... I cannot see any immediate practical
>  applications of the above (maybe letting plastic surgeons preview their
>  patients' final appearance, facial animation, and making molds of faces as
>  in Mission: Impossible). ....

There are a number of immediate practical applications of rendering
and animating faces.  One interesting application is close to what you
mentioned: surgeons using a 3D graphics system before craniofacial
surgery (to plan it) and during (to assist in the operation).
Model-based coding is another area of active research that addresses
facial animation: at one end, the analysis stage, parameters describing
the face in view are calculated; at the other end, the synthesis stage,
the face is reconstructed and animated for the viewer.  The interesting
applications of this lie not just in reducing bandwidth, but in the
ways you can manipulate the data at the receiving end.
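
(A toy C sketch of the idea, with the parameter set and numbers entirely
invented, no real coder works with just these four values: the "channel"
carries a few face parameters per frame instead of pixels, and because the
receiver gets parameters rather than an image, it is free to alter them
before rendering.)

/* Toy model-based coding loop: the "channel" carries a few face
 * parameters per frame rather than an image.  Everything here is
 * invented for illustration; real systems estimate such parameters
 * from the camera image and drive a full 3-D face model with them.
 */
#include <stdio.h>

typedef struct {
    float jaw_open;      /* 0 = closed, 1 = fully open      */
    float mouth_width;   /* relative to neutral             */
    float brow_raise;    /* 0 = neutral, 1 = fully raised   */
    float head_yaw;      /* radians, + = looking left       */
} FaceParams;

/* Analysis end: in reality, estimated from the incoming video frame */
static FaceParams analyse_frame(int frame)
{
    FaceParams p;
    p.jaw_open    = (frame % 10) / 10.0f;   /* pretend the mouth is moving */
    p.mouth_width = 1.0f;
    p.brow_raise  = 0.0f;
    p.head_yaw    = 0.0f;
    return p;
}

/* Synthesis end: drive a face model; here we just report what we would
 * render.  Because we have parameters, not pixels, we can also edit them,
 * e.g. exaggerate the jaw for a hard-of-hearing viewer. */
static void synthesise_frame(FaceParams p)
{
    p.jaw_open *= 1.5f;    /* manipulation at the receiving end */
    printf("render face: jaw=%.2f width=%.2f brow=%.2f yaw=%.2f\n",
           p.jaw_open, p.mouth_width, p.brow_raise, p.head_yaw);
}

int main(void)
{
    int frame;
    for (frame = 0; frame < 5; frame++) {
        FaceParams p = analyse_frame(frame);   /* ~16 bytes/frame on the wire */
        synthesise_frame(p);
    }
    return 0;
}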

	Matthew Turk
	MIT Media Lab			turk@media-lab.media.mit.edu

andyrose@batcomputer.tn.cornell.edu (Andy Rose) (11/08/90)

Another thought on facial representation comes from the advertising arena.
By animating faces, a company can "design" its product rep. Imagine
a somewhat white-red-yellow-black-asian-latin-nordic type.

Newscasters would of course be replaced, since the animated face never needs
makeup, never looks tired, and never makes a mistake.

In a true two-way environment, I could select the face I want telling me the news.

Animated faces also never age, so you can have that genial Dan-Walter-Tom-Rather-
Cronkite-Brokaw image f o r e v e r...

Of course, animated animals could deliver the news for children, and who
knows what MTV will do.

  
-- 
Andrew Newkirk Rose '91 Department of Visualization CNSF/Theory Center
632 E & T Building, Hoy Road Ithaca, NY 14583  
607 254 8686  andy@cornellf.tn.cornell.edu

Alvin@cup.portal.com (Alvin Henry White) (11/08/90)

In this line of thinking, I have been trying to hatch a setup that takes
a TV signal and buffers it to a video tape for a couple of seconds, while
at the same time extracting the closed-captioned subtitles, running them
through a computer translator word for word, and dubbing a second line
under the first in your desired second language. At the same time, the
second language is run through a text-to-speech processor, producing a
stereo signal with your desired language on the second channel. Now,
with facial animation, you could have a second monitor for the other eye
that synthesized the facial expressions. If you added the kind of inset
picture that speech teachers use to teach speech, the kind that shows the
position of the tongue, teeth, and whether or not air is being expelled,
we could teach everybody on earth how to speak our world language while
watching the 6 o'clock news.
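
(Here is a bare skeleton of that pipeline in C, with every processing stage
a hypothetical stub, just to show the ordering and why the couple-of-seconds
buffer matters: the translation and text-to-speech have to finish while the
frame is still sitting in the delay buffer.)

/* Skeleton of the pipeline described above -- every stage here is a
 * hypothetical stub, present only to show the order of operations.
 */
#include <stdio.h>
#include <string.h>

#define DELAY_FRAMES 60               /* ~2 seconds at 30 frames/sec */

typedef struct {
    int   video;                      /* stand-in for the picture     */
    char  caption[128];               /* decoded closed-caption text  */
    char  caption2[128];              /* dubbed second-language line  */
    short audio2[800];                /* synthesized second channel   */
} Frame;

static Frame delay_buf[DELAY_FRAMES]; /* circular delay buffer        */

/* -- hypothetical stages -------------------------------------------- */
static void decode_caption(Frame *f)      { (void)f; /* pull captions */ }
static void translate(const char *in, char *out)
    { strcpy(out, in); /* word-for-word translation would go here */ }
static void text_to_speech(const char *txt, short *pcm, int n)
    { (void)txt; memset(pcm, 0, n * sizeof(short)); }

int main(void)
{
    long t;
    for (t = 0; ; t++) {
        Frame *f = &delay_buf[t % DELAY_FRAMES];

        /* 1. play out the frame buffered DELAY_FRAMES ago (once full) */
        if (t >= DELAY_FRAMES)
            printf("out: %s | %s\n", f->caption, f->caption2);

        /* 2. capture the incoming frame into the same slot */
        f->video = 0;
        snprintf(f->caption, sizeof f->caption, "caption at t=%ld", t);

        /* 3. while it sits in the buffer, do the slow work */
        decode_caption(f);
        translate(f->caption, f->caption2);
        text_to_speech(f->caption2, f->audio2, 800);

        if (t == DELAY_FRAMES + 2) break;   /* stop the demo */
    }
    return 0;
}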

Alvin H. White, Gen. Sect.
G.O.D.S.B.R.A.I.N.
Government Online Database Systems
Bureau for Resource Allocations to Information Networks
[ alvin@cup.portal.com (OR) ..!sun!portal!cup.portal.com!alvin ]

nad@cl.cam.ac.uk (Neil Dodgson) (11/09/90)

In article <35703@cup.portal.com> Alvin@cup.portal.com (Alvin Henry White) writes:
>In this line of thinking, I have been trying to hatch a setup that takes
>a TV signal and buffers it to a video tape for a couple of seconds, while
>at the same time extracting the closed-captioned subtitles, running them
>through a computer translator word for word, and dubbing a second line
>under the first in your desired second language. At the same time, the
>second language is run through a text-to-speech processor, producing a
>stereo signal with your desired language on the second channel. Now,
>with facial animation, you could have a second monitor for the other eye
>that synthesized the facial expressions. If you added the kind of inset
>picture that speech teachers use to teach speech, the kind that shows the
>position of the tongue, teeth, and whether or not air is being expelled,
>we could teach everybody on earth how to speak our world language while
>watching the 6 o'clock news.

One of the proposed HDTV standards has the requirement for EIGHT channels
of sound.  One of the proposed uses of these channels is to broadcast, say,
the commentary to a major sporting event in five or six different languages
and to put the crowd noise, etc., on the other two or three channels.  The
viewer can then choose which language to get a commentary in (if any :-)
and how much crowd noise to mix in.
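
(A toy receiver-side mixer in C, with the channel layout and sample format
invented here, channels 0-5 carrying commentary and 6-7 crowd ambience: the
viewer's two choices just become an index and a gain in the sum.)

/* Toy receiver-side mixer for the scheme above.  Channel layout and
 * float samples are invented; the point is only that the mix happens
 * at the receiver, under the viewer's control.
 */
#include <stdio.h>

#define CHANNELS 8
#define BLOCK  256                 /* samples per channel per block */

static void mix_block(float in[CHANNELS][BLOCK], float out[BLOCK],
                      int commentary_ch, float crowd_gain)
{
    int i;
    for (i = 0; i < BLOCK; i++)
        out[i] = in[commentary_ch][i]                        /* chosen language */
               + crowd_gain * 0.5f * (in[6][i] + in[7][i]);  /* crowd noise mix */
}

int main(void)
{
    static float in[CHANNELS][BLOCK];  /* would come from the sound decoder  */
    float out[BLOCK];

    in[3][0] = 1.0f;                   /* pretend channel 3 carries a voice  */
    in[6][0] = 0.2f;                   /* and channel 6 some crowd noise     */

    mix_block(in, out, 3, 0.75f);      /* viewer chose language 3, 75% crowd */
    printf("first mixed sample: %f\n", out[0]);
    return 0;
}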

This has very little to do with comp.graphics, but I thought you'd be
interested anyway!



Neil Dodgson,           | nad@cl.cam.ac.uk
Computer Laboratory,    |
Pembroke Street,        |
Cambridge, U.K. CB2 3QG |

uad1077@dircon.uucp (11/10/90)

And of course, animated political cartoons....

I believe Keith Waters actually did this for his SIGGRAPH paper a few
years ago, but he used a script from a fairly vicious British TV satire
programme, and I heard the audience went rather quiet...  Does anyone
remember this?  Can anyone confirm it?

-- 
Ian D. Kemmish                    Tel. +44 767 601 361
18 Durham Close                   uad1077@dircon.UUCP
Biggleswade                       ukc!dircon!uad1077
Beds SG18 8HZ United Kingdom  uad1077%dircon@ukc.ac.uk