[net.graphics] Texture mapping

ken@turtlevax.UUCP (Ken Turkowski) (07/03/85)

In article <221@cmu-cs-h.ARPA> rfb@cmu-cs-h.ARPA (Rick Busdiecker) writes:
>Does anyone have an algorithm they'd be willing to share for mapping a
>raster image to an arbitrary polygon in space?  How about a pointer to a
>published algorithm?

Texture mapping is basically a resampling problem.  First you need to
generate a mapping from the image to the polygon, then you need to
invert it.  For every point in the target, map it back into the source,
and apply a digital filter to the neighborhood of the source, so that
the target image meets the Nyquist criterion (Shannon sampling
theorem).
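The inverse-mapping loop described above can be sketched in Python (a
minimal illustration, not from the post; bilinear filtering stands in
for the proper Nyquist-rate filter, and all names are mine):

```python
import numpy as np

def backward_map(src, inverse_map, out_shape):
    """For each target pixel, apply the inverse mapping to find its
    source location, then filter the source neighborhood (here a crude
    bilinear filter rather than a full anti-aliasing kernel)."""
    h, w = out_shape
    dst = np.zeros((h, w), dtype=float)
    sh, sw = src.shape
    for y in range(h):
        for x in range(w):
            sx, sy = inverse_map(x, y)          # target -> source
            x0, y0 = int(np.floor(sx)), int(np.floor(sy))
            if 0 <= x0 < sw - 1 and 0 <= y0 < sh - 1:
                fx, fy = sx - x0, sy - y0
                dst[y, x] = ((1-fx)*(1-fy)*src[y0, x0]
                             + fx*(1-fy)*src[y0, x0+1]
                             + (1-fx)*fy*src[y0+1, x0]
                             + fx*fy*src[y0+1, x0+1])
    return dst

# With the identity mapping, interior pixels reproduce the source.
img = np.arange(16.0).reshape(4, 4)
out = backward_map(img, lambda x, y: (float(x), float(y)), (4, 4))
```

A real implementation would replace the bilinear weights with a
Gaussian or other wider kernel, as the Feibush et al. paper advocates.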

A great paper, one that convinced me that the Gaussian point spread
function is the best all-around PSF for image transformations and
arbitrary anti-aliased graphics, is the one by Feibush, Levoy, and
Cook, "Synthetic Texturing Using Digital Filters," presented at the
Seattle SIGGRAPH (1980).

For computational efficiency, you should look at the separable
algorithm presented by Alvy Ray Smith and Ed Catmull at the same
SIGGRAPH.

Also see "Pyramidal Parametrics" by Lance Williams (SIGGRAPH 1983) and
the summed-area tables paper presented by Frank Crow at SIGGRAPH 1984.
-- 

Ken Turkowski @ CADLINC, Menlo Park, CA
UUCP: {amd,decwrl,hplabs,nsc,seismo,spar}!turtlevax!ken
ARPA: turtlevax!ken@DECWRL.ARPA

shep@datacube.UUCP (07/06/85)

In article <221@cmu-cs-h.ARPA> rfb@cmu-cs-h.ARPA (Rick Busdiecker) writes:
>Does anyone have an algorithm they'd be willing to share for mapping a
>raster image to an arbitrary polygon in space?  How about a pointer to a
>published algorithm?

And Ken Turkowski @ CADLINC, Menlo Park, CA replies:
>Texture mapping is basically a resampling problem.  First you need to
>generate a mapping from the image to the polygon, then you need to
>invert it.  For every point in the target, map it back into the source,
>and apply a digital filter to the neighborhood of the source, so that
>the target image meets the Nyquist criterion (Shannon sampling
>theorem).

For mapping a 2-d raster-scan image onto a 2-d polygon in three-space,
the "backward mapping" Turkowski describes is efficient. Furthermore,
many types of mappings (e.g., quadratic division) are separable, as was
noted. A common use of this is the separated perspective backward
mapping in the Ampex Digital Optics television special-effects device,
which uses separate horizontal and vertical FIR interpolators to
interpolate a single target pixel from several source pixels. A good
description of this is in the US patent assigned to Ampex by architect
Steve Gabriel and engineer Phil Bennett.
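The two-pass separable idea can be sketched as follows (an illustrative
Python sketch, not the Ampex design; the hardware used long FIR kernels
where this uses 2-tap linear interpolation, and all names are mine):

```python
import numpy as np

def fir_resample_1d(samples, coords):
    """Resample a 1-d signal at fractional coordinates with a 2-tap
    (linear) FIR kernel; real hardware would use a longer kernel."""
    out = np.zeros(len(coords))
    for i, c in enumerate(coords):
        j = int(np.floor(c))
        if 0 <= j < len(samples) - 1:
            f = c - j
            out[i] = (1 - f) * samples[j] + f * samples[j + 1]
    return out

def separable_scale(img, sx, sy):
    """Two-pass separable resampling: first resample every row
    horizontally, then resample every column of the intermediate."""
    h, w = img.shape
    nw, nh = int(w * sx), int(h * sy)
    mid = np.array([fir_resample_1d(img[y], np.arange(nw) / sx)
                    for y in range(h)])               # (h, nw)
    out = np.array([fir_resample_1d(mid[:, x], np.arange(nh) / sy)
                    for x in range(nw)]).T            # (nh, nw)
    return out

out = separable_scale(np.ones((4, 4)), 2.0, 2.0)
```

The point of separability is that each pass is a cheap 1-d filter, which
is what makes a hardware pipeline of row and column interpolators work.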

General-purpose hardware to perform these backward mappings in real time
will be available by year's end. "General purpose" means that for every
target pixel there is an address in a "comes-from" address store that
holds that pixel's source location in 2-space. Although this is optimal
for many applications, it does have its pitfalls:

  Bi-cubic patches, a computationally efficient way of describing a
surface map, do NOT separate in the general case.

  Backward mapping usually selects only one point in source space per
target pixel, which makes it useless for many mappings. Enter "forward
mapping," where each source pixel is scattered at different amplitudes
into the target-space frame accumulator. Forward mapping has destroyed
many good hardware engineers, and at least one company has made a
product that shows exactly how -NOT- to do it!
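The forward-mapping idea being described can be sketched like this (a
minimal Python illustration under my own naming, not any vendor's
design): each source pixel is pushed through the forward map and its
energy split among the surrounding target pixels, with a weight plane
kept for normalization.

```python
import numpy as np

def forward_map(src, mapping, out_shape):
    """Forward ("scattering") mapping: distribute each source pixel
    bilinearly among the four target pixels around its mapped position.
    A weight accumulator tracks coverage so overlaps normalize out."""
    acc = np.zeros(out_shape)
    wgt = np.zeros(out_shape)
    sh, sw = src.shape
    th, tw = out_shape
    for y in range(sh):
        for x in range(sw):
            tx, ty = mapping(x, y)              # source -> target
            x0, y0 = int(np.floor(tx)), int(np.floor(ty))
            fx, fy = tx - x0, ty - y0
            for dy, dx, w in ((0, 0, (1-fx)*(1-fy)), (0, 1, fx*(1-fy)),
                              (1, 0, (1-fx)*fy),     (1, 1, fx*fy)):
                yy, xx = y0 + dy, x0 + dx
                if 0 <= xx < tw and 0 <= yy < th and w > 0:
                    acc[yy, xx] += w * src[y, x]
                    wgt[yy, xx] += w
    nz = wgt > 0
    acc[nz] /= wgt[nz]
    return acc

img = np.arange(9.0).reshape(3, 3)
out = forward_map(img, lambda x, y: (float(x), float(y)), (3, 3))
```

The hard part in practice is exactly what the normalization hides:
under minifying maps many source pixels pile onto one target pixel, and
under magnifying maps some target pixels receive nothing and must be
filled, which is where naive hardware designs go wrong.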

This mapping/warping stuff is really computationally intensive, usually
because of the interpolation and sometimes because of the address
generation. Array processors take seconds to "warp" a 512*512*8*3
image, so for many applications dedicated hardware is required.

Shep Siegel                    ihnp4!datacube!shep
Datacube Inc.        ima!inmet!mirror!datacube!shep
617-535-6644         decvax!cca!mirror!datacube!shep
4 Dearborn Rd.       decvax!genrad!wjh12!mirror!datacube!shep
Peabody, Ma. 01960   {mit-eddie,cyb0vax}!mirror!datacube!shep

ken@turtlevax.UUCP (Ken Turkowski) (07/16/85)

In article <6700018@datacube.UUCP> shep@datacube.UUCP writes:
>  Backward mapping has the problem of usually selecting only one point
>in source space. Thus making it useless for many mappings.

That's right.  The NEIGHBORHOOD covered by the convolution kernel in
the target space, as well as the TARGET POINT, must be transformed by
the inverse mapping.

For linear transformations (i.e. mapping to a polygon) this is
trivial.  For "nice" nonlinear transformations (differentiable to
several orders) the neighborhood mapping can be linearized on a
per-target-pixel basis.
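The per-target-pixel linearization can be sketched as estimating the
inverse map's Jacobian at each target pixel (illustrative Python; the
finite-difference helper and its names are my own, not from the post):

```python
import numpy as np

def local_jacobian(inv_map, x, y, eps=0.5):
    """Finite-difference estimate of the inverse map's Jacobian at a
    target pixel: how a small step in target space moves the sample
    point in source space."""
    sx0, sy0 = inv_map(x, y)
    sxx, syx = inv_map(x + eps, y)   # step in target x
    sxy, syy = inv_map(x, y + eps)   # step in target y
    return np.array([[(sxx - sx0) / eps, (sxy - sx0) / eps],
                     [(syx - sy0) / eps, (syy - sy0) / eps]])

# To first order, a filter offset d in target space lands at J @ d in
# source space, so the rectangular kernel footprint becomes a
# parallelogram (or, for a Gaussian, an ellipse).
J = local_jacobian(lambda x, y: (2.0 * x, 3.0 * y), 10, 10)
```

For a smooth ("nice") mapping this linear approximation is good over
the few-pixel extent of the kernel, which is why it can be refreshed
cheaply at every target pixel.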

If the mapping is really warped, like a twisting transformation, the
rectangular neighborhood in the target could map into a weird shape
such as a bow-tie or its inverse, where sidedness changes as on a
Moebius strip.  It is possible to map an image onto things like
saddles, but such cases must be dealt with on an individual basis:  it
is hard to get a general algorithm to work fast on the easy cases, yet
do the right thing for the hard ones.
-- 

Ken Turkowski @ CADLINC, Menlo Park, CA
UUCP: {amd,decwrl,hplabs,nsc,seismo,spar}!turtlevax!ken
ARPA: turtlevax!ken@DECWRL.ARPA