[comp.graphics] texture mapping

rayk@sbcs.sunysb.edu (Raymond T Kreisel) (07/02/88)

  I am looking for public domain C code that would allow me to map
a bitmap on to a cylinder or a cone. I am also looking for C code
that would do J. Blinn's 'bump mapping' onto both cylinders and cones.
If anyone has either of these could they please e-mail the code to the
address below.

				thanks in advance,

					ray



---------------------------------------------------------------------------
 Ray Kreisel   CS Dept., SUNY at Stony Brook, Stony Brook NY 11794
UUCP: {allegra, philabs, pyramid, research}!sbcs!rayk   
ARPA-Internet: rayk@sbcs.sunysb.edu			CSnet: rayk@suny-sb
 "If I get home before daylight, I just might get some sleep tonight...."
---------------------------------------------------------------------------

valke@cs.vu.nl (Peter Valkenburg) (12/02/88)

			Hello,

I have the following problem:
	I want to display surfaces (polygons) in 3-D scenes containing
	user-defined textures/pictures.  As an example you can think of a
	painting which has been raster scanned and whose raster image must
	be defined to be the texture of a polygon that is the canvas part
	of the same painting in a computer generated 3-D scene of the
	Louvre (Paris, France).
	Or, to give you a more realistic idea of what I'm struggling with
	here, look at this:

	Front view of surface:

		-------------------------
		|			|
		|	    *		|
		|	   ***		|
		|	  *****		|
		|	 *******	|
		|	*********	|
		|      ***********	|
		|     *************	|
		|	   |=|		|
		|	   |=|		|
		|			|
		-------------------------


	View on the display (including perspective projection):

			 -------
			/   *   \
		       /   ***   \
	   	      /  *******  \
		     /      "      \
		    -----------------


	The questions I'd like to pose are the following:

	1) In the definition of such a texture polygon the raster image (the
	   tree) should be associated with the polygon that bounds it (the
	   square).  Given a raster image and an arbitrary polygon in 3-D
	   world coordinates, how does one link them up, i.e. how should one
	   specify the position of one relative to the other?

	2) What kind of operations should work on the special type of
	   polygon mentioned above?  The standard transformations that
	   apply to simple polygons (such as transforming to viewport
	   coordinates or clipping) could be used, but computing the
	   transformation of every raster scan "pixel" might be a very
	   costly (i.e. inefficient) way of doing things.

	3) What kind of standard operations should eventually be used to
	   display the texture polygon?  Some standard `fill area' algorithm
	   and a low-level function like set_pixel(x,y) to display the
	   polygon might do, but this would cause a lot of overhead, both in
	   terms of code to write, and of computational efficiency.

	4) Finally, since the "pixels" of the raster image need not
	   correspond with pixels on the actual display screen due to the
	   mapping (transformation) of the polygon I need an algorithm to
	   interpolate pixels in portions of the display on which the raster
	   scanned pixels are sparsely mapped, and select pixels in portions
	   on which too many pixels are mapped.

I would like to have some pointers to books/articles and, if you have any
idea how to deal with this, your opinion about these problems.
Specifically, I would be interested to know about any standard interfaces
dealing with this (as far as I know, GKS doesn't provide a convenient
set of routines).  If there is any source floating around that does the
work, I'm also happy to hear about it.  Although I finally have to work this
out in Turbo-Pascal on an IBM pc, algorithms written in any similar language
(yes, including C) running on other displays (like a sun or olivetti) would
be very valuable.

Thanx (in advance, that is).

				Peter Valkenburg (valke@cs.vu.nl)
				Prehistoric mail path: ..!mcvax!botter!valke

david@epicb.UUCP (David P. Cook) (12/07/88)

In article <1737@solo9.cs.vu.nl> valke@cs.vu.nl (Peter Valkenburg) writes:
>
>			Hello,
>
>I have the following problem:
>	I want to display surfaces (polygons) in 3-D scenes containing
>	user-defined textures/pictures (raster images).
>
     The combination of Raster Imagery into 3D scenes is dependent on your
     particular application.  However, using simple 2D primitives you can
     easily place a Raster Image ANYWHERE within a 3D scene.  This is called
     FAKING IT.

>	The questions I'd like to pose are the following:
>
>	1) In the definition of such a texture polygon the raster image (the
>	   tree) should be associated with the polygon that bounds it (the
>	   square).  Given a raster image and an arbitrary polygon in 3-D
>	   world coordinates, how does one link them up, i.e. how should one
>	   specify the position of one relative to the other?

     Think of the Raster Image as being bounded by a polygon with four
     corners (rectangle).  Mapping this 2D polygon into 3D space is very
     straightforward, and simply requires taking each and every point
     within the polygon and transforming it to the desired 3D point.  This
     requires scaling and translation on a point by point basis.  You should
     also utilize antialiasing/interpolation to deal with zoom up/down
     effects. Of course you can speed up the process by making use of the fact 
     that the original image is in raster order.

>
>	2) What kind of operations should work on the special type of
>	   polygon mentioned above?  The standard transformations that
>	   apply to simple polygons (such as transforming to viewport
>	   coordinates or clipping) could be used, but computing the
>	   transformation of every raster scan "pixel" might be a very
>	   costly (i.e. inefficient) way of doing things.

     An alternative here is to write some form of "plastic" algorithm.  This
     type of algorithm takes the points on the edge of the polygon and 
     transforms them into the new space (simple).  Next, it takes the
     scanlines between two points on the source polygon and maps them to the
     same points on the destination polygon, thus taking care of scaling,
     translation, rotation, and perspective in one step... For example,
     consider the following CRUDE attempt:

           Source Polygon                  Destination Polygon
	  Containing Raster                To Contain Raster
          0---------------1                           0
	  2               3                         2   \
	  4               5                      4       \
	  6               7                   6           1
	  8---------------9                8             3
	                                     \          5
                                                \      7
                                                   \
                                                      9
     In the above example, the SOURCE polygon contains four endpoints (labeled
     0, 1, 9 and 8).  The other numbers indicate endpoints on the edge of
     scanlines (i.e. scanline 2 - 3, 4 - 5, 6 - 7, etc.  0 - 1 and 8 - 9 are
     also endpoint edges of scanlines).  Now, simply map those scanlines to the
     SAME endpoints in the destination polygon (i.e. scanline 2 - 3, 4 - 5, etc.).
     To take care of situations where the edgelists are different in size, you
     should interpolate the in-between points.  The same for scanline spreads
     in the destination polygon.  Simply do an interpolative spread of the
     color values to fill in the "missing" points.

     To map this into "3D" space (which is an illusion to begin with) simply
     project the four original endpoints (0, 1, 9 and 8) into the desired 3D
     space positions, and interpolate the Z values for each of the scanline
     edgepoints in the destination polygon.  This will provide the proper
     projection.

     Note... if you use a polygon with MORE than four points, you can also
     introduce distortion and warping into the image.  With more than four
     points you can, for example, bend a raster image around a surface
     WITHOUT having to deal with complex real-space calculations!
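
     A minimal C sketch of the per-scanline remapping described above;
     the nearest-neighbor resampler here is my own illustration (the name
     and the flat pixel buffers are assumptions, not code from any
     particular package):

```c
/* Resample one source scanline (src_w pixels) onto a destination span
   (dst_w pixels) by nearest-neighbor.  Stretching "spreads" source
   pixels across the span; shrinking drops some of them. */
void map_scanline(const unsigned char *src, int src_w,
                  unsigned char *dst, int dst_w)
{
    int x;
    for (x = 0; x < dst_w; x++) {
        /* pick the nearest source pixel for this destination pixel */
        int sx = (dst_w > 1) ? (int)((long)x * (src_w - 1) / (dst_w - 1))
                             : 0;
        dst[x] = src[sx];
    }
}
```

     Run once per scanline pair (edge endpoints already transformed), this
     handles the scaling part; interpolative spreads or weighted averaging
     would replace the nearest-neighbor pick for better quality.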

     The interpolation step can be as simple or complex as desired.. At the
     simplest, use AVERAGING to combine pixels and SPREADING to create pixels.

     In AVERAGING... simply add all pixels which overlap (either ALL [crudely]
     or weighted based on coverage [better]) and divide by the # of pixels
     (or by the totaled weight).

     In SPREADING, write a simple routine which will generate N numbers between
     two other numbers.  For example, take the following:

	    N   M                 N  .  .  .  .  .  .  .  .  M
	    0   9                 0  1  2  3  4  5  6  7  8  9

	    number_of_values_needed = 10

	    incrementer = (M - N) / (number_of_values_needed - 1)
	    for (l = N; l <= M; l = l + incrementer)  use(l)  end_for
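
     In C, the spread could be written as follows (a sketch; the function
     name is mine):

```c
/* Fill out[0..count-1] with count values linearly spread from lo to hi
   inclusive -- the SPREADING step: generate N numbers between two
   other numbers. */
void spread(double lo, double hi, double *out, int count)
{
    int i;
    double step = (count > 1) ? (hi - lo) / (count - 1) : 0.0;
    for (i = 0; i < count; i++)
        out[i] = lo + step * i;
}
```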

     NOTE:  THIS IS BUT ONE OF THE MANY MANY WAYS...  There are also methods
	    to do the texture mapping via strict polygon manipulation as well
	    as classic texture mapping techniques.  I prefer "faking it" if
	    possible because it is usually faster.  Also... better antialiasing
	    and interpolation methods exist than what I have demonstrated here,
	    but these are fast and produce an "acceptable" result depending on
	    your needs (and timeframe).

	    That's All Folks!
-- 
         | David P. Cook            Net:  uunet!epicb!david        |
         | Truevision Inc.  |   "Sometimes I cover my mouth with   |
         | Indianapolis, IN |    my hand to tell if I'm breathing" |
         -----------------------------------------------------------

ccoprrm@pyr.gatech.EDU (ROBERT E. MINSK) (09/06/89)

  I am currently trying to add texture mapping to my ray tracer. The problem
I am having is texture mapping and normal interpolation to convex
quadrilaterals and triangles.  The situation is as follows:
  Given 4 points forming a convex planar quadrilateral, the texture vertices
associated with each point, and the intersection point on the quadrilateral,
find the associated (u,v) mapping coordinates for the intersection point.
For example:
  4 points       the associated texture vertices
  p1=(-5, 1, 2)  t1=(.2,.4)
  p2=(-2,-3, 6)  t2=(.4,.2)
  p3=( 2,-1, 4)  t3=(.8,.3)
  p4=( 1, 4,-1)  t4=(.7,.5)
intersection point = (-2,-1, 4)
find the (u,v) mapping coordinates

I assume a triangle will be the same mapping with two vertices sharing the
same points and mapping coordinates.
                                    Thanks in advance.
-- 
ROBERT E. MINSK
Georgia Institute of Technology, Atlanta Georgia, 30332
uucp: ...!{akgua,allegra,amd,hplabs,ihnp4,seismo,ut-ngp}!gatech!gitpyr!ccoprrm
ARPA: ccoprrm@pyr.gatech.edu

prem@geomag.fsu.edu (Prem Subrahmanyam) (09/06/89)

   I would strongly recommend obtaining copies of both DBW_Render and
   QRT, as both have very good texture mapping routines.  DBW uses 
   absolute spatial coordinates to determine texture, while QRT uses
   a relative position per each object type mapping.  DBW has some
   really interesting features, like sinusoidal reflection to simulate
   waves and a turbulence-based marble/wood texture driven by the wave
   sources defined for the scene.  It also has a brick texture,
   checkerboard, and mottling (turbulent variance of the color intensity).
   Writing a texture routine in DBW is quite simple, since you're provided
   with a host of tools (like a turbulence function, noise function, color
   blending, etc.). I have recently created a random-color texture that 
   uses the turbulence to redefine the base color based on the spatial point
   given, which it then blends into the object's base color using the color
   blend routines.  Next will be a turbulent-color marble texture that will
   modify the marble vein coloring according to the turbulent color.  Also
   in the works are random color checkerboarding (this will require a little
   more thought) and variant brick height and mortar color (presently they are
   hard-wired); the list is almost endless.  I would think the ideal ray-tracer
   would be one that used QRT's user-definable texture patches which are then
   mapped onto the object, as well as DBW's turbulence/wave based routines.
   The latter would have to be absolute coordinate based, while the former can
   use QRT's relative position functions.  In any case, getting copies of both 
   of these would be the most convenient, as there's no reason to reinvent the
   wheel.
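
   For readers without either package at hand, the turbulence idea can be
   sketched generically.  The hash constants and names below are my own
   illustration, not DBW's or QRT's actual API:

```c
#include <math.h>

/* Hashed value noise in roughly (-1, 1] at an integer lattice point.
   The magic numbers are conventional hash multipliers, nothing more. */
static double noise3(int x, int y, int z)
{
    unsigned int n = (unsigned int)x * 73856093u
                   ^ (unsigned int)y * 19349663u
                   ^ (unsigned int)z * 83492791u;
    n = (n << 13) ^ n;
    return 1.0 - (double)((n * (n * n * 15731u + 789221u) + 1376312589u)
                          & 0x7fffffffu) / 1073741824.0;
}

/* Sum of |noise| over octaves of decreasing amplitude -- the kind of
   turbulence used to perturb marble veins or a base color at a point. */
double turbulence(double x, double y, double z, int octaves)
{
    double sum = 0.0, scale = 1.0;
    int i;
    for (i = 0; i < octaves; i++) {
        sum += fabs(noise3((int)(x / scale), (int)(y / scale),
                           (int)(z / scale))) * scale;
        scale *= 0.5;
    }
    return sum;  /* bounded by 2 for any number of octaves */
}
```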
   ---Prem Subrahmanyam

turk@Apple.COM (Ken "Turk" Turkowski) (09/25/89)

In article <9119@pyr.gatech.EDU> ccoprrm@pyr.gatech.edu (ROBERT E. MINSK) writes:
>  I am currently trying to add texture mapping to my ray tracer. The problem
>I  am  having  is  texture  mapping  and  normal  interpolation  to  convex
>quadrilaterals and triangles.  The situation is as follows:
>  Given 4 points forming a convex planer quadrilateral,the texture verticies
>associated with each point, and the intersection point on the quadrilateral,
>find the associated (u,v) mapping coordinates for the intersection point.
>For example:
>  4 points       the associated texture vertices
>  p1=(-5, 1, 2)  t1=(.2,.4)
>  p2=(-2,-3, 6)  t2=(.4,.2)
>  p3=( 2,-1, 4)  t3=(.8,.3)
>  p4=( 1, 4,-1)  t4=(.7,.5)
>intersection point = (-2,-1, 4)
>find the (u,v) mapping coordinates
>
>I assume a triangle will be the same mapping with two vertices sharing the
>same points and mapping coordinates.

For suitably well-behaved texture mappings (i.e. no bowtie
quadrilaterals), there is a projective mapping that maps quadrilaterals
to quadrilaterals.  This is represented by a 3x3 matrix, unique up to a
scale factor.

	[x y z] = [uw vw w] [M]						(1)

where w is a homogeneous coordinate, and [M] is the 3x3 matrix.  Since
the 4 points must obey (1), we have the nonlinear system of equations:

	|x0 y0 z0|   |u0*w0 v0*w0 w0|
	|x1 y1 z1|   |u1*w1 v1*w1 w1|
	|x2 y2 z2| = |u2*w2 v2*w2 w2| [M]				(2)
	|x3 y3 z3|   |u3*w3 v3*w3 w3|

This represents 12 equations in 13 unknowns, but one of the w's may be
arbitrarily chosen as 1.

System (2) can be solved for the matrix [M] by any nonlinear system
equation solver, such as those available in the Collected Algorithms of
the ACM.

When this matrix is inverted, it gives the mapping you desire:

	                       -1
	[uw vw w] = [x y z] [M]						(3)

You just plug the desired [x y z] into (3), and divide by w.
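
Step (3) in C, assuming the inverse matrix has already been computed (a
sketch; the function name is mine, and the row-vector convention matches
equation (1)):

```c
/* Map a 3-D point on the surface to texture coordinates (u, v):
   [uw vw w] = [x y z] [Minv], then divide through by w.
   minv is the inverse of the matrix M solved for in (2). */
void xyz_to_uv(const double minv[3][3], const double xyz[3],
               double *u, double *v)
{
    double r[3] = {0.0, 0.0, 0.0};
    int i, j;
    for (j = 0; j < 3; j++)        /* row vector times matrix */
        for (i = 0; i < 3; i++)
            r[j] += xyz[i] * minv[i][j];
    *u = r[0] / r[2];              /* divide out the homogeneous w */
    *v = r[1] / r[2];
}
```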

For triangles, the mapping (1) is affine:

	[x y z] = [u v 1] [M]						(4)

and the resulting system is linear:

	|x0 y0 z0|   |u0 v0 1|
	|x1 y1 z1| = |u1 v1 1| [M]					(5)
	|x2 y2 z2|   |u2 v2 1|

This linear system (9 equations in 9 unknowns) can easily be solved
by LU decomposition.
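
For the triangle case there is also a direct route that avoids setting up
the full solve: barycentric coordinates, which are equivalent to inverting
the affine map (4).  A sketch (function names are mine):

```c
static double dot3(const double a[3], const double b[3])
{
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

/* Given triangle vertices p0, p1, p2 with texture coordinates t0, t1, t2
   and an intersection point q in the triangle's plane, recover (u, v).
   Solves the 2x2 Gram system for the barycentric weights a, b, which is
   equivalent to inverting the affine mapping (4). */
void tri_uv(const double p0[3], const double p1[3], const double p2[3],
            const double t0[2], const double t1[2], const double t2[2],
            const double q[3], double *u, double *v)
{
    double e1[3], e2[3], eq[3];
    int i;
    for (i = 0; i < 3; i++) {
        e1[i] = p1[i] - p0[i];
        e2[i] = p2[i] - p0[i];
        eq[i] = q[i]  - p0[i];
    }
    {
        double d11 = dot3(e1, e1), d12 = dot3(e1, e2), d22 = dot3(e2, e2);
        double dq1 = dot3(eq, e1), dq2 = dot3(eq, e2);
        double det = d11 * d22 - d12 * d12;       /* nonzero if p0,p1,p2
                                                     are not collinear */
        double a = (dq1 * d22 - dq2 * d12) / det;
        double b = (dq2 * d11 - dq1 * d12) / det;
        *u = t0[0] + a * (t1[0] - t0[0]) + b * (t2[0] - t0[0]);
        *v = t0[1] + a * (t1[1] - t0[1]) + b * (t2[1] - t0[1]);
    }
}
```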

Note that the matrix computed in (2) and (5) depends only on the
parametrization, so it only needs to be computed once, offline, at the
time the texture is assigned.

Take note that the texture parameters are not interpolated linearly (or
bilinearly), but projectively.  Also note that, unlike standard
bilinear interpolation (such as that used for Gouraud interpolation),
this method of interpolation is rotation invariant.

For more information on projective mappings, see the book
"Projective Geometry and Applications to Computer Graphics" by Penna
and Patterson.  (I hope this reference is correct -- I'm doing it from
memory).

For more detail on this approach to texture-mapping, including
texture-mapping by scan-conversion, request a copy of the following
paper:

Turkowski, Ken
The Differential Geometry of Texture Mapping
Apple Technical Report No. 10
May 10, 1988

from

Apple Corporate Library
Apple Computer, Inc.
20525 Mariani Avenue
Mailstop 8-C
Cupertino, CA 95014

The e-mail address for the library is: corp.lib1@applelink.apple.com,
but the gateway is one-way, so don't expect an electronic reply.
-- 
Ken Turkowski @ Apple Computer, Inc., Cupertino, CA
Internet: turk@apple.com
Applelink: TURKOWSKI1
UUCP: sun!apple!turk

fvance@airgun.wg.waii.com (Frank Vance) (09/27/89)

In <170@vsserv.scri.fsu.edu> Prem Subrahmanyam wrote:
   I would strongly recommend obtaining copies of both DBW_Render and
   QRT, as both have very good texture mapping routines. 

I think this is an excellent suggestion, and, in fact, I would very much
like to get a copy of both.  Can someone tell me where I might find either
one?  Unfortunately, the usual ftp sources will not do at all, as I don't
work for a company which has any such connections.  That pretty much
limits me to anon uucp, mail servers, or kind strangers.
-- 
Frank Vance				fvance@airgun.wg.waii.com
Western Geophysical, Houston 		...!uunet!airgun!fvance

turk@Apple.COM (Ken "Turk" Turkowski) (10/02/89)

In article <4330@internal.Apple.COM> turk@Apple.COM (that's me) writes:
>For suitably well-behaved texture mappings (i.e. no bowtie
>quadrilaterals), there is a projective mapping, that maps quadrilaterals
>to quadrilaterals.  This is represented by a 3x3 matrix, unique up to a
>scale factor.
>
>	[x y z] = [uw vw w] [M]						(1)

Sorry, this is not quite true.  The 3x3 matrix actually represents a
projective mapping for triangles or an affine mapping for
quadrilaterals.  For the general case of an arbitrary mapping of
quadrilateral to quadrilateral, one needs a 3x4 matrix, as pointed out
in Paul Heckbert's master's thesis ("Fundamentals of Texture Mapping
and Image Warping", p. 23):

	[x y z h] = [uw vw w] [M]

Although Paul does not show the method for computing the 3x4 matrix, he
does show how to calculate the 3x3 matrix for mapping between arbitrary
quadrilaterals in the plane.  His method is similar to Penna
and Patterson's, in which one can easily determine a projective mapping
(of any degree) from the mapping of the "ideal points" (i.e. points at
infinity).

Anyway, the end result is the same: to find the (u,v) coordinates of a point
on a polygon, you just plug (x,y,z) into a matrix equation and divide by the
homogeneous coordinate.
-- 
Ken Turkowski @ Apple Computer, Inc., Cupertino, CA
Internet: turk@apple.com
Applelink: TURKOWSKI1
UUCP: sun!apple!turk

flitter@dtrc.dt.navy.mil (Flitter) (03/08/90)

   I am looking for information on how to do texture mapping, or at least
that's what I think I'm looking for.  I want to take a two-dimensional graphic
(say a schematic) and map it onto an arbitrary (but regular) object such as a
cylinder, sphere or pyramid. Then I want to project the surface onto the
display area so I get a compressed image, sort of like peripheral vision. The
farther away from the center you get, the more 'squished' the image.
   I believe this falls under texture mapping. I have a few articles I have
found but I have not been able to find any texts that really describe the
algorithms for doing this sort of mapping. If anyone could describe the
algorithms I need or give me references which would describe them I would
appreciate it. Thanks in advance.

	     Lance Flitter