[net.graphics] Possible way of anti-aliasing.

sdh@joevax.UUCP (The Doctor) (09/17/86)

I don't know a good algorithm for anti-aliasing, but you could try the
following technique:
Get a pixel's attributes.  Average its attributes with those of the pixels
immediately adjacent to it.  If the difference in attributes is above a
certain threshold, decide how greatly you want to change the pixel (it could
be by a percentage of the difference in attributes).  If there is a change,
push it on a stack, else do nothing.  Repeat until all pixels have been hit.
Go through the stack and make the changes.

This should be very efficient on images with plenty of consistent background,
and not too many objects in the foreground.  It will use up more memory than
you'd really like on complex pictures.

Ways to eliminate this problem:

Forget the stack entirely and just put the image out to a file, or to an
alternate graphics display.  Then memory consumption is a constant, far
smaller than the worst case of stacking.

Do separate passes for red, green and blue, thus reducing the data that has
to be stacked.

Don't do the whole screen in one pass.  Just do it in blocks of 3 scan lines.
The reasoning behind this is that the fourth line down only needs to know
about lines 3 and 5, so it can forget about lines 1 and 2.

I have no idea whether or not any of this is valid.  I started thinking about
anti-aliasing about 5 minutes ago, so this is just off the top of my head.
It seems to be a pretty brute force way of doing it, but swift.  If you
only stack 3 lines at a time, it will be very memory efficient too.
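
In rough, untested C (WIDTH, THRESH, and everything else here is made up),
the 3-line version might look like:

	/*
	 * Rough, untested C sketch of the scheme above, done 3 scan
	 * lines at a time: the caller feeds in three consecutive scan
	 * lines, so only 3 lines need be in memory at once.
	 */
	#define WIDTH	512
	#define THRESH	32

	void
	smooth_line(unsigned char *above, unsigned char *cur,
	    unsigned char *below, unsigned char *out)
	{
		int x, avg, diff;

		for (x = 1; x < WIDTH - 1; x++) {
			avg = (above[x] + below[x] +
			    cur[x - 1] + cur[x + 1]) / 4;
			diff = avg - cur[x];
			if (diff > THRESH || diff < -THRESH)
				out[x] = cur[x] + diff / 2;  /* nudge halfway */
			else
				out[x] = cur[x];
		}
		out[0] = cur[0];		/* leave the borders alone */
		out[WIDTH - 1] = cur[WIDTH - 1];
	}

Writing into a separate output line takes the place of the stack: the input
lines are never modified while a later line still needs them.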

Steve Hawley
joevax!sdh

bobr@zeus.UUCP (Robert Reed) (09/18/86)

> I have no idea whether or not any of this is valid.  I started thinking
> about anti-aliasing about 5 minutes ago, so this is just off the top of my
> head.  It seems to be a pretty brute force way of doing it, but swift.  If
> you only stack 3 lines at a time, it will be very memory efficient too.
> 	Steve Hawley

There are several techniques for doing spatial anti-aliasing, but the
essential algorithm is to oversample the image and then filter it down to
the resolution of your display device.  For example, if you do four samples
per pixel over a uniform grid and simply average the four values together (a
technique known as box filtering), you will get a crude form of
anti-aliasing.  Improvements can be made by varying the sampling and
filtering techniques (e.g. stochastic sampling, triangular and Gaussian
filtering, etc.).  Check any contemporary computer graphics text or look
through recent publications from SIGGRAPH for more details.  
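
To make the box-filter case concrete, here is a minimal, untested C sketch;
the image layout and 8-bit gray pixels are my assumptions, and color is the
same thing done per channel:

	/* 2x2 box filtering: render at twice the final resolution,
	 * then average each 2x2 block down to one output pixel. */
	void
	box_filter_2x2(unsigned char *hires,	/* 2w by 2h, row order */
	    unsigned char *lores, int w, int h)	/* final size */
	{
		int x, y, sum;

		for (y = 0; y < h; y++)
			for (x = 0; x < w; x++) {
				sum  = hires[(2*y)     * 2*w + 2*x];
				sum += hires[(2*y)     * 2*w + 2*x + 1];
				sum += hires[(2*y + 1) * 2*w + 2*x];
				sum += hires[(2*y + 1) * 2*w + 2*x + 1];
				lores[y * w + x] = (sum + 2) / 4;  /* round */
			}
	}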

This newsgroup should not be a forum for naive speculation.  There is enough
solid information about the subject that it shouldn't be necessary to shoot
from the hip.

Robert Reed, Tektronix CAE Systems Division, bobr@zeus.TEK

apn@nonvon.UUCP (apn) (09/18/86)

In article <280@joevax.UUCP>, sdh@joevax.UUCP (The Doctor) writes:
> It seems to be a pretty brute force way of doing it, but swift.  If you
> only stack 3 lines at a time, it will be very memory efficient too.

	Aw, come on..... This should all be done in *HARDWARE*

-- 

	UUCP : ihnp4!ptsfa!gilbbs!apnsys!root

	VOICE:	+1 707 433 0202 (work)
		+1 707 575 9616 (home)
		+1 707 538 1175 (more work)

{* War does not determine who is right or wrong........ only who is left *}
{* Only those who attempt the absurd   ...   will achieve the impossible *}

vedm@hoqam.UUCP (BEATTIE) (09/19/86)

Why is this technique called "anti-aliasing"?

Tom.
...!{decvax | ucbvax}!ihnp4!hoqaa!twb

henry@utzoo.UUCP (Henry Spencer) (09/26/86)

> Why is this technique called "anti-aliasing"?

Any representation of a scene at a resolution too low to show all the
detail of the original (i.e. truly sharp edges on diagonal lines, which
no raster-scan device can do exactly) is in fact a representation of
infinitely many different scenes.  For example, a simplistic representation
of a sharp-edged diagonal line will have a stairstep appearance, and
could also represent a scene in which the edge really was a stairstep rather
than a straight line.  Hence "aliasing":  many scenes, one representation,
and no way to tell which scene just by looking at the representation.

The trouble is that the eye-brain combination will pick one of these scenes
when shown the representation, and it's not necessarily the one you want.
For example, that stairstep edge really will be seen as a stairstep unless
the steps are *really* small:  "jaggies".  For another example, small
contrasty details (e.g. streetlamps at a distance at night) will show or
not show depending on whether the sampling process happens to hit them
dead-on or not, which makes for ragged-looking images.  This problem gets
*much* worse if the image is moving, because then such details blink on
and off from one frame to the next, as the sampling grid moves:
"scintillation".  Areas with lots of regularly-spaced small contrasty details
will show spurious patterns as the detail grid and the sampling grid
"beat" against each other:  "Moire patterns".  And so forth.

Note that trivial techniques like local averaging of the representation
will help jaggies somewhat but *won't* fix things like scintillation.
To do "anti-aliasing" properly, it is necessary to recognize that each
element of the representation represents a finite area of the scene, and
its color/brightness/whatever should reflect, to some degree, the color/etc.
of *everything* that is within that area of the scene, not just whatever
happens to be at the exact center of the area.  Doing this well isn't easy;
doing it well without consuming inordinate amounts of computing time is
extremely hard.  See any good graphics textbook for more discussion of the
basics; check out Siggraph proceedings for current research work.
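
In the simplest possible case (a single edge crossing a pixel), "reflecting
everything in the area" reduces to blending by coverage.  A toy C
illustration, with the hard part (actually computing the coverage) assumed
away:

	/* A pixel crossed by one edge gets a blend of the two colors
	 * weighted by area coverage.  Computing "coverage" exactly for
	 * a general scene is the hard part; here it is simply given. */
	unsigned char
	shade_pixel(double coverage,	/* fraction covered, 0..1 */
	    unsigned char inside, unsigned char outside)
	{
		return (unsigned char)(coverage * inside +
		    (1.0 - coverage) * outside + 0.5);
	}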


Be warned that the above contains oversimplifications; the reality is
even worse.  Be warned also that I am not an expert in this area; consult
someone who *is* before spending lots of money or planning your PhD thesis.
-- 
				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,decvax,pyramid}!utzoo!henry

falk@sun.uucp (Ed Falk) (09/28/86)

> Why is this technique called "anti-aliasing"?
> 
Mini-lecture:

It comes from data analysis.  Suppose I am sampling an electrical signal
digitally.  I can't make a continuous sample, so I settle for taking sample
points at certain intervals, say 100 Hz.  Now, suppose there's a 101 Hz sine
wave coming in.  My sample points look like this:

  **        **        *         *         *         *         *         *      
 *  *      *  *      * *       * *       * *       * *       * *       * *     
 *  *      *  *      * *      *   *     *   *     *   *     *   *     *   *    
*   *     *   *     *   *     *   *     *   *     X   *     X   *     X   *    
*    *    *    *    X   *     X   *     X   *     *   *     *   *     *   *    
X----*----X----*----*---*-----*----*----*----*----*---*-----*---*-----*---*----
     *    *    *    *    *   *     *   *     *   *    *    *     *   *     *   
      *  *      *  *     *   *     *   *     *   *     *   *     *   *     *   
      *  *      *  *     *   *     *   *     *   *     *   *     *   *     *   
      *  *      *  *      * *       * *       * *       * *       * *       * *
       **        **        *         *         *         *         *         * 

where the "X"s represent the actual samples I take.

If I only look at the points I sampled, I see

                                                  X         X         X        
                    X         X         X                                      
X---------X--------------------------------------------------------------------

As you can see, this masquerades ("aliases") as a much lower frequency
(1 Hz, to be precise).  You get almost identical results sampling a 99 Hz
input.  In order to avoid this effect, you have to sample at at least twice
the highest frequency you will be recording, or put a low-pass filter in
front of your recorder.
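
If you want to convince yourself numerically, a few lines of C will do it
(untested; M_PI is assumed to come from <math.h>, and you link with -lm):

	#include <stdio.h>
	#include <math.h>

	int
	main(void)
	{
		double fsig = 101.0;		/* input frequency, Hz */
		double fsamp = 100.0;		/* sampling rate, Hz */
		int n;

		for (n = 0; n < 100; n++)	/* one second of samples */
			printf("%d\t%f\n", n,
			    sin(2.0 * M_PI * fsig * (double)n / fsamp));
		return 0;
	}

The printed samples trace out exactly one full cycle over the 100 samples:
a 1 Hz alias.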

It turns out that this happens not just for signals over time, but for
signals over space.  An image is nothing more than signals over space in
two dimensions.  An extreme case of this effect is the "picket fence"
problem:  Suppose you render an image of a picket fence in the distance.  The
fence has pickets whose spatial frequency (pickets/inch) is very close to
the image pixel resolution (pixels/inch).  For example, if your pixel
resolution is 100 pixels/inch, and I have an image with 101 pickets per inch,
I'll get the effect of one picket per inch on the screen.

These are extreme examples, but you get the idea.  More typical examples are
polygons or lines with sharp edges.  The visual effect is of "jaggies" along
the sharp edge.  In image rendering, a sharp edge is just like the leading
(or trailing) edges of a square wave or pulse in a signal over time -- it's
just loaded with high frequencies.

The solution for these 2-d spatial aliasing artifacts is exactly the same
as for 1-d temporal aliases:  take more samples or filter the input.

Taking more samples is conceptually easy, just get a higher resolution
display and compute more pixels.  But this is expensive.

Since the image never had an analog stage, you can't actually filter the
input, but you can fake it in various ways.  The typical "anti-aliasing"
technique is to take higher-resolution samples in the neighborhood of
a high-frequency edge and average them together to do a sort of digital
filtering on the affected pixels.  Whole books have been written
(or at least ought to be) on how to do the digital filtering.
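
As a crude, untested C sketch of that idea (sample_scene stands in for
whatever computes the image at a point; the 4x4 grid and the threshold are
arbitrary choices of mine):

	#include <math.h>

	extern double sample_scene(double x, double y);	/* render a point */

	double
	pixel_value(int px, int py, double neighbor, double thresh)
	{
		double center = sample_scene(px + 0.5, py + 0.5);
		double sum;
		int i, j;

		if (fabs(center - neighbor) < thresh)
			return center;		/* smooth area: one sample */

		sum = 0.0;			/* busy area: 4x4 subsamples */
		for (i = 0; i < 4; i++)
			for (j = 0; j < 4; j++)
				sum += sample_scene(px + (i + 0.5) / 4.0,
				    py + (j + 0.5) / 4.0);
		return sum / 16.0;
	}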
-- 
		-ed falk, sun microsystems
			falk@sun.com
			sun!falk

ph@pixar.UUCP (Paul Heckbert) (10/03/86)

    [This is a repeat of my posting of 21 Sept, which apparently died
     before it reached most of you; too bad the net's so flaky!]

In article <323@axiom.UUCP>, gts@axiom.UUCP (Guy Schafer) writes:
> How is anti-aliasing done?
> Just averaging the (color, intensity, brightness?) of the two cells adjacent
> to any object boundary sounds good but there are a few problems ...

First a nutshell explanation of the cause of aliasing.
It results because a continuous image (either natural or synthetic) is being
sampled on a discrete grid of pixel locations.  Signal theory tells us that
when a continuous image is sampled, frequencies higher than half the
sampling rate will masquerade ("alias") as lower frequencies.  A simple
example of temporal (rather than spatial) aliasing is the "wagon wheel effect"
seen on film and TV.

To eliminate aliasing, one can either

    a) increase the sampling rate to twice the highest frequency in
       the image (go to higher resolution)
or  b) low pass filter (blur) the image BEFORE SAMPLING to eliminate
       unreproducible high frequencies

The method you suggested, filtering after sampling, is a third possibility,
but this approach gets very clumsy because of the difficulty
of inferring the true edge locations after sampling has been done.
It's the method used at NYIT [Jules Bloomenthal, `Edge Inference with
Applications to Antialiasing', SIGGRAPH '83].  This method is conceptually
flawed so I wouldn't recommend it (I used to work there, so I know!).

Using approach (a) above you either point sample at high resolution
(say 2000 or 4000 line) and output at that high resolution, which is pretty
much what Digital Productions does, or sample at high resolution and average
the image down to medium resolution (say 500 line), which is what most folks
in the TV graphics business do, if they do antialiasing at all.

The theoretically "correct" approach is (b), however: analytic antialiasing.
Algorithms have been developed for simple objects such as lines and
polygons, but analytic antialiasing of curved surfaces is an open problem.

Of course, the ultimate place to learn about computer graphics is not USENET,
but your local library.  I recommend:

    Frank Crow, `The Aliasing Problem in Computer-Generated Shaded Images',
    Communications of the ACM, Nov. 1977

    Eliot Feibush, Marc Levoy, Robert Cook,
    `Synthetic Texturing Using Digital Filters', SIGGRAPH '80

    any recent ACM SIGGRAPH proceedings

    David F. Rogers, `Procedural Elements for Computer Graphics',
    McGraw-Hill, 1985  ( contains pseudocode for antialiasing lines )

The latter is the only book on image synthesis I know of; I highly recommend it.

With any antialiasing method it is essential that you properly compensate for
intensity nonlinearities in your CRT or film recorder or your attempts
at antialiasing will be disappointing.  Rogers' book discusses this
`gamma correction' problem.
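
A minimal sketch of gamma correction in C, assuming an 8-bit framebuffer and
a display gamma of about 2.2 (a typical figure, but measure your own device):

	#include <math.h>

	unsigned char gamma_lut[256];

	/* Map linear intensity to framebuffer values through a table. */
	void
	build_gamma_lut(double gamma)	/* e.g. 2.2 */
	{
		int i;

		for (i = 0; i < 256; i++)
			gamma_lut[i] = (unsigned char)
			    (255.0 * pow(i / 255.0, 1.0 / gamma) + 0.5);
	}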

Paul Heckbert
Pixar				415-499-3600
P.O. Box 13719			UUCP: {sun,ucbvax}!pixar!ph
San Rafael, CA 94913		ARPA: ph%pixar.uucp@ucbvax.berkeley.edu

williams@vu-vlsi.UUCP (Thomas Williams) (10/06/86)

>Of course, the ultimate place to learn about computer graphics is not USENET,
>but your local library.  I recommend:
>
>    Frank Crow, `The Aliasing Problem in Computer-Generated Shaded Images',
>    Communications of the ACM, Nov. 1977
>
>    any recent ACM SIGGRAPH proceedings
>
>	 [ other worthwhile references ]

    The problem is that public libraries (especially local, but even 
metropolitan) are woefully devoid of these references.  I used to spend 
lots of time using the Philadelphia Free Library's card-catalog computers 
to no avail.  The only place I've been able to find any good papers is in 
company libraries, my ACM subscriptions, or occasionally in a university's 
library.  In short, it's a real pain in the ass to find something which
wasn't mailed to me.   * Where should I be looking?! *


                                  -thomas williams

ps:  Most of the literature I'm looking for is contained in papers already
published by the ACM and the rest in books, so it's not THAT obscure.

UUCP: {cbmvax,psuvax,pyrnj}!vu-vlsi!williams

ken@turtlevax.UUCP (Ken "Turk" Turkowski) (10/07/86)

For an analytic derivation of antialiasing for lines or polygons
given a radially symmetric point-spread function (PSF), look at:

	Anti-Aliasing through the use of Coordinate Transformations
	Turkowski, Kenneth
	ACM Transactions on Graphics
	Vol. 1, No. 3, July 1982, pp. 214-234

This outlines the transformation of the radially symmetric point-spread
function into another function which is the result of convolving the
line or polygon edge with the PSF.  One simply calculates the distance
of a pixel center from the edge, and uses that as an index into a table
which contains the proper value to put into the frame store.

The distance of successive pixels near an edge can be computed incrementally
through a linear coordinate transformation:
	du = a dx + b dy
where (b, -a) is a unit vector pointing from one vertex to the other.
If dy is 0, then a unit change in x (dx=1) produces a change in distance
of a (du=a).  This is just a normalized Bresenham algorithm.
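
A skeletal, untested C version of that inner loop (the filter table and its
indexing convention here are my assumptions; see the paper for the real
thing):

	#define TSIZE	64	/* table entries per unit of distance */
	extern unsigned char filter_table[2 * TSIZE];	/* u in [-1, 1) */

	/* Shade one horizontal span near an edge: u is the distance of
	 * the pixel center from the edge, carried along by one add. */
	void
	shade_span(double u0, double a,	/* distance at x0; du per dx */
	    int x0, int x1, unsigned char *row)
	{
		double u = u0;
		int x, index;

		for (x = x0; x <= x1; x++, u += a) {  /* du = a dx, dy = 0 */
			if (u >= -1.0 && u < 1.0) {
				index = (int)((u + 1.0) * TSIZE);
				row[x] = filter_table[index];
			}
		}
	}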
-- 
Ken Turkowski @ Apple Computer, Cupertino, CA
UUCP:	nsc!apple!turk, {amd,decwrl,hplabs,seismo}!turtlevax!ken
ARPA:	turk%apple@CSNET-RELAY.ARPA, turtlevax!ken@DECWRL.DEC.COM
CSNET:	turk@apple.CSNET

ken@turtlevax.UUCP (Ken "Turk" Turkowski) (10/07/86)

In article <109@pixar.UUCP> ph@pixar.UUCP (Paul Heckbert) writes:
>The theoretically "correct" approach is (b), however: analytic antialiasing.
>Algorithms have been developed for simple objects such as lines and
>polygons, but analytic antialiasing of curved surfaces is an open problem.

Anti-aliasing of circles is relatively easy: all you need to do is compute
the distance from a pixel to the center of the circle,
and subtract this from the radius of the circle.  This value is then used
to index into the table used for lines, in the previously referenced paper
(in an accompanying USENET article).
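
In C, the direct (non-incremental) form is a one-liner; the sign convention
below (negative inside the circle) is my own choice, and either sign works
as a table index:

	#include <math.h>

	/* Signed distance from point (x, y) to the circle with center
	 * (cx, cy) and radius r; negative inside, positive outside. */
	double
	circle_distance(double x, double y, double cx, double cy, double r)
	{
		double dx = x - cx, dy = y - cy;

		return sqrt(dx * dx + dy * dy) - r;
	}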

An incremental method for calculating the distance to a circle is given in
the enclosed text (I call it circle.e).  Run
	eqn circle.e | itroff -ms
to read it.  It requires a couple of additions and a division at each step.


-----------------------------------------------------------------
.TL
Anti-Aliased Circle Drawing
.AU
Ken Turkowski
.SH
Introduction and Prior Art
.PP
The problem we pose here is how to calculate anti-aliased circles.
From our familiarity with lines,
we feel that a possible key to this problem
is the distance from the circle to an arbitrary point.
.PP
One obvious way to do this is to calculate the Euclidean distance
of an arbitrary point to the center of the circle,
and subtract from this the radius of the circle.
This involves two multiplications and a square root at each point we visit,
giving us a calculation time of approximately 4 multiplications.
.PP
Additionally, we would like to calculate the arc length
so that we can draw dotted and dashed lines.
This can be determined simply by calculating the angle difference between
two points on the circle and multiplying by the radius.
.PP
Using CORDIC iterations,
we could convert from rectangular to polar in the time of 5.5 multiplications;
arc length still requires another multiplication,
resulting in 6.5 multiplications per point visited.
.PP
This still requires considerable computation,
so we search for a way to do the computations incrementally.
The equations for radius and arc length are given by the following integral
matrix equation:
.EQ I (1)
delim @@
int from C r ^ left [ pile { dr above dl } right ] =
int from C r ^ left [
matrix {
ccol { x above -y }
ccol { y above x }
}
right ] ^
left [ pile { dx above dy } right ]
.EN
.LP
Where @ r @ is the distance from the center of the circle,
@ l @ is the arc length from some point on the circle,
and @ C @ is an arbitrary curve of integration
from @ ( x sub 0 , ^ y sub 0 ) @
to @ ( x sub 1 , ^ y sub 1 ) @,
where @ r @ and @ l @ also change,
from @ ( r sub 0 , ^ l sub 0 ) @
to @ ( r sub 1 , ^ l sub 1 ) @.
The integrand is an exact differential,
so the actual curve used for integration is not important.
.PP
We shall use this equation to derive incremental changes
from differential changes.
.SH
Incremental Radius
.PP
Let us first look at the incremental radius:
.EQ I (2)
int from C r ^ dr = int from C x ^ dx ~ + ~ y ^ dy
.EN
.LP
Integrating on both sides,
we have:
.EQ I (3)
size -3 { 1 over 2 } ^ left "" r sup 2 right ]
from { r sub 0 } to { r sub 1 } ~ = ~
size -3 { 1 over 2 } ^ left "" x sup 2 right ]
from { x sub 0 } to { x sub 1 } ~ + ~
size -3 { 1 over 2 } ^ left "" y sup 2 right ]
from { y sub 0 } to { y sub 1 }
.EN
.EQ I
 size -3 { 1 over 2 } left [ r sub 1 sup 2 ^ - ^ r sub 0 sup 2 right ] ~ = ~
 size -3 { 1 over 2 } left [ x sub 1 sup 2 ^ - ^ x sub 0 sup 2 ~ + ~
y sub 1 sup 2 ^ - ^ y sub 0 sup 2 right ]
.EN
.EQ I
 size -3 { 1 over 2 } ^
 ( r sub 1 ^ + ^ r sub 0 ) ( r sub 1 ^ - ^ r sub 0 ) ~ = ~
 size -3 { 1 over 2 } ^
 left [ ( x sub 1 ^ + ^ x sub 0 ) ( x sub 1 ^ - ^ x sub 0 ) ~ + ~
( y sub 1 ^ + ^ y sub 0 ) ( y sub 1 ^ - ^ y sub 0 ) right ]
.EN
.EQ I
 size -3 { 1 over 2 } ^ ( 2 r sub 0 ^ + ^ DELTA r ) DELTA r ~ = ~
 size -3 { 1 over 2 } ^ left [ ( 2 x sub 0 ^ + ^ DELTA x ) DELTA x ~ + ~
( 2 y sub 0 ^ + ^ DELTA y ) DELTA y right ]
.EN
.EQ I
DELTA r ^ ( r sub 0 ^ + ^ { DELTA r } over 2 ) ~ = ~
{ DELTA x } ( x sub 0 ^ + ^ { DELTA x } over 2 ) ~ + ~
{ DELTA y } ( y sub 0 ^ + ^ { DELTA y } over 2 )
.EN
.LP
Therefore, the resultant change in radius
for an incremental change in @ x @ and @ y @ is:
.EQ I (4)
DELTA r ~ = ~
{
DELTA x ( x sub 0 ^ + ^ { DELTA x } over 2 ) ~ + ~
DELTA y ( y sub 0 ^ + ^ { DELTA y } over 2 )
} over { r sub 0 ^ + ^ { DELTA r } over 2 }
.EN
.PP
If the increments in x and y are @ +- 1 @ or @ 0 @,
and if @ r sub c ^ >> ^ 1 @
(where @ r sub c @ is the radius of the circle),
then it doesn't much matter what we divide by,
as long as it is in the interval @ [ r sub 0 , ~ r sub 1 ] @.
.PP
However, for small circles, @ DELTA r @ is on the order of @ r sub c @,
the radius of the circle.
For these, we must increase our accuracy of the calculations.
This can be done rather easily by iteration as follows:
Make a first calculation for @ DELTA r @ using eq. (4).
Then use the computed value for @ DELTA r @ in the same equation,
generating a second @ DELTA r @.
This can be continued until the successive iterates converge,
or for a fixed number of iterations.
Two good starting values for @ DELTA r @
are @ 0 @ and the last calculated value for @ DELTA r @.
.SH
Incremental Arc Length
.PP
We start out with the integral equation for arc length:
.EQ I (5)
int from C r ^ dl = int from C -y ^ dx ~ + ~ x ^ dy
.EN
.LP
We choose to integrate along the straight line
between @ ( x sub 0 , ^ y sub 0 ) @
and @ ( x sub 1 , ^ y sub 1 ) @.
Since @ r @ and @ l @ both change during the integration,
but not in a simple way,
we invoke the
.I "mean value theorem"
on the left-hand side.
.EQ I (6)
r sub m DELTA l ~ mark = ~ - int from { x sub 0 } to { x sub 1 }
left [ y sub 0 ^ + ^
{ DELTA y } over { DELTA x } ( x ^ - ^ x sub 0 ) right ] dx
~ + ~ int from { y sub 0 } to { y sub 1 }
left [ x sub 0 ^ + ^
{ DELTA x } over { DELTA y } ( y ^ - ^ y sub 0 ) right ] dy
.EN
.EQ I
lineup = ~
- y sub 0 DELTA x
~ - ~  size -3 { 1 over 2 } { DELTA y } over { DELTA x } DELTA x sup 2
~ + ~ x sub 0 DELTA y
~ + ~  size -3 { 1 over 2 } { DELTA x } over { DELTA y } DELTA y sup 2
.EN
.EQ I
lineup = ~
x sub 0 ^ DELTA y ~ - ~ y sub 0 ^ DELTA x
.EN
.LP
This gives a value for the incremental arc length:
.EQ I (7)
DELTA l ~ = ~
{ x sub 0 ^ DELTA y ^ - ^ y sub 0 ^ DELTA x } over r sub m
.EN
.LP
where @ r sub m @ lies somewhere between @ r sub 0 @ and @ r sub 1 @.
A good value to use for @ r sub m @ is the value
@ r sub 0 ^ + ^ { DELTA r } over 2 @ computed above in eq. (4).
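
.SH
Implementation Note
.PP
For what it's worth, eq. (4) iterates naturally in C.
The following untested sketch (the fixed iteration count is an
arbitrary choice of mine) computes one step:
.DS
double
delta_r(double x0, double y0, double r0,   /* current point and radius */
    double dx, double dy,                  /* step: 0 or +-1 in each */
    double dr_guess)                       /* e.g. the last step's value */
{
	double num = dx * (x0 + dx / 2.0) + dy * (y0 + dy / 2.0);
	double dr = dr_guess;
	int i;

	for (i = 0; i < 3; i++)            /* iterate eq. (4) */
		dr = num / (r0 + dr / 2.0);
	return dr;
}
.DE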
-- 
Ken Turkowski @ Apple Computer, Cupertino, CA
UUCP:	nsc!apple!turk, {amd,decwrl,hplabs,seismo}!turtlevax!ken
ARPA:	turk%apple@CSNET-RELAY.ARPA, turtlevax!ken@DECWRL.DEC.COM
CSNET:	turk@apple.CSNET

jim@riacs.ARPA (Jim Houston) (10/08/86)

> {cbmvax,psuvax,pyrnj}!vu-vlsi!williams writes
>     The problem is that public libraries (especially local, but even 
> metropolitan) are woefully deviod of these references.  I used to spend 
> lots of time using the Philadelphia Free Library's card-catalog computers 
> to no avail.  The only place I've been able to find any good papers is in 
> company libraries, my ACM subscriptions, or occasionally in a university's 
> library. ....

    Funny thing, I was at the Phila. Library Book Sale one week after SIGGRAPH '83
    and bought a brand-new copy of those proceedings for $1.50. Obviously, they
    didn't know what they were selling. 

    I've also noticed the University libraries in Phila. have very few issues of
    the Siggraph proceedings. (Though they often have complete sets of BYTE
    magazine.)
	
    One effective method for finding papers is to find people with an interest
    in graphics and copy the papers you are interested in.  Local SIGGRAPH
    chapters are good for this.

rogerh@megaron.UUCP (10/08/86)

Ken Turkowski posts a way to analytically anti-alias circles (thanks!),
but claims that this shows that it's possible to anti-alias "curved
surfaces".  Unfortunately, flat circles are different from curved
surfaces.  To anti-alias the rendition of a curved surface properly,
one must anti-alias any sharp-edged color changes; for example, the
edge of a highlight.  To do this analytically requires an analytic (and
tractable) description of the projection of the colored area.  I think
that this is not, in general, computable -- think of a painted scene
reflected by a cubic patch.  It's as hard as differentiating the
luminance function.

steve@bambi.UUCP (Steve Miller) (10/12/86)

> Ken Turkowski posts a way to analytically anti-alias circles (thanks!),
> but claims that this shows that it's possible to anti-alias "curved
> surfaces".  Unfortunately, flat circles are different from curved
> surfaces.  To anti-alias the rendition of a curved surface properly,
> one must anti-alias any sharp-edged color changes; for example, the
> edge of a highlight.

A good reference here is "Pyramidal Parametrics," by Lance Williams,
Siggraph Proceedings, 1983.  Basically, Lance interpolates between pairs
of pre-filtered images of spheres, where the images include highlights.

Anyone planning to implement this approach should also read "Summed-Area
Tables for Texture Mapping," by Franklin Crow, Siggraph Proceedings 1984.

	-Steve Miller ihnp4!bellcore!bambi!steve

turk@apple.UUCP (Ken "Turk" Turkowski) (10/13/86)

In article <1229@megaron.UUCP> rogerh@arizona.edu (Roger Hayes) writes:
>Ken Turkowski posts a way to analytically anti-alias circles (thanks!),
>but claims that this shows that it's possible to anti-alias "curved
>surfaces".

I didn't mean to imply this at all.  I'm sorry if I did.   The posting was meant
to illustrate an analytical method of anti-aliasing circles.  Curved surfaces
are another problem altogether.
-- 
Ken Turkowski @ Apple Computer, Inc., Cupertino, CA
UUCP: {sun,nsc}!apple!turk
CSNET: turk@Apple.CSNET
ARPA: turk%Apple@csnet-relay.ARPA

ph@pixar.UUCP (Paul Heckbert) (10/16/86)

Ken Turkowski suggests that arbitrary kernels could be used on an image pyramid,
and asks if one is constrained to box filters for summed area tables.

I've been thinking along similar lines myself.  Viewed abstractly, we can
regard the image pyramid, summed area table, or other texture representation
as a data structure, and regard filtering requests ("please give me a filtered
sample of this region") as accesses or queries of that database.
People typically use a trilinear interpolation access function on pyramids
and a rectangle corner access function on summed area tables,
but we're free to choose others.

Since the original image can be recovered from either of these data structures,
they have the same information content, so any conceivable weighted average of
pixels in the texture image is computable with either.  They are at their
best, however, when filtering square and rectangular areas, respectively.
The trick is to find combinations of data structure and access function which
give the quality and speed that you want.
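
For concreteness, the usual rectangle-corner access on a summed area table
is four fetches and a divide; a minimal C sketch (the table layout and
integer width are my assumptions):

	/* sat[y][x] holds the sum of all texels at or below-left of
	 * (x, y); corners are inclusive and x0, y0 must be >= 1. */
	long
	sat_sum(long **sat, int x0, int y0, int x1, int y1)
	{
		return sat[y1][x1] - sat[y0 - 1][x1]
		     - sat[y1][x0 - 1] + sat[y0 - 1][x0 - 1];
	}

	/* box-filtered average over the rectangle:
	 *   sat_sum(sat, x0, y0, x1, y1) /
	 *       ((x1 - x0 + 1) * (y1 - y0 + 1))
	 */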

Ken Perlin discovered a generalization of summed area tables which admits
triangular and higher order filter convolutions.  I described it in:
    Paul Heckbert, "Filtering by Repeated Integration", SIGGRAPH 86, pp. 317-321
Also watch for a paper on this topic by Leonard Ferrari et al. in the journal
"Computer Vision, Graphics, and Image Processing".

And I discuss the arbitrary-filters-on-a-pyramid idea in the upcoming:
    "Survey of Texture Mapping", IEEE Computer Graphics and Appl., Nov. 1986

Inventing new data structures and access functions for texturing
is fun and easy - y'all should try it!
----
By the way, I'm collecting a list of Unsolved Problems in Computer Graphics.
Please mail me (DON'T POST!) your suggestions.

Paul Heckbert
Pixar				415-499-3600
P.O. Box 13719			UUCP: {sun,ucbvax}!pixar!ph
San Rafael, CA 94913		ARPA: ph%pixar.uucp@ucbvax.berkeley.edu

turk@apple.UUCP@ndmce.uucp (Ken "Turk" Turkowski) (10/18/86)

In article <551@bambi.UUCP> steve@bambi.UUCP (Steve Miller) writes:
>A good reference here is "Pyramidal Parametrics," by Lance Williams,
>Siggraph Proceedings, 1983.  Basically, Lance interpolates between pairs
>of pre-filtered images of spheres, where the images include highlights.
>
>Anyone planning to implement this approach should also read "Summed-Area
>Tables for Texture Mapping," by Franklin Crow, Siggraph Proceedings 1984.

It seems as though one could use arbitrary kernels in the pyramidal
parametrics (mip-maps), but you are constrained to box filters for
summed-area tables.  Am I correct?

If the sequence of images were filtered with ideal low-pass filters, then
the linearity of the Fourier transform implies that the frequency
response of the interpolated image would be:

 |H(w)| ^
	|-------------------+	-   -	-   -	+	<--  Height of second
	|		    |				     step determined
	|		    |	-   -	-   -	+	<--  by interpolation
	|		    |				     parameter blending
	|		    +-------------------+	<--  between adjacent
	|					|	     mip-maps.
	|		    +	-   -	-   -	|	<--
	|					|
	|---------------------------------------+---------------> w

Of course, we don't have ideal low-pass filters, but the diagram illustrates
the effect, in the frequency domain, of interpolating between adjacent mip-maps.
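
The blend itself is just linear interpolation between the two bracketing
levels; a hedged C sketch (mip_fetch, a bilinear fetch from one prefiltered
level, is a stand-in):

	#include <math.h>

	extern double mip_fetch(int level, double u, double v);

	double
	mip_sample(double u, double v, double width)	/* width >= 1 */
	{
		double lod = log(width) / log(2.0);	/* log2(width) */
		int l0 = (int)lod;
		double f = lod - l0;	/* the interpolation parameter */

		return (1.0 - f) * mip_fetch(l0, u, v)
		     + f * mip_fetch(l0 + 1, u, v);
	}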
-- 
Ken Turkowski @ Apple Computer, Inc., Cupertino, CA
UUCP: {sun,nsc}!apple!turk
CSNET: turk@Apple.CSNET
ARPA: turk%Apple@csnet-relay.ARPA