[comp.graphics] Z-Buffer Edge effects

arthur@warwick.UUCP (John Vaudin) (09/01/89)

I have been playing with a simple polygon renderer which I wrote to
learn about how these things work. I have been trying to implement a
Z-buffer algorithm and I have some problems with edge effects.

If I render a cube from an isometric view, the edges of the
back faces are visible. This is because the Z values for the pixels along
the edge of the front face and the edge of the back face are equal, so the
visibility of those edge pixels is determined by the order in which they
are drawn.

I do not understand how to get round this problem. If two pixels have
the same Z value, what do you do? How do real systems get round this?

Thanx in advance

John Vaudin                arthur@flame.warwick.ac.uk

pepke@loligo (Eric Pepke) (09/01/89)

In article <1972@diamond.warwick.ac.uk> arthur@flame.warwick.ac.uk (John Vaudin) writes:
>If I render a cube from an isometric view, the edges of the
>back faces are visible. This is because the Z values for the pixels along
>the edge of the front face and the edge of the back face are equal, so the
>visibility of those edge pixels is determined by the order in which they
>are drawn.
>
>I do not understand how to get round this problem. If two pixels have
>the same Z value, what do you do? How do real systems get round this?

I don't know about all real systems, but most of the ones I have seen get
around it by not documenting the problem and letting you find out after
you buy the machine.  :-)

For the particular problem that you describe, however, when the polygons 
share vertices, there is a relatively simple solution: don't draw the edges.
Use the Macintosh model where lines around polygons are considered infinitely
thin, and only render the pixels that are honest-to-Cthulhu in the interior
of the polygon.  You can pretty easily whip together a Bresenham system that
does this in integral screen coordinates and guarantees that adjacent polygons
will never overlap or leave gaps.

If you do this, the normal case of filled abutting polygons will work quite
nicely.  If the user really wants to mess up the system by having objects 
that are coincident in 3-space and of different colors, well, life's rough.
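A minimal sketch of that fill convention, in C (put_pixel and the
pixel-centre convention here are assumptions for illustration, not any
particular system's API):

    #include <math.h>

    extern void put_pixel(int x, int y);     /* hypothetical frame-buffer write */

    /* Fill one scanline span half-open: from the first pixel centre at or
     * right of the left edge, up to but not including the first pixel centre
     * at or right of the right edge.  A pixel lying on a shared edge then
     * belongs to exactly one of two abutting polygons, so they neither
     * overlap nor leave gaps. */
    void fill_span(int y, double x_left, double x_right)
    {
        int x;
        int start = (int) ceil(x_left  - 0.5);   /* pixel x has centre x + 0.5 */
        int end   = (int) ceil(x_right - 0.5);

        for (x = start; x < end; x++)
            put_pixel(x, y);
    }

The same half-open rule applies in y when stepping the scanlines, so the
edges themselves are never drawn as separate primitives.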

Now, if you really want to have fun, precompute the Bresenham error terms
based on subpixel values of the transformed coordinates.
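A rough sketch of that subpixel idea, written DDA-style rather than with
integer Bresenham error terms (record_intercept is a hypothetical callback):

    #include <math.h>

    extern void record_intercept(int x, double y);   /* hypothetical */

    /* Walk one x-major edge starting at the first pixel centre past the
     * fractional endpoint; the initial intercept is computed exactly from
     * the subpixel coordinates instead of from rounded integer endpoints. */
    void walk_edge(double x0, double y0, double x1, double y1)
    {
        double slope = (y1 - y0) / (x1 - x0);
        int    first = (int) ceil(x0 - 0.5);
        int    last  = (int) ceil(x1 - 0.5);
        double y     = y0 + ((first + 0.5) - x0) * slope;
        int    x;

        for (x = first; x < last; x++, y += slope)
            record_intercept(x, y);
    }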

Eric Pepke                                     INTERNET: pepke@gw.scri.fsu.edu
Supercomputer Computations Research Institute  MFENET:   pepke@fsu
Florida State University                       SPAN:     scri::pepke
Tallahassee, FL 32306-4052                     BITNET:   pepke@fsu

Disclaimer: My employers seldom even LISTEN to my opinions.
Meta-disclaimer: Any society that needs disclaimers has too many lawyers.

srnelson%nelsun@Sun.COM (Scott R. Nelson) (09/02/89)

From article <1972@diamond.warwick.ac.uk>, by arthur@warwick.UUCP (John Vaudin):
> I have been playing with a simple polygon renderer which I wrote to
> learn about how these things work. I have been trying to implement a
> Z-buffer algorithm and I have some problems with edge effects.
> 
> If I render a cube from an isometric view, the edges of the
> back faces are visible. This is because the Z values for the pixels along
> the edge of the front face and the edge of the back face are equal, so the
> visibility of those edge pixels is determined by the order in which they
> are drawn.
> 
> I do not understand how to get round this problem. If two pixels have
> the same Z value, what do you do? How do real systems get round this?

Due to the limited accuracy of floating-point numbers, even
the most correct sampling methods cannot guarantee that two pixels
along an edge don't end up with the same Z value.  You can change
the rules of your Z-buffer so that newest wins or oldest wins, but
you will still have some pixels showing through on an edge
occasionally unless you also sort the polygons.  But sorting defeats
the purpose of the Z-buffer.
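
For what it's worth, the two tie rules differ only in the comparison; a
minimal sketch in C (names and conventions are illustrative, not any
particular system):

    /* With '<' the first-drawn pixel wins a Z tie ("oldest wins"); with
     * '<=' the last-drawn pixel wins ("newest wins").  Neither cures edge
     * poke-through by itself.  Assumes smaller z is nearer and zbuf has
     * been initialised to a "far" value. */
    void plot(float *zbuf, unsigned long *frame, int width,
              int x, int y, float z, unsigned long color)
    {
        int i = y * width + x;

        if (z < zbuf[i]) {          /* or z <= zbuf[i] for "newest wins" */
            zbuf[i]  = z;
            frame[i] = color;
        }
    }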

Real systems get around this problem by using backface elimination
for all closed objects.  The strongest reason to use backface
elimination (in my opinion) is to get rid of the pixels that
occasionally show through, not for the speed advantage you get
through not having to draw all of the back facing polygons.
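
Backface elimination itself is a one-line test once the polygon has been
projected; a sketch in C (the winding convention here is an assumption and
depends on your projection setup):

    /* After projection, a triangle whose screen-space vertices wind the
     * "wrong" way has non-positive signed area and faces away from the
     * viewer, so it can be skipped entirely. */
    int is_backfacing(double x0, double y0, double x1, double y1,
                      double x2, double y2)
    {
        double area2 = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0);

        return area2 <= 0.0;        /* assumes counter-clockwise = front-facing */
    }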

---

Scott R. Nelson
Sun Microsystems

arch_ems@gsbacd.uchicago.edu (09/02/89)

>Real systems get around this problem by using backface elimination
>for all closed objects.  The strongest reason to use backface
>elimination (in my opinion) is to get rid of the pixels that
>occasionally show through, not for the speed advantage you get
>through not having to draw all of the back facing polygons.

Was this last point a smiley?  Isn't there a significant speed
advantage to not having to draw all of the back facing polygons?

--Ted

Edward Shelton, Project Manager
ARCH Development Corporation
arch_ems@gsbacd.uchicago.edu

jdchrist@watcgl.waterloo.edu (Dan Christensen) (09/03/89)

In article <5270@tank.uchicago.edu> arch_ems@gsbacd.uchicago.edu writes:
|>Real systems get around this problem by using backface elimination
|>for all closed objects.  The strongest reason to use backface
|>elimination (in my opinion) is to get rid of the pixels that
|>occasionally show through, not for the speed advantage you get
|>through not having to draw all of the back facing polygons.
|
|Was this last point a smiley?  Isn't there a significant speed
|advantage to not having to draw all of the back facing polygons?

On our Iris 4D/120GTX, drawing is slightly faster with backface removal
turned *off* if the polygons are small enough.  But for larger polygons
backface removal definitely speeds things up.

----
Dan Christensen, Computer Graphics Lab,	         jdchrist@watcgl.uwaterloo.ca
University of Waterloo, Waterloo, Ont.	         jdchrist@watcgl.waterloo.edu