[comp.graphics] Depth Cueing

hollasch@ENUXHA.EAS.ASU.EDU (Steve Hollasch) (03/14/91)

    As part of my master's thesis, I'm working on a wireframe display
program.  The program already projects everything down to screen
coordinates.  I recently came up with a nifty visualization idea for the
wireframe by depthcueing the edges, but NOT based on vertex depth.  I've
tried to come up with a way to fake out my display device (SGI 3130), but
couldn't find an obvious solution.

    So right now I've decided to kludge depthcueing myself by breaking up
each edge into some number of line segments.  I have a good idea about how
to do this, but in thinking about depthcueing I realized that it's not as
simple as it first appears.
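    For concreteness, here's a minimal sketch of that kludge (the function
name and the intensity falloff are my own assumptions, not anything the
display hardware does): split the edge into n pieces and cue each piece by
the true 3-D distance of its midpoint from the viewpoint.

```python
import math

def depth_cue_segments(p0, p1, viewpoint, n):
    """Split edge p0-p1 into n sub-segments; return a list of
    ((start, end), intensity) pairs, where each sub-segment's
    intensity comes from its midpoint's true distance to the
    viewpoint (NOT from interpolated vertex intensities)."""
    def lerp(a, b, t):
        return tuple(a[i] + t * (b[i] - a[i]) for i in range(3))
    def dist(a, b):
        return math.sqrt(sum((a[i] - b[i]) ** 2 for i in range(3)))
    segs = []
    for k in range(n):
        s = lerp(p0, p1, k / n)
        e = lerp(p0, p1, (k + 1) / n)
        mid = lerp(s, e, 0.5)
        intensity = 1.0 / (1.0 + dist(mid, viewpoint))  # assumed falloff
        segs.append(((s, e), intensity))
    return segs
```

With enough segments this converges on per-pixel cueing; the question is
how few you can get away with.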

    My original thinking was that you determine the distance from the
viewpoint to each of the vertices, determine the intensity of each vertex
(corresponding to the vertex-viewpoint distance), and then linearly
interpolate the intensity across the edge.
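    In code, that naive scheme amounts to nothing more than this (names and
the intensity mapping are hypothetical; the rasterizer does the
interpolation):

```python
import math

def naive_edge_intensities(p0, p1, viewpoint):
    """Depth-cue by vertex distance only: return (i0, i1), the
    intensities at the two endpoints; the hardware then linearly
    interpolates between them along the edge."""
    def dist(p, q):
        return math.sqrt(sum((p[i] - q[i]) ** 2 for i in range(3)))
    falloff = lambda d: 1.0 / (1.0 + d)   # assumed intensity falloff
    return falloff(dist(p0, viewpoint)), falloff(dist(p1, viewpoint))
```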

    However, that approach has at least one pathological case.  Consider
the two line segments in the following diagram, where v is the viewpoint:


                                a--------b



             c-------------------------------------------------d

                                     
                                    v

                            __     __                       __     __
    Note that the distances vc and vd are both greater than va and vb.  In
this diagram I've made the vertices of each edge equidistant from the
viewpoint for simplicity.  The viewing angle IS a bit large, but again the
exaggeration is just for illustrative purposes.

    Now with the simplistic depthcueing method I've outlined above,
edge cd will be rendered as farther away than edge ab, just the opposite
of what we'd like to convey.
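    A quick numerical check of the diagram (coordinates are my own, chosen
to match the picture, with v at the origin):

```python
import math

def dist(p, q):
    """Euclidean distance between two 2-D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

v = (0, 0)
a, b = (-1, 2), (1, 2)      # short edge, farther from v
c, d = (-5, 1), (5, 1)      # long edge whose middle passes close to v

# The endpoints c and d are farther from v than a and b are...
assert dist(v, c) > dist(v, a) and dist(v, d) > dist(v, b)

# ...yet the midpoint of cd is closer to v than the midpoint of ab,
# so per-vertex cueing shades cd darker even where it's nearest.
mid_cd = ((c[0] + d[0]) / 2, (c[1] + d[1]) / 2)
mid_ab = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
assert dist(v, mid_cd) < dist(v, mid_ab)
```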

    So it seems that to properly depthcue edges, you'd have to sample the
line and determine the viewpoint distance at each point along it that
corresponds to a scanned pixel, which is quite expensive computationally.

    Another thought I had was that you could determine the point on the
line containing an edge that is nearest the viewpoint.  If the point is on
the edge, then do two linear interpolations: one from the close point to
each of the two endpoints.  If the point is off the edge, then just do a
single linear interpolation between the two endpoints.  This approach isn't
quite accurate, though, because the change in distance to the viewpoint is
not linearly related to distance along the edge (though it's probably good
enough for approximation).
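    For what it's worth, that closest point is just the perpendicular
projection of the viewpoint onto the line through the edge, which is cheap
to find.  A sketch (hypothetical names):

```python
def closest_param(p0, p1, v):
    """Parameter t of the point p0 + t*(p1 - p0) on the infinite line
    through the edge that is nearest to viewpoint v.  If 0 <= t <= 1,
    the nearest point lies on the edge itself."""
    n = len(p0)
    d = [p1[i] - p0[i] for i in range(n)]   # edge direction
    w = [v[i] - p0[i] for i in range(n)]    # p0 -> viewpoint
    return sum(w[i] * d[i] for i in range(n)) / sum(e * e for e in d)
```

For the cd edge in the diagram above this gives t = 0.5, i.e. the nearest
point is the middle of the edge, which is exactly where per-vertex cueing
goes most wrong.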

    So which method do most display devices really use?  Is there another
approach I've overlooked?

______________________________________________________________________________
Steve Hollasch                /      Arizona State University (Tempe, Arizona)
hollasch@enuxha.eas.asu.edu  /  uunet!mimsy!oddjob!noao!asuvax!enuxha!hollasch

coy@ssc-vax (Stephen B Coy) (03/15/91)

In article <9103132039.AA00198@enuxha.eas.asu.edu> hollasch@enuxha.eas.asu.edu (Steve Hollasch) writes:
>a linear interpolation between the two endpoints.  This approach isn't
>quite accurate, though, because the change in distance to the viewpoint is
>not linearly related to distance along the edge (though it's probably good
>enough for approximation).

Depth is not linear in screen space; 1/depth is.  What you want to do
is calculate 1/depth at each vertex, linearly interpolate that across
the edge, and invert it wherever you need the actual depth.  The same
relationship comes up when writing a z-buffer type renderer.
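A quick demonstration, assuming a simple pinhole projection
(screen_x = x/z; all the numbers here are made up for illustration):

```python
# Endpoints of an edge in eye space: (x, z) with z = depth.
x0, z0 = -1.0, 1.0
x1, z1 = 1.0, 3.0
s0, s1 = x0 / z0, x1 / z1              # projected screen x of endpoints

# Project the eye-space midpoint of the edge and find its
# screen-space parameter t.
xm, zm = (x0 + x1) / 2, (z0 + z1) / 2
t = (xm / zm - s0) / (s1 - s0)

z_recip = 1.0 / ((1 - t) / z0 + t / z1)   # interpolate 1/z, then invert
z_naive = (1 - t) * z0 + t * z1           # interpolate z directly

assert abs(z_recip - zm) < 1e-12          # 1/z recovers the true depth
assert abs(z_naive - zm) > 0.4            # naive z interpolation does not
```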

stephen coy
coy@ssc-vax.UUCP

				BDIF