[comp.graphics] checking whether 2 2D polygons overlap

hugo@gem.stack.urc.tue.nl (Hugo Lyppens) (12/04/90)

I've partly implemented the Newell-Newell-Sancha hidden surface algorithm on the Amiga for a real-time 3D animation package I've written. This algorithm is also known as the Painter's algorithm, for it establishes a priority list and draws polygons from the back to the front.

The program allows scenes composed of planar polygons which have no holes. I 
don't however get perfect results because there's one step in the algorithm I
 don't know how to implement: to check if the projections of two polygons on 
the screen overlap or not. I haven't been able to find an article on this 
subject, not even in articles from the authors of the painter's algorithm
themselves!

So please could anyone help me with some hints or source code to determine QUICKLY whether two 2D polygons overlap or not? Then I'll be able to implement the algorithm completely.

Please E-mail your responses to

hugo@stack.urc.tue.nl

Any help is greatly appreciated!

jcs@crash.cts.com (John Schultz) (12/05/90)

In <111@gem.stack.urc.tue.nl> hugo@gem.stack.urc.tue.nl (Hugo Lyppens) writes:


>I've partly implemented the Newell-Newell-Sancha hidden surface algorithm on the Amiga for a real-time 3D animation package I've written. This algorithm is also known as the Painter's algorithm, for it establishes a priority list and draws polygons from the back to the front.

>The program allows scenes composed of planar polygons which have no holes. I 
>don't however get perfect results because there's one step in the algorithm I
> don't know how to implement: to check if the projections of two polygons on 
>the screen overlap or not. I haven't been able to find an article on this 
>subject, not even in articles from the authors of the painter's algorithm
>themselves!

>So please could anyone help me with some hints or source code to determine QUICKLY whether two 2D polygons overlap or not? Then I'll be able to implement the algorithm completely.

  Two steps: Check to see if their extents overlap (minX,maxX,minY,maxY of
the polygon). This step requires only boolean comparisons. You can compute
the extents in your 2D clipping code. If you're in for speed, this step
may be all you need.
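
  In C, the extent test might look something like this (a minimal
sketch; the Poly struct and its field names are inventions for
illustration):

/* Trivial reject: two screen extents overlap iff they overlap on
   both axes.  The Poly layout here is hypothetical. */
typedef struct {
    int minX, maxX, minY, maxY;   /* screen-space extent */
    /* ... vertex data etc. ... */
} Poly;

int extents_overlap(const Poly *a, const Poly *b)
{
    return a->minX <= b->maxX && b->minX <= a->maxX &&
           a->minY <= b->maxY && b->minY <= a->maxY;
}
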
  The next step, if the extents overlap, is to find out if the polygons
actually overlap. One way is to use a clipping algorithm to clip one
polygon against another (Instead of clipping against a rectangle, clip
against a polygon. Most screen clippers are just special case clippers
for rectangles). You might also be able to analyze the relationship
between the area of the extent enclosing *both* polygons and the areas
of each polygon within that extent (if the polygons overlap, they cover
less combined area inside that extent than their separate areas add up to).


  John
 

rodent@netcom.UUCP (Richard Noah) (12/05/90)

hugo@gem.stack.urc.tue.nl (Hugo Lyppens) writes:
[implementing newell-newell-sancha on an Amiga]
>the screen overlap or not. I haven't been able to find an article on this 
>subject, not even in articles from the authors of the painter's algorithm
>themselves!

I did this a while ago, using Foley-VanDam.  The 2D-overlap algo is either
in the same book, or another common reference, because I didn't have
trouble finding it.  The only problem is, this algorithm DOESN'T WORK
CONSISTENTLY.  Even if you limit yourself to scenes composed of the simplest
non-intersecting triangles, newell-newell-sancha algo will occasionally
be unable to depth-sort them correctly.

Is there an algorithm that always works?  If there is, it's a closely
guarded secret of the high priests of computer graphics.  I've tried for
years to find an algorithm that will work in all cases, spending months
coding, testing, and ending up in defeat.

I went to Computer Literacy (for those unknowing, it's a VERY complete
bookstore) and looked in EVERY computer graphics text they had.
Many didn't even mention hidden-line algos, some mentioned them but didn't
elaborate, and several only had elaborate rendering techniques useless
for real-time.

Only two books (everybody knows which) had a depth-sort algorithm in
detail, and they both described the (oh no!) Newell-Newell-Sancha algo.

If anyone could mention/describe/point to a SINGLE ALGORITHM ANYWHERE
that depth-sorts correctly, I would be indebted forever.  Would one of
the high priests reading this group care to reveal this secret to the
masses?

---------------------------------
Ben Discoe, caltech escapee and frustrated visionary at large. 

bakke@plains.NoDak.edu (Jeffrey P. Bakke) (12/05/90)

In article <18082@netcom.UUCP> rodent@netcom.UUCP (Richard Noah) writes:
> hugo@gem.stack.urc.tue.nl (Hugo Lyppens) writes:
> [implementing newell-newell-sancha on an Amiga]
> >the screen overlap or not. I haven't been able to find an article on this 
> >subject, not even in articles from the authors of the painter's algorithm
> >themselves!
> 
[ Etc ... ]
> If anyone could mention/describe/point to a SINGLE ALGORITHM ANYWHERE
> that depth-sorts correctly, I would be indebted forever.  Would one of
> the high priests reading this group care to reveal this secret to the
> masses?
Well, I'm not a graphics guru of any version, but I really don't think there
is any way to have a depth-sort algorithm work in all cases.  The depth
sort always fails whenever you have intersecting polygons or polygons that
overlay sections of each other.  I think the only way to work around this
problem short of ray-tracing or some other high-CPU-cost procedure is the
use of the Z-buffer algorithm (the painter's algorithm is essentially a
simplified, per-polygon version of it), where you depth-sort each pixel
individually instead of each polygon.  This is all I have ever seen that
eliminates the problem short of highly intensive algorithms.  The only
problem, of course, with the Z-buffer algorithm is that you need a fairly
large amount of memory to store the necessary information.  Although if you
have the memory, there would be very few ways of doing this any faster with
a different algorithm.
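
  For what it's worth, the per-pixel test itself is tiny.  A minimal
sketch in C (the buffer sizes and the plot() interface are invented
for illustration):

/* One depth value per pixel; test-and-set on every pixel written. */
#define WIDTH  320
#define HEIGHT 200

static float         zbuf[WIDTH * HEIGHT];   /* depth per pixel  */
static unsigned char frame[WIDTH * HEIGHT];  /* colour per pixel */

void plot(int x, int y, float z, unsigned char colour)
{
    int i = y * WIDTH + x;
    if (z < zbuf[i]) {            /* closer than what's there? */
        zbuf[i]  = z;
        frame[i] = colour;
    }
}
/* zbuf must be reset to "far away" before each frame; that reset and
   the buffer itself are the memory cost mentioned above. */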

Anyone else out there want to take a shot at a better solution?

-- 
Jeffrey P. Bakke                      |   There are a finite number of
  INTERNET:   bakke@plains.NoDak.edu  |   jokes in the world...         
  UUCP    : ...!uunet!plains!bakke    |     The overflow began 
  BITNET  : bakke@plains.bitnet       |   decades ago. 
"I am not a number, I am a free man!" - The Prisoner

td@alice.att.com (Tom Duff) (12/05/90)

Checking two polygons for overlap is fairly easy:
either some edge of one polygon crosses an edge of the other,
or one of the polygons is completely contained within the other.

You can check for containment by picking a single vertex of each
polygon and checking it for containment in the other.  (You
know how to do this, right?  If not, check the FAQs.)  If these
tests fail, you can just test each pair of edges for intersection.
The obvious method of doing this takes time proportional to n^2.
Since two polygons can intersect in O(n^2) points, you probably
can't do better than this, worst case.
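
Concretely, the whole test fits in a page of C.  A sketch (my own
types; it tests proper edge crossings and uses an even-odd
point-in-polygon test, so exactly-touching cases would need extra care):

typedef struct { double x, y; } Pt;

/* Twice the signed area of triangle (o,a,b); >0 means a left turn. */
static double cross(Pt o, Pt a, Pt b)
{
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

/* Proper crossing: each segment's endpoints straddle the other. */
static int segs_cross(Pt a, Pt b, Pt c, Pt d)
{
    return cross(a, b, c) * cross(a, b, d) < 0.0 &&
           cross(c, d, a) * cross(c, d, b) < 0.0;
}

/* Even-odd rule: count edges crossed by a ray cast to the right. */
static int point_in_poly(Pt p, const Pt *v, int n)
{
    int i, j, in = 0;
    for (i = 0, j = n - 1; i < n; j = i++)
        if ((v[i].y > p.y) != (v[j].y > p.y) &&
            p.x < v[j].x + (v[i].x - v[j].x) *
                           (p.y - v[j].y) / (v[i].y - v[j].y))
            in = !in;
    return in;
}

/* Overlap iff some edge pair crosses, or one contains the other. */
int polys_overlap(const Pt *p, int np, const Pt *q, int nq)
{
    int i, j;
    for (i = 0; i < np; i++)          /* O(np*nq) edge pairs */
        for (j = 0; j < nq; j++)
            if (segs_cross(p[i], p[(i + 1) % np],
                           q[j], q[(j + 1) % nq]))
                return 1;
    return point_in_poly(p[0], q, nq) ||
           point_in_poly(q[0], p, np);
}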

Of course, as has been already pointed out, a bounding box cull
will usually save much work.

The previous posting that suggested using a general polygon clipper
(like Weiler-Atherton) to test for overlap was overkill.  It's
Very Hard to get Weiler-Atherton to work properly; it's dead easy
to check polygons for overlap.

atc@cs.utexas.edu (Alvin T. Campbell III) (12/06/90)

In article 15436 of comp.graphics, (Jeffrey P. Bakke) writes:

>In article <18082@netcom.UUCP> rodent@netcom.UUCP (Richard Noah) writes:
>> 
>> If anyone could mention/describe/point to a SINGLE ALGORITHM ANYWHERE
>> that depth-sorts correctly, I would be indebted forever. 
>
>                                    ... I really don't think there 
>is any way to have a depth sort algorithm work in all cases.  The depth
>sort always fails whenever you have intersecting polygons or polygons that
>overlay sections of each other.  I think the only way to work around this
>problem ... is the use of the Z-buffer algorithm 
>
>Anyone else out there want to take a shot at a better solution.

I would suggest the use of a Binary Space Partitioning (BSP) Tree algorithm.
The basic technique leads to a depth sort without the limitations of the 
Newell-Newell-Sancha method, mentioned above.  

First, the input polygons are split across each other's planes, in order to 
prevent the intersection and cyclic-ordering problems mentioned above.  This
is done in an efficient manner which does not require each polygon to be 
tested against all others.

Next, a binary data structure is created which aids in the polygon
sorting process.  A modified inorder tree traversal results in a correct
depth sort of the polygons from any viewpoint.  Note that this data
structure is viewpoint-independent, and is well-suited for hidden
surface removal in animations where only the viewpoint changes.
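
As a concrete sketch of that traversal (in C, with an invented node
layout; draw_polys() stands for whatever the renderer does):

struct Poly;                            /* app-defined polygon data */
extern void draw_polys(const struct Poly *p);

typedef struct BSPNode {
    double          plane[4];      /* A,B,C,D of the splitting plane */
    struct Poly    *polys;         /* polygons lying in that plane   */
    struct BSPNode *front, *back;  /* subtree on each side           */
} BSPNode;

/* Modified inorder traversal: always draw the subtree on the far
   side of the node's plane first, then the node's own polygons,
   then the near side.  Painting in this order is correct from any
   eye point. */
void draw_back_to_front(const BSPNode *n, const double eye[3])
{
    double s;
    if (n == NULL) return;
    s = n->plane[0] * eye[0] + n->plane[1] * eye[1] +
        n->plane[2] * eye[2] + n->plane[3];
    if (s > 0.0) {                      /* eye on the front side */
        draw_back_to_front(n->back,  eye);
        draw_polys(n->polys);
        draw_back_to_front(n->front, eye);
    } else {
        draw_back_to_front(n->front, eye);
        draw_polys(n->polys);
        draw_back_to_front(n->back,  eye);
    }
}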

The basic work was done by Bruce Naylor in his dissertation research, 
and is documented in the somewhat theoretical paper:
Fuchs, Kedem, and Naylor, "On Visible Surface Generation by A Priori
Tree Structures", Proceedings of SIGGRAPH 1980, July 1980, pp. 124-133.

A later paper with a more practical orientation is:
Fuchs, Abram, and Grant, "Near Real-Time Shaded Display of Rigid 
Objects", Proceedings of SIGGRAPH 1983, JUly 1983, pp. 65-72.

I hope this information is helpful.

-- 
				A. T. Campbell, III
				CS Department, University of Texas
				atc@cs.utexas.edu

uselton@nas.nasa.gov (Samuel P. Uselton) (12/06/90)

posted from rodent@netcom.UUCP (Richard Noah), but signed by Ben Discoe:
>hugo@gem.stack.urc.tue.nl (Hugo Lyppens) writes:
>[implementing newell-newell-sancha on an Amiga]
>>the screen overlap or not. I haven't been able to find an article on this
>>subject, not even in articles from the authors of the painter's algorithm
>>themselves!
>
As pointed out by others (and my email to Lyppens) this is really the
same as clipping - found in lots of books.

>I did this a while ago, using Foley-VanDam.  The 2D-overlap algo is either
>in the same book, or another common reference, because I didn't have
>trouble finding it.  The only problem is, this algorithm DOESN'T WORK
                                           ^^^^^^^^^^^^^^
Newell-Newell-Sancha or the overlap (clipping) ??
(I assume N-N-S Painter's visible surface alg.)

>CONSISTENTLY.  Even if you limit yourself to scenes composed of the simplest
>non-intersecting triangles, newell-newell-sancha algo will occasionally
>be unable to depth-sort them correctly.

There is a standard picture in most texts showing a set of three convex
polygons that overlap in x, y, and z without intersecting.  They cannot be
"properly" depth sorted BY ANY METHOD! (So don't blame N-N-S or the text 
authors.)  The standard solution in this case is to split one of the polygons
along the plane of another.  This results in four convex polygons that ARE
readily depth "sortable".
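
The split itself is mechanical.  A sketch in C of slicing a convex
polygon along another polygon's plane (the vector type and output
buffers are my own; callers must size fr/bk for up to nin+1 vertices):

typedef struct { double x, y, z; } V3;

static double side(V3 p, V3 n, double d)   /* signed dist to plane */
{
    return n.x * p.x + n.y * p.y + n.z * p.z - d;
}

/* Split convex polygon in[0..nin-1] by the plane (n,d) into a front
   piece fr and a back piece bk.  Vertices on the plane go to both. */
void split_poly(const V3 *in, int nin, V3 n, double d,
                V3 *fr, int *nfr, V3 *bk, int *nbk)
{
    int i;
    *nfr = *nbk = 0;
    for (i = 0; i < nin; i++) {
        V3 a = in[i], b = in[(i + 1) % nin];
        double sa = side(a, n, d), sb = side(b, n, d);
        if (sa >= 0.0) fr[(*nfr)++] = a;
        if (sa <= 0.0) bk[(*nbk)++] = a;
        if ((sa > 0.0 && sb < 0.0) || (sa < 0.0 && sb > 0.0)) {
            double t = sa / (sa - sb);     /* edge meets the plane */
            V3 m;
            m.x = a.x + t * (b.x - a.x);
            m.y = a.y + t * (b.y - a.y);
            m.z = a.z + t * (b.z - a.z);
            fr[(*nfr)++] = m;              /* new vertex on the cut */
            bk[(*nbk)++] = m;
        }
    }
}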

The main benefit of this algorithm is not its speed for a single frame but
rather the fact that in most SEQUENCES the order of the polygons changes
very little from one frame to the next, so frames 2 through N can be done
faster.  This speed is useful in real (or near real) time work, but requires
fairly clever implementation to take full advantage of.

>
>Is there an algorithm that always works?  If there is, it's a closely
          ^^^^^^^^^^^^  "Hidden Surface Removal" I assume

>guarded secret of the high priests of computer graphics.  I've tried for
>years to find an algorithm that will work in all cases, spending months
>coding, testing, and ending up in defeat.

There is no ONE visible surface rendering algorithm best for all purposes.
That is why there are so many choices.  And why people ("high priests"? :-) )
continue to do research and publish new algorithms and improvements to the
old ones.  I'm only a low priest I guess (  :-)  ) but in every graphics
class I ever taught (in 10 years) we spent at least a couple of weeks 
discussing the hidden/visible surface problem, why there were choices to be
made, and what the questions were that help determine which answer is appropriate.
Hardly a "closely guarded secret."
>
>I went to Computer Literacy (for those unknowing, it's a VERY complete
>bookstore) and looked in EVERY computer graphics text they had.
>Many didn't even mention hidden-line algos, some mentioned them but didn't
>elaborate, and several only had elaborate rendering techniques useless
>for real-time.

It is a HARD problem.  Saying "useless for real-time" and discounting 
the entire content is throwing the baby out with the bathwater.  It also
suggests that your hardware is not (yet) up to the task you are attempting. 
>
>Only two books (everybody knows which) had a depth-sort algorithm in
>detail, and they both described the (oh no!) Newell-Newell-Sancha algo.
>
>If anyone could mention/describe/point to a SINGLE ALGORITHM ANYWHERE
>that depth-sorts correctly, I would be indebted forever.  Would one of
^^^^^^^^^^^^^^^^^^^^^^^^^^^ as pointed out above,
this is mathematically impossible in general.

>the high priests reading this group care to reveal this secret to the
>masses?
>
Another possibility you might consider  (before buying a Personal Iris
and making the problem moot) is the BSP-tree based algorithm.  It requires
building a tree data structure rather than a sorted list.  Traversing the
tree at rendering time is almost as fast.  The tree does not need to be
changed as the viewer moves around, but will need to be revised when objects
move relative to each other.  [Foley, VanDam, Feiner & Hughes starting p. 675]

>---------------------------------
>Ben Discoe, caltech escapee and frustrated visionary at large. 

Sam Uselton		uselton@nas.nasa.gov	ex-prof, now just research!
employed by CSC		working for NASA	speaking for myself

toddh@megatek.UUCP (Todd Heckel) (12/06/90)

Actually, the Z-buffer is a very brute-force technique, one might even say "stupid".
It gets the job done, but it wastes a lot of time dealing with every pixel from
every polygon in the scene, whether or not it will ultimately be visible.  It is
also not compatible with any kind of real-time anti-aliasing technique.  The
solution, of course, involves much more complex algorithms which perform various
types of culling operations in order to limit the number of pixels actually rendered.

						Todd Heckel, net neophyte.

td@alice.att.com (Tom Duff) (12/06/90)

rodent@netcom.UUCP says:
>Even if you limit yourself to scenes composed of the simplest
>non-intersecting triangles, newell-newell-sancha algo will occasionally
>be unable to depth-sort them correctly.

>Is there an algorithm that always works?  If there is, it's a closely
>guarded secret of the high priests of computer graphics.  I've tried for
>years to find an algorithm that will work in all cases, spending months
>coding, testing, and ending up in defeat.

I find this hard to believe.  Foley, van Dam, Feiner & Hughes describe
the Newell, Newell & Sancha algorithm on pp 673-675 of Computer Graphics,
Principles & Practice.  The algorithm is fairly simple.
I've coded it up at least once, long ago when I gave it as a programming
assignment.  It took about a day, and
never caused any trouble.  I'm utterly astounded that anyone could spend
large amounts of time and still fail to get it working.

It's fairly obvious that the algorithm cannot fail to depth-sort a scene,
since whenever the depth order of two polygons cannot be determined, it
replaces one of them by splitting it in two pieces, one on each side of the
other (original) polygon.  Since the algorithm refuses to allow polygons
whose depth order cannot be determined to remain in the scene, the only
possible failure mode would be non-termination, not incorrect depth-sorting.
Non-termination is obviously out of the question, since each polygon can
be split at most by each other (original) polygon in the scene.

Maybe you could describe in more detail a simple scene on which your program
fails?

gessel@masada.cs.swarthmore.edu (Daniel Mark Gessel) (12/07/90)

In article <848@portnoy.megatek.uucp> toddh@megatek.UUCP (Todd Heckel) writes:


>Actually, the Z-buffer is a very brute-force technique, one might even say "stupid".
>It gets the job done, but it wastes a lot of time dealing with every pixel from
>every polygon in the scene, whether or not it will ultimately be visible.  It is
>also not compatible with any kind of real-time anti-aliasing technique.  The
>solution, of course, involves much more complex algorithms which perform various
>types of culling operations in order to limit the number of pixels actually rendered.

						   Todd Heckel, net neophyte.

Actually, anti-aliasing a Z-buffer is easy. But the technique I would use is
as brute-force and stupid as a Z-buffer itself. Render the image into a buffer
with twice the resolution in each direction (or more) and have video hardware
that averages from 4 pixels (or more). This is probably the most
general anti-aliasing technique, and I'd like to see it in things like Display
PostScript and stuff. But it does use 4 (or more) times as much memory and
processing power.
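
Where the video hardware can't do the averaging for you, it's a
one-liner per pixel in software.  A sketch in C for the 2x-per-axis
case (8-bit grey buffers, just to keep the example short):

/* Box filter: average each 2x2 block of the high-res image down to
   one pixel of the display-res image (lo is (w/2) by (h/2)). */
void downsample_2x2(const unsigned char *hi, int w, int h,
                    unsigned char *lo)
{
    int x, y;
    for (y = 0; y < h; y += 2)
        for (x = 0; x < w; x += 2) {
            int sum = hi[y * w + x]       + hi[y * w + x + 1]
                    + hi[(y + 1) * w + x] + hi[(y + 1) * w + x + 1];
            lo[(y / 2) * (w / 2) + x / 2] = (unsigned char)(sum / 4);
        }
}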

Dan
--
Daniel Mark Gessel                          Independent Software Consultant
Internet: gessel@cs.swarthmore.edu                   and Developer
I do not represent Swarthmore College (thank God).

jcs@crash.cts.com (John Schultz) (12/07/90)

In <18082@netcom.UUCP> rodent@netcom.UUCP (Richard Noah) writes:

[stuff deleted]
>Is there an algorithm that always works?  If there is, it's a closely
>guarded secret of the high priests of computer graphics.  I've tried for
>years to find an algorithm that will work in all cases, spending months
>coding, testing, and ending up in defeat.

  The BSP (binary space partition) tree algorithm will work for all cases.
The only disadvantage is the need to split polygons that straddle other
polygons' planes.  These splits are done *before* runtime (for real-time
work), so this algorithm is very good when you can crank out polygons at
high speed.
  You can also do splits for the Newell et al. algorithm *during* runtime,
and thus it will work for all cases (but more slowly).

>If anyone could mention/describe/point to a SINGLE ALGORITHM ANYWHERE
>that depth-sorts correctly, I would be indebted forever.  Would one of
>the high priests reading this group care to reveal this secret to the
>masses?

  See "Computer Graphics, Principles and Practice", by Foley et all.
It presents a lucid explanation of BSPs and the depth sort method with
splits.


  John

jcs@crash.cts.com (John Schultz) (12/07/90)

In <11695@alice.att.com> td@alice.att.com (Tom Duff) writes:

[stuff deleted]

>The previous posting that suggested using a general polygon clipper
>(like Weiler-Atherton) to test for overlap was overkill.  It's
>Very Hard to get Weiler-Atherton to work properly; it's dead easy
>to check polygons for overlap.

  The Sutherland-Hodgman algorithm will clip any polygon (convex or
concave) to any convex polygon. It's really simple to code, and can
also be used as the primary 2D or 3D screen clipper. But doing a
pointwise compare of the points in one polygon (using cross products
or a winding-number test) will probably be faster. And if you're out
for speed, skip this step entirely, or if you have a fast polygon
machine, use a BSP (or if objects are moving, use BSPs for each
object, then sort the objects).
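
  A sketch of the per-edge step of Sutherland-Hodgman in C (2D, my
own types; the caller runs this once per edge of the convex clip
polygon, ping-ponging between two vertex buffers):

typedef struct { double x, y; } P2;

/* >0 when p lies to the left of the directed clip edge a->b. */
static double inside(P2 p, P2 a, P2 b)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

static P2 isect(P2 p, P2 q, P2 a, P2 b)  /* edge pq meets line ab */
{
    double sp = inside(p, a, b), sq = inside(q, a, b);
    double t = sp / (sp - sq);
    P2 r;
    r.x = p.x + t * (q.x - p.x);
    r.y = p.y + t * (q.y - p.y);
    return r;
}

/* Clip subject polygon in[0..nin-1] against one clip edge a->b,
   writing the result to out; returns the new vertex count. */
int clip_edge(const P2 *in, int nin, P2 a, P2 b, P2 *out)
{
    int i, nout = 0;
    for (i = 0; i < nin; i++) {
        P2 p = in[i], q = in[(i + 1) % nin];
        int pin = inside(p, a, b) >= 0.0;
        int qin = inside(q, a, b) >= 0.0;
        if (pin) out[nout++] = p;              /* keep inside point */
        if (pin != qin) out[nout++] = isect(p, q, a, b);
    }
    return nout;
}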


  John

bakke@plains.NoDak.edu (Jeff Bakke) (12/07/90)

In article <848@portnoy.megatek.uucp> toddh@megatek.UUCP (Todd Heckel) writes:
> Actually, the Z-buffer is a very brute-force technique, one might even say "stupid".
> It gets the job done, but it wastes a lot of time dealing with every pixel from
> every polygon in the scene, whether or not it will ultimately be visible.  It is
> also not compatible with any kind of real-time anti-aliasing technique.  The
> solution, of course, involves much more complex algorithms which perform various
> types of culling operations in order to limit the number of pixels actually rendered.

Correct me if I'm wrong, but from what I understand the Z-buffer method
is the method most often implemented in high-performance graphics
hardware.  On the hardware level it would seem to be very efficient
and easy to implement.  I agree that on a software level it would be
stupid to fully implement a Z-buffer method of hidden-surface removal,
but I made the assumption that if you were to do a software method, you
would make adjustments and changes to the generalized algorithm to
decrease unnecessary calculation and overlap.

-- 
Jeffrey P. Bakke                      |   There are a finite number of
  INTERNET:   bakke@plains.NoDak.edu  |   jokes in the world...         
  UUCP    : ...!uunet!plains!bakke    |     The overflow began 
  BITNET  : bakke@plains.bitnet       |   decades ago. 
"I am not a number, I am a free man!" - The Prisoner

uad1077@dircon.uucp (Ian Kemmish) (12/08/90)

<Discussion of overlapping polygons in the depth sort. >

There are various ways to do this.  The A-buffer fudges it, on the
basis that in most cases the error introduced by fudging is very
small.  (And I've known plenty of commercial work done this way
that seems to bear this out.)  The Warnock subdivision algorithm
would certainly handle it OK, until you got to the sub-pixel level,
when you could again fudge it.  I believe the Weiler-Atherton algorithm
handles it if done properly.  Catmull's analytic algorithm (SIGGRAPH 84)
certainly does it properly.

I am surprised at the claim that no textbooks could be found that
mentioned these algorithms.

Also, note that `doing it properly' is not necessarily expensive.  Most
of these algorithms were devised when the VAX was new.  These days,
a flop is far cheaper than a page fault -- I am in the middle of
moving from a super-sampled jittered z-buffer to an analytic
visibility algorithm with direct-convolution filtering for just
that reason (plus a natural fear of patent lawyers!).

-- 
Ian D. Kemmish                    Tel. +44 767 601 361
18 Durham Close                   uad1077@dircon.UUCP
Biggleswade                       ukc!dircon!uad1077
Beds SG18 8HZ United Kingdom  uad1077%dircon@ukc.ac.uk

toddh@megatek.UUCP (Todd Heckel) (12/09/90)

From article <GESSEL.90Dec6110219@masada.cs.swarthmore.edu>, by gessel@masada.cs.swarthmore.edu (Daniel Mark Gessel):
>
> Actually, anti-aliasing a Z-buffer is easy. But the technique I would use is
> as brute-force and stupid as a Z-buffer itself. Render the image into a buffer
> with twice the resolution in each direction (or more) and have video hardware
> that averages from 4 pixels (or more). This is probably the most
> general anti-aliasing technique, and I'd like to see it in things like Display
> PostScript and stuff. But it does use 4 (or more) times as much memory and
> processing power.
> 
> Dan
> --
> Daniel Mark Gessel                          Independent Software Consultant
> Internet: gessel@cs.swarthmore.edu                   and Developer
> I do not represent Swarthmore College (thank God).

	Note that I said "real-time".  I have designed systems that do exactly
	as you described, at either four or sixteen times the display resolution
	(two by two or four by four).  While the images look great, they take
	four or (gasp) sixteen times as long to generate as an un-anti-aliased
	image.  Newer techniques based on sorting and bit-masks, like the
	A-buffer, are currently being adapted for real-time image generation
	use.
-- 
	Todd Heckel
	Megatek Corporation	uunet!megatek!toddh	
	9645 Scranton Road		or
	San Diego, CA  92121	toddh@megatek.uucp

toddh@megatek.UUCP (Todd Heckel) (12/09/90)

From article <7039@plains.NoDak.edu>, by bakke@plains.NoDak.edu (Jeff Bakke):
> 
> Correct me if I'm wrong, but from what I understand the Z-buffer method
> is the method most often implemented in high-performance graphics
> hardware.  On the hardware level it would seem to be very efficient
> and easy to implement.  I agree that on a software level it would be
> stupid to fully implement a Z-buffer method of hidden-surface removal,
> but I made the assumption that if you were to do a software method, you
> would make adjustments and changes to the generalized algorithm to
> decrease unnecessary calculation and overlap.
> 
	I'm sorry, I didn't mean to imply that it would be stupid to use
	the algorithm in a high-performance system.  And while it is very
	easy to implement, it is NOT efficient.  Just the depth storage
	for a floating-point implementation requires well over 4 megabytes
	of high-speed RAM!  Also, like I said originally, a lot of processing
	may end up being wasted on pixels that end up being occluded anyway.
	You're correct when you state that the Z-Buffer method is the most
	implemented in high-performance graphics ___Workstations___.
	However, it is very uncommon to see Z-Buffers used in high-performance
	___Image Generators___, like those used for Flight Simulators and such.
-- 
	Todd Heckel
	Megatek Corporation	uunet!megatek!toddh	
	9645 Scranton Road		or
	San Diego, CA  92121	toddh@megatek.uucp

coy@ssc-vax (Stephen B Coy) (12/13/90)

In article <850@portnoy.megatek.uucp> toddh@megatek.UUCP (Todd Heckel) writes:
>	You're correct when you state that the Z-Buffer method is the most
>	implemented in high-performance graphics ___Workstations___.
>	However, it is very uncommon to see Z-Buffers used in high-performance
>	___Image Generators___, like those used for Flight Simulators and such.
>-- 
>	Todd Heckel
>	Megatek Corporation	uunet!megatek!toddh	
>	9645 Scranton Road		or
>	San Diego, CA  92121	toddh@megatek.uucp

Uncommon, yes, but not unheard of.  A few years ago Boeing Aerospace
delivered a very nice system for image generation for a B-1B flight
simulator.  The system was Z-buffer based with some enhancements
along the lines of the A-buffer for anti-aliasing.  Boeing also lost
a few tens of millions on the contract.  Similar technology was used
for this summer's delivery of an image generation system to the Army
for mission planning.  One of the big selling points with a Z-buffer
based system vs a BSP based system is the faster turn-around time
for developing databases.  The nature of the Z-buffer algorithm also
lends itself to heavy pipelining/parallelization.

Stephen Coy
uw-beaver!ssc-vax!coy

			A house divided is a duplex.

davidle@microsoft.UUCP (David LEVINE) (12/18/90)

In article <11697@alice.att.com> td@alice.att.com (Tom Duff) writes:
>
>>Is there an algorithm that always works?  If there is, it's a closely
>>guarded secret of the high priests of computer graphics.  I've tried for
>>years to find an algorithm that will work in all cases, spending months
>>coding, testing, and ending up in defeat.
>
>I find this hard to believe.  Foley, van Dam, Feiner & Hughes describe
>the Newell, Newell & Sancha algorithm on pp 673-675 of Computer Graphics,
>Principles & Practice.  The algorithm is fairly simple.
>I've coded it up at least once, long ago when I gave it as a programming
>assignment.  It took about a day, and
>never caused any trouble.  I'm utterly astounded that anyone could spend
>large amounts of time and still fail to get it working.
>
>It's fairly obvious that the algorithm cannot fail to depth-sort a scene,
>since whenever the depth order of two polygons cannot be determined, it
>replaces one of them by splitting it in two pieces, one on each side of the
>other (original) polygon.  Since the algorithm refuses to allow polygons
>whose depth order cannot be determined to remain in the scene, the only
>possible failure mode would be non-termination, not incorrect depth-sorting.
>Non-termination is obviously out of the question, since each polygon can
>be split at most by each other (original) polygon in the scene.
>
>Maybe you could describe in more detail a simple scene on which your program
>fails?

I will admit that I have not followed this thread fully and am not up
on the very latest rendering techniques.  However, you seem to be assuming
that the computation of the relationship between pairs of polygons will
always be accurate.

Have you tried this algorithm on real world databases with many thousands of
polygons?  There is no fast, perfect hidden surface or hidden line
algorithm I know of.  Ray tracing is very good and insensitive to local
singularities which tend to be confined to one or two bad pixels.  Other
techniques will always fail on some databases in my experience.

Singularities crop up in complex scenes.  Accuracy is a more important
consideration than speed in finished engineering work.  For example, in
my experience, the multi-million dollar (yes, I know that means nothing)
CSG software we used could not generate a perfect hidden line view of our
model which contained over 100,000 polygons with much fine detail and lots
of trusses containing many points where several cones meet, each one involving
dozens of polygons with a common point.  Remember, this model is generated
with floating point coordinates such that even if the final work is done
in integer space, the result may be many polygons intersecting in arbitrary
ways.  Touch up work with white-out and a straight edge was very common
when finished drawings were being produced.

Other experience has shown me that there is no perfect Computer Graphics
Wizard solution to this type of problem.  The "art" is to find algorithms
that are fast and that fail rarely and that, in failure, produce only
local errors as opposed to making large parts of the result incorrect.

Hidden line views are particularly troublesome in this respect because
often an entire line or a large portion of one is missing which is often
much more noticeable than some pixels being the wrong color.  Hidden-line
rendering is required for driving plotters, among other uses. 

Your logic on why one particular algorithm is perfect does not convince me.
It assumes that splitting polygons around others and other computations
always result in a correct representation.  In practice, I suspect that
in testing this approach, you would refine it until you think that it is
indeed perfect and then a customer will send you a model that fails in
one particular orientation and view in a way that your program did not
anticipate.  You must then change your algorithm, probably slowing it down
just a bit more and most likely destabilizing the code, fixing more failure
modes than are added, or maybe not.

This is usually the development pattern of high end graphics packages that
must be robust.  We are learning much as the years go by, and many of the
problems encountered can be found in the literature.  Still,
in the final result, rendering code that must be accurate seems to wind up
with a lot of magic in it and I think that is the experience reported by
the poster, by myself and by everybody else I have encountered in my
relatively limited exposure.

David Levine (davidle@microsoft)

jcs@crash.cts.com (John Schultz) (12/19/90)

In <59890@microsoft.UUCP> davidle@microsoft.UUCP (David LEVINE) writes:

>In article <11697@alice.att.com> td@alice.att.com (Tom Duff) writes:
>>
>>It's fairly obvious that the algorithm cannot fail to depth-sort a scene,
>>since whenever the depth order of two polygons cannot be determined, it
>>replaces one of them by splitting it in two pieces, one on each side of the
>>other (original) polygon.  Since the algorithm refuses to allow polygons
>>whose depth order cannot be determined to remain in the scene, the only
>>possible failure mode would be non-termination, not incorrect depth-sorting.
>>Non-termination is obviously out of the question, since each polygon can
>>be split at most by each other (original) polygon in the scene.
>>
>>Maybe you could describe in more detail a simple scene on which your program
>>fails?

>I will admit that I have not followed this thread fully and am not up
>on the very latest rendering techniques.  However, you seem to be assuming
>that the computation of the relationship between pairs of polygons will
>always be accurate.

  He is correct, mathematically.

>Have you tried this algorithm on real world databases with many thousands of
>polygons?  There is no fast, perfect hidden surface or hidden line
>algorithm I know of.  Ray tracing is very good and insensitive to local
>singularities which tend to be confined to one or two bad pixels.  Other
>techniques will always fail on some databases in my experience.

  As has been previously stated, the Binary Space Partition and the Depth
Sort with splits will work for *any* scene from *any* angle, regardless of
the number of polygons. Study these algorithms to become enlightened.
Mathematically, these schemes are robust, and could probably even be written
up as a proof (Tom?). Errors might occur due to loss of precision from
excessive plane divisions, but errors will not occur due to an error in the
algorithm; there are no "undefined" cases; it is completely general.

>Other experience has show me that there is no perfect Computer Graphics
>Wizard solution to this type of problem.  The "art" is to find algorithms
>that are fast and that fail rarely and that, in failure, produce only
>local errors as opposed to making large parts of the result incorrect.

  Mathematicians have found the BSP and the Depth Sort with Splits to be
valid. Sometimes directed math, rather than "art" by pure trial and error,
gives the best results.

>Your logic on why one particular algorithm is perfect does not convince me.
>It assumes that splitting polygons around others and other computations
>always result in a correct representation.  In practice, I suspect that
>in testing this approach, you would refine it until you think that it is
>indeed perfect and then a customer will send you a model that fails in
>one particular orientation and view in a way that your program did not
>anticipate.  You must then change your algorithm, probably slowing it down
>just a bit more and most likely destabalizing the code, fixing more failure
>modes than are added or maybe not.

  Generalized solutions cannot fail, else they aren't generalized. The BSP
is so simple (and fast) that you can readily see that it cannot fail to
produce the correct representation. The BSP is as accurate as the formula
used to split a polygon by a 3D plane. Check it out, you'll be pleasantly
surprised (see "Computer Graphics, Principles and Practice", Addison-Wesley).
For doing engineering views of static scenes with variable viewing angles,
the BSP is ideal, and probably the fastest method.
  One doesn't need to be a "Wizard", just persistent and open-minded.



  John