[sci.virtual-worlds] Pixel-Planes corrections

leech@cs.unc.edu (Jonathan Leech) (04/03/91)

In article <1991Apr2.015325.22939@milton.u.washington.edu>, cdshaw@cs.UAlberta.CA (Chris Shaw) writes:

|> I'd add James Clark (SGI) and Fuchs/Poulton/Eyles/et al (UNC-CH) to that 
|> list.
|> The reason being that these people and their organizations have produced
|> machines that deliver 100,000+ real-time polygons per second, while Cray
|> and Hillis don't.

    Make that 1.6M Gouraud-shaded, z-buffered polygons per second, 1M+
Phong-shaded polygons per second, and 700K spheres per second on
Pixel-Planes 5 (at least until more performance tuning is done :-)

|> However, the challenge is then "how do I get my virtual geometric model into
|> each pixel processor". One answer to that question was designed and built by
|> Henry Fuchs/John Poulton/John Eyles and team over the last 10 years at
|> UNC Chapel Hill. Pixel-Planes is the name of the system, and they're 
|> currently on version 5. The core of their machine is a logic-enhanced frame
|> buffer, which has a tiny little processor attached to each pixel. ...
|>
|> Here there is massive parallelism: For a 1024 by 1280 display, there are
|> 1024 * 1280 = 1,310,720 processors. The parallelism is (arguably) at the
|> right place, in the pixel. Of course, the frame buffer has many bits per 
|> pixel (as much as 128 bits in the latest version, I think), and scan-out 
|> circuitry is included in each frame buffer chip. It's built, it works, 
|> they might even demo it at SIGGRAPH this year.

    The previous version of the machine was fully instantiated at
512^2 pixel processors, but that's a lot of idle silicon with
typically sized primitives.  The current version instead has a
variable number of 128^2-processor 'renderer' regions (208 bits of
memory per pixel processor), which software can assign to cover the
entire screen, plus a variable number of i860 frontend processors for
geometric transformations, vertex shading, etc.  The frame buffer is
separate from the renderers.  We plan to be at SIGGRAPH.
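
    To make the region idea concrete, here is a minimal sketch in C
(the renderer count and all names are invented for illustration; this
is not the actual Pixel-Planes software) that tiles a 1280x1024 screen
with 128x128 regions and hands them out round-robin to a pool of
renderers:

/* Hypothetical sketch, not the Pixel-Planes 5 software: tile a
 * 1280x1024 screen with 128x128 'renderer' regions and hand the
 * regions out round-robin to a pool of renderer boards. */
#include <stdio.h>

#define SCREEN_W    1280
#define SCREEN_H    1024
#define REGION       128    /* each renderer covers a 128x128 region   */
#define N_RENDERERS    8    /* assumed renderer count, purely for show */

int main(void)
{
    int x, y, n = 0;

    for (y = 0; y < SCREEN_H; y += REGION)
        for (x = 0; x < SCREEN_W; x += REGION) {
            printf("region (%4d,%4d) -> renderer %d\n",
                   x, y, n % N_RENDERERS);       /* round-robin */
            n++;
        }
    printf("%d regions cover the screen\n", n);  /* 80 regions  */
    return 0;
}

With the assumed 8 renderers, each would cover 10 of the 80 regions;
on the real machine the assignment is under software control, as
described above.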

|> I think the misconception here is that high bandwidth is needed to transmit
|> frame buffers from a renderer to a display device. This is not what you want
|> to do. The rendering box must be tightly coupled with the frame buffer,
|> which is why I'm wondering why Crays and CM2's are being suggested as VR
|> engines. Certainly, they would make good simulator boxes, but you should 
|> leave the rendering to the machines that are designed to render quickly, 
|> otherwise the latency will kill you.

    There is a project underway to link Pixel-Planes and a Cray Y-MP
via a gigabit/s network for interactive applications.  I'm not
involved, so I can't describe the details.

    Re comments about latency: Pixel-Planes is primarily used as a
display-list machine at present, running a PHIGS-derived graphics
library.  This can present latency problems along the host <-> i860
<-> renderer <-> frame buffer path.  However, people have successfully
moved applications down to run in parallel on the i860s, eliminating
the problem.
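
    To make the latency point concrete, here is a minimal sketch in C
(the function names are invented; this is not the real interface of
our graphics library) contrasting a host-driven display-list loop with
an application loop moved down onto the i860s:

/* Hypothetical sketch with invented names: a host-driven display-list
 * loop pays the host <-> i860 <-> renderer trip every frame; moving
 * the application loop onto the i860 frontends removes the host leg. */
#include <stdio.h>

typedef struct { float x, y, z; } Tracker;

/* Stubs standing in for tracker input and the rendering backend. */
static void read_tracker(Tracker *t)           { t->x = t->y = t->z = 0.0f; }
static void host_edit_display_list(Tracker *t) { (void)t; /* host -> i860 */ }
static void traverse_and_render(void)          { /* i860s -> renderers -> FB */ }
static void render_from_i860(Tracker *t)       { (void)t; /* i860s -> renderers */ }

/* Display-list style: the application lives on the host, so every
 * frame pays an extra host <-> i860 hop. */
static void host_driven_frame(void)
{
    Tracker head;
    read_tracker(&head);             /* sampled on the host  */
    host_edit_display_list(&head);   /* shipped to the i860s */
    traverse_and_render();           /* i860s feed renderers */
}

/* Application moved down onto the i860s: the host leg disappears. */
static void i860_driven_frame(void)
{
    Tracker head;
    read_tracker(&head);             /* sampled by an i860    */
    render_from_i860(&head);         /* straight to renderers */
}

int main(void)
{
    host_driven_frame();
    i860_driven_frame();
    printf("one frame each way\n");
    return 0;
}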

    Jon Leech (leech@cs.unc.edu)
    Pixel-Planes project member
    __@/