stein@oscsuna.osc.edu (Rick 'Transputer' Stein) (05/06/89)
I have prepared the following thoughts on an idea I have called a concurrent visualization system (CVS). I would like to have commentary and opinions on this machine. So here goes!

The CVS is a linearly scalable computation and rendering engine constructed from Inmos transputers. The machine gains its name from the following idea: the CVS is a multipipelined architecture, and each pipeline contains a framebuffer big enough to handle a 1Kx1K screen with 8-bit color pixels.

If I have some static polygonal database organized as a tessellation, with BSP-trees, octrees, or some other spatially presorted entity at each instance of the tessellation, I can determine which instances of the tessellation lie within the viewing frustum with a simple eyepoint calculation. When I perform the database intersection, I know which instances to draw and in what order (just like the big boys do in the flight simulators).

So, if I have one pipeline which operates on the database (like your favorite workstation), I get some nominal level of performance. Now, if I objectively decompose the database onto, say, N pipelines, I can gain N times the performance, since each pipeline only has to transform 1/N of the total visible polygons for a particular scene (assuming some kind of load balance across the pipelines).

Ok. At the end of each pipeline sits one of these framebuffers. What I'd like to do is "OR" together the collective outputs of the framebuffers at the video controller; in this case I'd like to use an Inmos G300 CVC (color video controller), with its DMA strobes used to clock out the pixels stashed in the DPVRAMs. That is, each pipeline can _randomly_ write into its framebuffer, and on the whole only a little overlap of pixels should occur -- like looking at a set of three pyramids, where the ones on the left and right partially obscure the one in the middle:

      *              *
     / \            / \
    /   \    *     /   \
   /     \  / \   /     \
  /   1   \/ 3 \ /   2   \
 ----------      ---------

[Numbers indicate the pipelines/framebuffers that process the individual polygons.]
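To make the OR-merge idea concrete, here is a minimal software sketch in C (C rather than occam, purely for illustration). Everything here is an assumption of the sketch, not a detail from my design: the names (Framebuffer, merge_framebuffers, NPIPES) are made up, unwritten pixels are taken to hold the value 0, and overlap between pipelines is assumed rare, so a bitwise OR over all the framebuffers reconstructs the scene:

```c
/* Sketch: merge N pipeline framebuffers by OR-ing pixel values.
 * This is the software analogue of wiring the DPVRAM outputs
 * together at the video controller; all names are illustrative. */
#include <stdint.h>
#include <stddef.h>

#define WIDTH  1024
#define HEIGHT 1024
#define NPIPES 4                      /* N rendering pipelines */

typedef uint8_t Framebuffer[HEIGHT][WIDTH];

void merge_framebuffers(Framebuffer fb[NPIPES], Framebuffer out)
{
    for (size_t y = 0; y < HEIGHT; y++)
        for (size_t x = 0; x < WIDTH; x++) {
            uint8_t pixel = 0;
            for (size_t p = 0; p < NPIPES; p++)
                pixel |= fb[p][y][x]; /* unwritten pixels contribute 0 */
            out[y][x] = pixel;
        }
}
```

In the real machine the merge would of course happen at scanout, with the G300's DMA strobes pulling pixels from each bank of DPVRAM, not in a software loop like this.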
Like all flight simulators, one typically must draw more information than is needed to describe a scene, because of overlap in the geometry (and other reasons). I planned to have an extra bank of DPRAM as a pixel-enable buffer (PEB). The PEB is a 1Kx1K-bit DPRAM in which each bit represents a pixel in the framebuffer of its pipe; a bit is set only when the corresponding framebuffer pixel is written. These PEBs would be used by the G300 to selectively DMA out only the pixels which are active, so I avoid scanning every framebuffer from top to bottom (at 30Hz, with 16 FBs, this could be a problem :-)).

If I set the pixel-enables in precisely the right drawing order -- meaning that I use a painter's algorithm and go back to front and left to right, so that the occlusion problem won't bite me -- wouldn't this system seem to work?

My eyepoint is dynamic [I'm hooked up to a joystick or something], so I must create the transformations which rotate all the tessellation instances according to the perspective I've mapped. Since I've got a tessellation, I know precisely the relationship of each instance to the others. That is, with a fixed instance size, I know that, for example, instance A is at (1,1) in the tessellation and instance B is at (4,6), so I can shift a certain number of bits (or multiply by (4-1) in x and (6-1) in y) and find the other instance. This means that when I decompose my scene, the only piece of data replicated at the nodes of the multicomputer is the tessellation structure, not the BSPs or octrees.

Comments and suggestions, but no flames, are welcome. Thanks.
-- 
Richard M. Stein (aka Rick 'Transputer' Stein)
Office of Research Computing @ The Ohio Supercomputer Center
Ghettoblaster vacuum cleaner architect and Trollius semi-guru
Internet: stein@pixelpump.osc.edu