[net.graphics] Orphaned

sher@rochester.UUCP (David Sher) (07/31/85)

In article <6700022@datacube.UUCP> shep@datacube.UUCP writes:
>
>rochester!sher wrote:
 ...
>
>	I argue that the interconnection network between processors -is-
>the critical link in some machine vision architectures. A good article,
>"Computer Architectures for Pictorial Information Systems" appeared in
>November 1981 IEEE Computer. In that article, the dimensions of parallelism
>in image processing were explored. Simply put, parallelism in image
>processing may be broken down into:
>	- Operator Parallelism
>	- Image Parallelism
>	- Neighborhood Parallelism
>	- Pixel-bit Parallelism
>
>	I am not familiar with the two architectures (CMU WARP, BBN Butterfly)
>mentioned. But your results suggest that these architectures have most
>of any parallelism along the "image" axis. If this is in fact the case, I
>would agree totally with your findings.
>
>	Out along the other dimensions of parallelism, the interconnection
>issue becomes critical. An operator parallel intensive architecture requires
>small amounts of local processor storage, but has a high input and output
>bandwidth requirement due to its pipelined nature. My personal design
>bias has long favored operator parallel techniques.
>(shep == "Should Have Everything Pipelined")
>
>	Since there are so many different ways of addressing the "low-level"
>image processing tasks that underlie the scene segmentation issues, it
>would be foolish to lock into a particular "religion" for these tasks. Instead,
>I feel an open approach must be taken while we explore different architectures,
>and evaluate their performance.
>
>Shep Siegel                               ihnp4!datacube!shep
>Datacube Inc.                  ima!inmet!mirror!datacube!shep
>617-535-6644                  decvax!cca!mirror!datacube!shep
>4 Dearborn Rd.       decvax!genrad!wjh12!mirror!datacube!shep
>Peabody, Ma. 01960   {mit-eddie,cyb0vax}!mirror!datacube!shep

It turns out that WARP is in fact a highly pipelined architecture.
But there is so much parallelism in most low-level vision tasks that, no matter
what kind of parallelism your architecture supports, you can find it in the
task.  This is why the interconnection scheme is not particularly significant.

Also, operator parallelism (pipelining) is extremely nice.  The only form
of parallelism that is easier to deal with is probably what you call
image parallelism: chopping the image up into a set of sub-images
and running independent programs on them, perhaps with periodic synchronization
and cleanup around the borders.  This reduces the problem of programming
parallel machines to that of programming a single processor.  What I found
is that this kind of parallelism can be found in almost any low-level vision
task, and that any architecture with a half-decent amount of memory can
support it.
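The sub-image scheme described above can be sketched in a few lines of
Python (purely illustrative, not code from WARP, the Butterfly, or any
other system mentioned here; the 3x3 mean filter stands in for an
arbitrary local operator).  Each strip carries one extra "halo" row on
each side so the border cleanup comes out the same as a serial pass:

```python
def mean3x3(img, r, c):
    """Average of the 3x3 neighborhood of (r, c), clamped at the edges."""
    rows, cols = len(img), len(img[0])
    vals = [img[i][j]
            for i in range(max(r - 1, 0), min(r + 2, rows))
            for j in range(max(c - 1, 0), min(c + 2, cols))]
    return sum(vals) / len(vals)

def filter_rows(img, r0, r1):
    """Filter output rows r0..r1-1; the serial case is filter_rows(img, 0, rows)."""
    return [[mean3x3(img, r, c) for c in range(len(img[0]))]
            for r in range(r0, r1)]

def image_parallel(img, nstrips):
    """Split the image into horizontal strips and filter each one
    independently.  Each loop iteration touches only its own strip
    (plus one halo row per side), so the iterations could run as
    separate programs on separate processors."""
    rows = len(img)
    out = []
    for k in range(nstrips):
        a = k * rows // nstrips          # first output row of this strip
        b = (k + 1) * rows // nstrips    # one past the last output row
        top = max(a - 1, 0)              # halo row above, if any
        bot = min(b + 1, rows)           # halo row below, if any
        strip = img[top:bot]             # all the data this worker needs
        out.extend(filter_rows(strip, a - top, b - top))
    return out
```

Because the halo rows duplicate exactly the neighborhood a serial pass
would read, `image_parallel(img, n)` matches `filter_rows(img, 0, rows)`
for any number of strips; only the synchronization at strip boundaries
is left out of the sketch.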

-David Sher
sher@rochester
seismo!rochester!sher