[comp.parallel] One parallel v. many uniprocessor algorithms

hjm@uunet.UU.NET (Hubert Matthews) (05/03/89)

One thing that I didn't see mentioned (perhaps someone said it and I
missed it) in the Distributed Simulation discussion is the subject of
memory requirements, or, roughly translated, cost.  The discussion is
about whether it is "better" to run several simulations in parallel or
to spread one algorithm over several processors.

Consider n processors running n copies of the same program.  The total
memory requirement will be n times the uniprocessor requirement.  Now
consider one program distributed over the same n processors.  Let's
make a wild guess and say that the total memory in the distributed
version is of the same order of magnitude as in the uniprocessor
version; the exact figure is not important.  The replicated version
therefore needs roughly n times as much memory as the distributed one
to do the same work in the same time, so we lose out on the cost of
the extra memory.  Since memory is the dominant cost in a computing
system today, this does seem to give you less bang-per-buck.
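
A back-of-envelope sketch of the comparison.  The figures below (16
processors, 4 MB per program copy) are made up for illustration, and
the distributed version is simply assumed to need about as much total
memory as one uniprocessor copy:

    #include <stdio.h>

    #define N_PROCS 16            /* number of processors (made up)  */
    #define M_COPY  (4UL << 20)   /* memory per program copy: 4 MB   */

    int main(void)
    {
        unsigned long replicated  = N_PROCS * M_COPY; /* n full copies */
        unsigned long distributed = M_COPY;   /* wild guess: same order
                                                 as one copy           */

        printf("n copies on n processors : %lu bytes\n", replicated);
        printf("1 program on n processors: %lu bytes\n", distributed);
        printf("extra memory to buy      : factor of %d\n", N_PROCS);
        return 0;
    }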



-- 

	Hubert Matthews

matloff%crow.Berkeley.EDU@ucbvax.Berkeley.EDU (Norman Matloff) (05/03/89)

In article <5374@hubcap.clemson.edu> mcvax!cernvax!hjm@uunet.UU.NET (Hubert Matthews) writes:
>One thing that I didn't see mentioned (perhaps someone said it and I
>missed it) in the Distributed Simulation discussion is the subject of
>memory requirements, or, roughly translated, cost.  The discussion is
>about whether it is "better" to run several simulations in parallel or
>to spread one algorithm over several processors.

Several people mentioned it.  I made the claim that "most" queueing
applications don't have large memory requirements, so it's not an
issue in those cases.  Typically the largest component of memory usage
is the event list, whose length is approximately equal to the number
of servers  --  on the order of tens, or at most hundreds, in
"typical" applications.  That is not a large memory requirement at
all.
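
For concreteness, here is a minimal sketch of the kind of event list I
mean.  The struct layout and the figure of 100 servers are
illustrative choices, not taken from any particular simulator:

    #include <stdio.h>

    /* One pending event per busy server; in "typical" queueing models
       the list length is about the number of servers. */
    struct event {
        double time;        /* simulated completion time */
        int    server;      /* which server fires        */
    };

    #define N_SERVERS 100   /* illustrative: "hundreds" at most */

    int main(void)
    {
        /* A few hundred entries of roughly 16 bytes each, i.e. a few
           KB  --  negligible next to the RAM on any node. */
        printf("event list: %lu bytes\n",
               (unsigned long)(sizeof(struct event) * N_SERVERS));
        return 0;
    }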

   Norm

braner@tcgould.tn.cornell.edu (Moshe Braner) (05/08/89)

Most distributed-memory parallel machines seem to come with a lot of
RAM (1 MB to 4 MB per processor).  Once you have paid for that, it is
frequently more efficient  --  in the sense of keeping the available
hardware busier  --  to run several separate simulations, one on each
node, than to run one distributed simulation.  And of course it's
easier to program...  Anyway, let's not generalize: some people's
programs do need a lot of RAM, and will not run at all on typical
non-parallel machines...
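
A minimal sketch of that replicated scheme, assuming only that each
node can be handed its own id (here on the command line) so that the
runs use independent random streams; run_simulation() is a
hypothetical stand-in for whatever model is actually being run:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical model: average of 1000 uniform draws. */
    static double run_simulation(void)
    {
        double sum = 0.0;
        int i;
        for (i = 0; i < 1000; i++)
            sum += rand() / (double)RAND_MAX;
        return sum / 1000.0;
    }

    int main(int argc, char **argv)
    {
        int node = (argc > 1) ? atoi(argv[1]) : 0;

        srand(12345 + node);   /* crude per-node seeding; good enough
                                  for a sketch, not for real work    */
        printf("node %d: result %f\n", node, run_simulation());
        return 0;
    }

The same binary is loaded on every node; no communication is needed
until the per-node results are collected at the end.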

- Moshe