[comp.arch] Architectural analysis of RPM-40 fo

kelly@uxe.cso.uiuc.edu (03/28/88)

/* Written  7:41 am  Mar 23, 1988 by davidsen@steinmetz.steinmetz.ge.com in uxe.cso.uiuc.edu:comp.arch */
I have heard from a lot of people who say that they regularly run
programs which take days of CPU to complete. What surprises me is that
these were commercial installations in some cases. I guess if salaries
are low enough it's cheaper to wait for results than to buy a little
time on a <favorite big computer>.
-- 
	bill davidsen		(wedu@ge-crd.arpa)
  {uunet | philabs | seismo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me
/* End of text from uxe.cso.uiuc.edu:comp.arch */

A lot depends on the circumstances and the type of job you want to run.
We have a Sun 3/280 available for most of our computing tasks, and
we also have access to a Cray X-MP at our local supercomputing center.
We have found that the Cray is only 15-30 times faster than the Sun
for jobs involving a lot of I/O, and that it is 60 to 100 times faster
for jobs that are fairly well vectorized.  I don't think I have ever
signed on to that Cray and found fewer than 30 jobs in its run queue,
and it typically has about 120 jobs listed as idle, waiting for
memory, etc.
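
	As a rough illustration of why the I/O-heavy jobs see the smaller
speedup, here is a back-of-envelope Amdahl-style estimate in C.  The
I/O fraction and the two speedups in it are assumptions picked for
illustration, not measurements from our site:

#include <stdio.h>

int main(void)
{
    double io_fraction = 0.05;   /* assumed share of Sun run time spent on I/O */
    double cpu_speedup = 100.0;  /* assumed Cray speedup on the compute part   */
    double io_speedup  = 2.0;    /* assumed Cray speedup on the I/O part       */

    /* Cray run time as a fraction of the Sun run time */
    double cray_time = io_fraction / io_speedup +
                       (1.0 - io_fraction) / cpu_speedup;

    printf("overall speedup with %.0f%% I/O: about %.0fx\n",
           io_fraction * 100.0, 1.0 / cray_time);
    return 0;
}

With even a 5% I/O share that speeds up very little, the overall factor
drops from ~100x into the 15-30x range mentioned above.
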
	Under these circumstances, it doesn't make sense to run jobs
on the Cray unless they can take good advantage of its vector hardware
or unless the jobs need to create large temporary files (>100 Mbytes).
Other jobs typically take about the same length of time to run on both
machines.  In this case the Sun actually works out better, because we
have direct control over when it is up and running.
	I would imagine that these observations are true for other
mainframes and supercomputers.  The machines are so expensive that you
have to keep so many users and jobs running on them that the actual
turnaround time for typical jobs is about the same as on a fast
workstation.
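
	To put the turnaround argument in similarly concrete terms, here
is another rough sketch.  The job size, effective speedup, and queue
wait below are made-up numbers, not figures from either machine:

#include <stdio.h>

int main(void)
{
    double sun_run_hours    = 10.0;  /* assumed CPU time needed on the Sun  */
    double cray_speedup     = 20.0;  /* assumed effective speedup           */
    double queue_wait_hours =  9.0;  /* assumed wait behind the other jobs  */

    double cray_run_hours = sun_run_hours / cray_speedup;

    printf("Sun turnaround:  %4.1f hours (starts right away)\n",
           sun_run_hours);
    printf("Cray turnaround: %4.1f hours (%.1f queued + %.1f running)\n",
           queue_wait_hours + cray_run_hours, queue_wait_hours,
           cray_run_hours);
    return 0;
}

Unless the speedup is very large or the queue is short, the wall-clock
times come out about the same, even though the big machine burns far
fewer CPU hours.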