[comp.parallel] Parallel machines by 1995

djc@mbunix.mitre.org (Cazier) (02/10/90)

What is the future of parallel machines in terms of replacing mainframes
handling large user communities?  The marketing ploy today seems the
same as yesterday in terms of number of users (UNIX world). If I ask
a vendor about the possibility of using their parallel system to
handle 350 users and support a 60 GB disk farm, they choke.

The parallel systems seem to be geared toward compute intensive applications
rather than interactive use of lots of users. If that's so, then how in the
world is UNIX, for example, going to grab the fancy of MIS interests?

My scalar mind says that the modularity of parallel systems makes them a
good choice for companies that need growth flexibility.

One other "problem" that I've noticed with some parallel systems is that
the CPUs don't load balance jobs. What a waste of CPU resources! Or
am I being too harsh?
--
Jacques Cazier (713)-333-0966
{decvax,philabs}!linus!mbunix!jak or jak@mbunix.mitre.org

eugene@eos.arc.nasa.gov (Eugene Miya) (02/12/90)

>What is the future of parallel machines in terms of replacing mainframes
>handling large user communities?

Mainframes: Parallel COBOL.  What a concept!  See below.

>The marketing ploy today seems the
>same as yesterday in terms of number of users (UNIX world). If I ask
>a vendor about the possibility of using their parallel system to
>handle 350 users and support a 60 GB disk farm, they choke.

You have several issues here: architecture, storage, software. Do not
confuse them.  Our all-Unix installation (there are other, non-Unix
machines at Ames) has 256 GB front-end mass storage (disk) and 2 TB
(1 TB for development) back end.  I think LLNL has 5 TB (non-Unix);
other sites have more (black).

>The parallel systems seem to be geared toward compute intensive applications
>rather than interactive use of lots of users. If that's so, then how in the
>world is UNIX, for example, going to grab the fancy of MIS interests?

MIS is largely O(n) in storage resource, with algorithmic time of some
higher order but still finite and modest.  Parallel systems are largely
driven by "scientific" computing requirements, which are >O(n^p) in many
cases in both time and space.  Some problems use Cray-years of CPU or use
special processors.  Parallel systems are interesting things to study in
their own right.
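A toy illustration of that asymptotic gap (my own sketch, not from the posting): an MIS-style pass over n records is O(n) in time, while a dense n-by-n matrix multiply, a staple scientific kernel, is O(n^3).

```python
def scan_records(records):
    # MIS-style workload: one pass over n records -- O(n) time,
    # O(n) storage.
    return sum(records)

def matmul(a, b):
    # Scientific-style workload: dense n x n matrix multiply --
    # O(n^3) time for the triple loop below.
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
```

For n records the scan does n additions; the multiply does on the order of n^3 multiply-adds, which is why scientific codes, not transaction processing, drive the appetite for parallel cycles.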

>My scalar mind says that the modularity of parallel systems makes them a
>good choice for companies that need growth flexibility.

Use care.  This is a partial fallacy.

>One other "problem" that I've noticed with some parallel systems is that
>the CPUs don't load balance jobs. What a waste of CPU resources! Or
>am I being too harsh?

Note the fusion analogy I mailed Steve, your moderator.  Worry about
efficiency later.  I.e., in some ways you are being harsh.  Don't
expect efficient automatic parallelism overnight.  Get the thing running first.
That is, for some, surprisingly hard.
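The load-balancing complaint above can be sketched in a few lines (my own illustration, not anything a 1990 vendor shipped): instead of statically binding jobs to CPUs, each idle worker pulls the next job from a shared queue, so no processor sits idle while another is backlogged.

```python
import queue
import threading

def worker(jobs, results):
    # Each worker pulls the next job from the shared queue as soon as it
    # goes idle -- simple dynamic load balancing, in contrast to
    # statically assigning each CPU a fixed slice of the job list.
    while True:
        try:
            n = jobs.get_nowait()
        except queue.Empty:
            return
        results.append(n * n)  # stand-in for real work on job n

jobs = queue.Queue()
for n in range(10):
    jobs.put(n)

results = []
workers = [threading.Thread(target=worker, args=(jobs, results))
           for _ in range(4)]
for t in workers:
    t.start()
for t in workers:
    t.join()
```

With a static split, one worker stuck with the longest jobs becomes the bottleneck; the shared queue lets short jobs fill in around long ones, which is the efficiency Cazier finds missing.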

I've been collecting predictions of when 1 teraFLOPS will be commercially
available: they run from 1993 to never, with 2025 the latest dated guess.
The distribution is interesting; I won't say the mean, as that would bias
the sample.  But the people who work for companies that build very fast
machines give much more conservative estimates than those who don't.  I will
say the majority are between 1995 and 2005 (the mean not being 2000).
Everyone says that this is only achievable in parallel, but how,
no one knows.

Another gross generalization from

--eugene miya, NASA Ames Research Center, eugene@aurora.arc.nasa.gov
  resident cynic at the Rock of Ages Home for Retired Hackers:
  "You trust the `reply' command with all those different mailers out there?"
  "If my mail does not reach you, please accept my apology."
  {ncar,decwrl,hplabs,uunet}!ames!eugene
  Do you expect anything BUT generalizations on the net?