[comp.sys.sequent] What? I'm confused. Sequent: their strengths & weaknesses

eugene@eos.UUCP (Eugene Miya) (09/09/89)

In article <17580@bellcore.bellcore.com> johno@dduck.UUCP (John OBrien) writes:
>A couple of answers here.
>The sequent architecture does automatic multiprocessing
>across the available processors.
>The "Parallel Processing" is not done automatically!

Oh?      What's the difference?  [*if you think this terminology is bad,
I can refer you to other vendors with bad terminology...*]

Another gross generalization from

--eugene miya, NASA Ames Research Center, eugene@aurora.arc.nasa.gov
  resident cynic at the Rock of Ages Home for Retired Hackers:
  "You trust the `reply' command with all those different mailers out there?"
  "If my mail does not reach you, please accept my apology."
  {ncar,decwrl,hplabs,uunet}!ames!eugene
  				Live free or die.

rsk@boulder.Colorado.EDU (Rich Kulawiec) (09/09/89)

In article <5053@eos.UUCP> eugene@eos.UUCP (Eugene Miya) writes:
>In article <17580@bellcore.bellcore.com> johno@dduck.UUCP (John OBrien) writes:
>>The sequent architecture does automatic multiprocessing
>>across the available processors.
>>The "Parallel Processing" is not done automatically!
>
>Oh?      What's the difference?  [*if you think this terminology is bad,
>I can refer you to other vendors with bad terminology...*]

Here's the deal:

If you and I each launch a dozen processes or so on a ten processor
machine, the kernel scheduler worries about which to run where, and
silently handles getting our 24 jobs done on 10 processors.
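
For concreteness, something like the plain C below is all it takes;
there is nothing Sequent-specific in it, and the kernel simply farms
the children out to whatever processors are free (a rough sketch, not
lifted from any manual):

/*
 * A dozen ordinary, independent processes; the kernel is free to
 * spread them across however many processors it has.
 */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define NJOBS 12

static void crunch(int id)
{
    long i, sum = 0;

    for (i = 0; i < 10000000L; i++)     /* purely CPU-bound busywork */
        sum += i % (id + 1);
    printf("job %d done (sum=%ld)\n", id, sum);
}

int main(void)
{
    int j;

    for (j = 0; j < NJOBS; j++) {
        if (fork() == 0) {              /* child: one independent job */
            crunch(j);
            _exit(0);
        }
    }
    for (j = 0; j < NJOBS; j++)         /* parent: reap the children  */
        wait((int *) 0);
    return 0;
}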

However, if I want to run a DSP application in parallel on 4 processors,
then I have to embed the appropriate parallel directives in my code
at compile time, i.e., the compilers won't generate parallel code for me.
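
On a Sequent that embedding usually means the parallel programming
(microtasking) library.  Here is a sketch of the shape of it, with the
m_set_procs()/m_fork() style of calls written from memory; header
names and exact signatures may be off, so check the DYNIX manuals:

/*
 * Explicit parallelism sketched with DYNIX-style microtasking calls.
 * Treat the headers and call names as an outline from memory, not
 * gospel.
 */
#include <stdio.h>
#include <parallel/microtask.h>   /* m_set_procs(), m_fork(), ...      */
#include <parallel/parallel.h>

#define N      1000000
#define NPROCS 4

float wave[N];                    /* global data: shared among the     */
                                  /* workers under this model, as I    */
                                  /* recall                            */

void filter_slice()               /* each worker filters its own slice */
{
    int me     = m_get_myid();
    int nprocs = m_get_numprocs();
    int lo = me * (N / nprocs);
    int hi = (me == nprocs - 1) ? N : lo + (N / nprocs);
    int i;

    /* skip the first element of each slice so slices stay independent */
    for (i = lo + 1; i < hi; i++)
        wave[i] = 0.5 * (wave[i] + wave[i - 1]);
}

int main()
{
    int i;

    for (i = 0; i < N; i++)
        wave[i] = (float)(i % 256);

    m_set_procs(NPROCS);          /* ask for 4 processors              */
    m_fork(filter_slice);         /* run filter_slice() on all of them */
    m_kill_procs();               /* release the worker processes      */

    printf("wave[42] = %f\n", wave[42]);
    return 0;
}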

---Rsk

csg@pyramid.pyramid.com (Carl S. Gutekunst) (09/09/89)

In article <11500@boulder.Colorado.EDU> rsk@boulder.Colorado.EDU (Rich Kulawiec) writes:
>Here's the deal:
>
>If you and I each launch a dozen processes or so on a ten processor
>machine, the kernel scheduler worries about which to run where, and
>silently handles getting our 24 jobs done on 10 processors.

In other words, the answer to the original question is, "the Sequent works
just like your NCR system." The big difference is the Sequent (and Arete,
Encore, Elxsi, and Pyramid) run symmetric multiprocessors, while NCR (and
CCI, DEC, and a bunch of others) run master/slave. 

The above machines are all general purpose multi-user UNIX boxes that simply
use multiple CPUs as a cost effective way to boost multi-tasking performance.
Generally, if your objective is to do "real" parallel processing, buy a
machine designed for the job: Multiflow, BBN Butterfly, et al. 

<csg>

rro@bizet.CS.ColoState.Edu (Rod Oldehoeft) (09/09/89)

In article <11500@boulder.Colorado.EDU> rsk@boulder.Colorado.EDU (Rich Kulawiec) writes:
>In article <5053@eos.UUCP> eugene@eos.UUCP (Eugene Miya) writes:
>>In article <17580@bellcore.bellcore.com> johno@dduck.UUCP (John OBrien) writes:
>>>The sequent architecture does automatic multiprocessing
>>>across the available processors.
>>>The "Parallel Processing" is not done automatically!
>>
>>Oh?      What's the difference?  [*if you think this terminology is bad,
>>I can refer you to other vendors with bad terminology...*]
>
>Here's the deal:
>
>If you and I each launch a dozen processes or so on a ten processor
>machine, the kernel scheduler worries about which to run where, and
>silently handles getting our 24 jobs done on 10 processors.
>
>However, if I want to run a DSP application in parallel on 4 processors,
>then I have to embed the appropriate parallel directives in my code
>at compile time, i.e., the compilers won't generate parallel code for me.
>
>---Rsk

And, if you and I each launch a parallelized application involving 10
processes on a 16 processor machine, the scheduler does _not_ assure
that either all my processes or all your processes are running.  In
this situation a process can spend an entire time slice waiting busily
for a lock to clear while the process that could clear it is not
occupying a processor.  This leads to poor and erratic performance.
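
The pathology is easy to demonstrate even without a real lock: one
process spins on a word in shared memory while the process that will
set it isn't running.  A self-contained sketch in plain SysV shared
memory, nothing Sequent-specific; the sleep() just stands in for the
holder being off the processor:

/*
 * The busy-wait pathology: the parent spins on a flag in shared
 * memory, burning its time slices, while the child that will set the
 * flag is not running.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>

int main(void)
{
    int shmid = shmget(IPC_PRIVATE, sizeof(int), IPC_CREAT | 0600);
    volatile int *flag;
    long spins = 0;

    if (shmid < 0) {
        perror("shmget");
        exit(1);
    }
    flag = (volatile int *) shmat(shmid, (void *) 0, 0);
    *flag = 0;

    if (fork() == 0) {        /* the "lock holder"...                 */
        sleep(2);             /* ...off the processor for a while     */
        *flag = 1;            /* finally "releases the lock"          */
        _exit(0);
    }

    while (*flag == 0)        /* the waiter spins the whole time,     */
        spins++;              /* accomplishing nothing                */

    printf("spun %ld times before the flag was set\n", spins);
    wait((int *) 0);
    shmctl(shmid, IPC_RMID, (struct shmid_ds *) 0);
    return 0;
}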

But wait, Sequent provides the capability for tacking a process to a
processor (preempted by only a few system activities), thereby
improving single-application performance (and making it repeatable).
This is called "processor affinity."
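
If memory serves, the DYNIX call for this is tmp_affinity(); I'm
writing the name from memory, so treat it as an assumption and check
the manual page on your system.  The shape of the usage is the point:

/*
 * Pinning a process to a processor.  tmp_affinity() is the call name
 * as I remember it on DYNIX; if your release spells it differently,
 * substitute the call from your own affinity manual page.
 */
#include <stdio.h>

extern int tmp_affinity();        /* assumed DYNIX affinity call   */

int main()
{
    if (tmp_affinity(2) < 0) {    /* tack this process onto CPU 2  */
        perror("tmp_affinity");
        return 1;
    }

    /* ... run the timing-sensitive part of the application here ... */

    printf("running tacked to processor 2\n");
    return 0;
}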

At LLNL software was developed in 1986-87 that scheduled "gangs" of processes
together, preempting entire gangs when a new gang appeared or when
a looooong time slice elapsed.  This is a very nice approach if
generalized to schedule more than one gang when possible.  Sequent
spoke of doing this, but I know nothing of the status of this thing.


Rod Oldehoeft                    Email: rro@CS.ColoState.EDU
Computer Science Department      Voice: 303/491-5792
Colorado State University        Fax:   303/491-2293
Fort Collins, CO  80523

johno@dduck.ctt.bellcore.com (John OBrien) (09/11/89)

In article <5053@eos.UUCP> eugene@eos.UUCP (Eugene Miya) writes:

>>The sequent architecture does automatic multiprocessing
>>across the available processors.
>>The "Parallel Processing" is not done automatically!
>
>Oh?      What's the difference?  [*if you think this terminology is bad,
>I can refer you to other vendors with bad terminology...*]

A reasonable question.  Here's the definition I've been using to explain
the terms "multi-processing" and "parallel-processing."  They've worked
well in the context of my talks, papers, etc.  Of course, they can
be generalized to death if you like.

"multi-processing" occurs when several processors within the same architecture
are running different, loosely coupled processes or programs.  These
programs/processes are NOT closely related to each other, although they may
communicate with each other via IPC or shared memory or such.

"Parallel Processing" is the decomposition of a single program or process
into several smaller, tightly coupled processes.  These are closely related
and use more sophisticated techniques for IPC, such as spin locks,
barriers, etc.

As stated above, either of these definitions can be extended to cover the
other if you please.  I show a picture with these definitions to clear
up any ambiguity in my meaning.  The multi-processing one shows several
processes running disjoint on several processors.  The parallel processing
one shows a large program being broken up, run, and then being put back
together.
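
That picture translates almost directly into code.  A vendor-neutral
sketch, using plain fork() and SysV shared memory rather than any one
vendor's library: the job is broken into pieces, the pieces run as
processes sharing memory, and the parent puts the results back
together:

/*
 * One job broken into pieces, run as tightly coupled processes that
 * share memory, and put back together at the end.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>

#define N      1000000
#define NPROCS 4

int main(void)
{
    int shmid = shmget(IPC_PRIVATE, NPROCS * sizeof(double),
                       IPC_CREAT | 0600);
    double *partial = (double *) shmat(shmid, (void *) 0, 0);
    double total = 0.0;
    int p;

    if (shmid < 0 || partial == (double *) -1) {
        perror("shm");
        return 1;
    }

    for (p = 0; p < NPROCS; p++) {
        if (fork() == 0) {              /* one piece of the job        */
            long i, lo = (long) p * (N / NPROCS);
            long hi = (p == NPROCS - 1) ? N : lo + N / NPROCS;
            double sum = 0.0;

            for (i = lo; i < hi; i++)   /* the "real work"             */
                sum += 1.0 / (i + 1);
            partial[p] = sum;           /* deposit the partial result  */
            _exit(0);
        }
    }

    for (p = 0; p < NPROCS; p++)        /* join: wait for every piece  */
        wait((int *) 0);
    for (p = 0; p < NPROCS; p++)        /* put it back together        */
        total += partial[p];

    printf("sum = %f\n", total);
    shmctl(shmid, IPC_RMID, (struct shmid_ds *) 0);
    return 0;
}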



John O'B


John J. O'Brien
ISCP (Integrated SCP)
ctt!johno or johno@ctt
RRC 4B-307
699-8788

johno@dduck.ctt.bellcore.com (John OBrien) (09/11/89)

In article <83733@pyramid.pyramid.com> csg@pyramid.pyramid.com (Carl S. Gutekunst) writes:
>In article <11500@boulder.Colorado.EDU> rsk@boulder.Colorado.EDU (Rich Kulawiec) writes:

>Encore, Elxsi, and Pyramid) run symmetric multiprocessors, while NCR (and
>CCI, DEC, and a bunch of others) run master/slave. 

Remove DEC from the above list for the 6000 series (62x0, 63x0, and 64x0).
The 8650 and 8800 are master/slave.

>Generally, if your objective is to do "real" parallel processing, buy a
>machine designed for the job: Multiflow, BBN Butterfly, et al. 

Depends; much of my work has centered on the use of parallel processing
in "commercial" applications.  That is, applications that traditionally
were not thought of as "parallel processing" applications.  In that light,
use of the "traditional" parallel processing systems is not necessarily the
best choice.  Bunches of 1-bit processors don't do me a whole lot of good
if my application is not extremely computationally intensive.




John O'B


John J. O'Brien
ISCP (Integrated SCP)
ctt!johno or johno@ctt
RRC 4B-307
699-8788

rich@sendai.sendai.ann-arbor.mi.us (K. Richard Magill) (09/12/89)

Parallel vs. multiple: I don't know about the strict definitions, but
the following are practical differences.

Both types of systems have multiple CPUs.  The first question to ask
is whether any CPU can do I/O, or whether only one particular CPU can
do I/O (the latter is clearly a master/slave system).

The second question is rather complex.  You want to know whether there
is a single run queue or one per processor, and when processes are
assigned to processors.  A clue to this is how many processors can be
in kernel code at one time.  Some systems have multiple run queues but
only assign processes to processors on fork(2).  Others have a single
run queue and assign processes to processors at run time, i.e.,
processes can play shell games (like the old three-card monte) across
all of the available processors.

Pretty much any other attempt to use the parallelism beyond this
requires operating system extensions beyond 4.3BSD, and sometimes
compiler support.  There is currently debate as to whether a compiler
"should" produce code that implements parallel algorithms rather than
single-threaded code.  The rudimentary facilities for experimenting
with these kinds of algorithms are available on DYNIX.

The real (read: practical) win of Sequent these days is its
robustness.  In terms of processing power, it isn't a very deep
machine, i.e., no single thread runs very fast, but it is a *very* wide
machine.  That is, you can put large amounts of memory into it, the
scheduler is *much* better than the standard BSD or USG schedulers,
the disks can be *very* fast so paging is lower overhead than on most
machines, and performance degrades very gracefully.

Sequent is the kind of machine you want if you need to support
hundreds of users at the same time with relatively small amounts of
average CPU per user.

I should also point out that Sequent hardware and software are noted in
the industry for their solidity.  That is, very few bugs, and the stuff
doesn't break very often.

(if you are interested in the down side, let me know.  I can flame
them into the ground as well as plug them.)

xoxorich.