[comp.sys.sequent] Miscellaneous Comments and Inquiries

rsk@j.cc.purdue.edu (Wombat) (05/26/87)

==========
Item 1:
==========
From: suhler@im4u.utexas.edu (Paul A. Suhler)
Subject: Re: Request for Sequent experiences (Msg. 2.1)
Organization: Univ. of Texas Elec & Comp Engr Dept.

I've been using the UT CS Dept's Sequent Balances for the last year and
a half.  Originally there was a 12-proc Balance 8000 used primarily for
Unix cycles.  Recently this was upgraded to a 16-proc B21000 and a 10-proc
B21000 for research only was added.  My work has involved implementing our
task-level data flow language and using that for measuring parallel program
performance.
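
(For concreteness, here is roughly what a small parallel program looks like
on the Balance when written against Sequent's microtasking library rather
than our data flow system.  This is only a sketch: the m_fork() family of
calls is as I remember it, and the header path and -lpps library name are
guesses that should be checked against the Dynix manuals.)

    /*
     * Sketch of a minimal Dynix microtasking program.
     * Header and link names are assumptions -- check the manuals.
     * Compile (roughly):  cc -o hello hello.c -lpps
     */
    #include <stdio.h>
    #include <parallel/microtask.h>   /* m_fork(), m_set_procs(), ... (assumed path) */

    void
    worker()
    {
        /* Each cooperating process reports its id. */
        printf("hello from process %d of %d\n",
            m_get_myid(), m_get_numprocs());
    }

    main()
    {
        m_set_procs(4);     /* ask for four cooperating processes */
        m_fork(worker);     /* run worker() in each of them       */
        m_kill_procs();     /* tear the children back down        */
        return 0;
    }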

Sequent originally designed their machines as Unix machines, not as
research vehicles.  In that respect, they are excellent, having very
reliable hardware and software.  We've had only a couple of crashes
(if that many) in two years.

The drawbacks for research are that a lot of the things you need to do
are taken out of your hands by the Dynix operating system unless you're
the superuser.  Some things I'd like to do that I can't are to lock
processes onto processors, flush caches, and change virtual memory
tuning parameters.  All of these require superuser privileges, which
also give you the run of the machine, including the ability to mess up
other people's files.  Dynix needs an intermediate level of privileges that permits
some groups to do these things, but not the other superuser stuff.
As it is, the normal user always has the operating system in the
way, preempting processes, changing process priorities, running daemons,
etc.
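
(To make the privilege wall concrete, here is a minimal sketch using nothing
Sequent-specific, just the standard 4.2BSD setpriority(2) call: asking for a
negative nice value, so a measurement run is preempted less readily, simply
fails for an ordinary user.)

    /*
     * Sketch: an ordinary user bumping into the privilege wall.
     * On a 4.2BSD-style system such as Dynix, raising priority
     * (a negative nice value) requires superuser privileges.
     */
    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/resource.h>

    main()
    {
        /* Try to run this process at high priority for a timing run. */
        if (setpriority(PRIO_PROCESS, 0, -10) < 0)
            perror("setpriority");   /* permission denied for ordinary users */
        else
            printf("now running at nice %d\n",
                getpriority(PRIO_PROCESS, 0));
        return 0;
    }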

There's also been a need for a single high-resolution clock to serve
as a source of timestamps; the 10-ms Unix clock isn't nearly fine enough.
Sequent came up with a kludge involving counters on the CPU cards to
give 100-usec time, but the counters tend to lose synch.  They've
also built a 1-usec timer card (just what we need) but we haven't
seen it yet.
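
(The timestamps themselves are easy enough to grab; gettimeofday(2) reports
in microseconds.  The catch, as above, is that the values are only as fine
as the clock behind them.  A minimal sketch of how we'd time a region:)

    /*
     * Sketch: timing a region of code with gettimeofday(2).
     * The struct timeval carries microseconds, but the usable
     * resolution is whatever the underlying clock provides
     * (10 ms normally; 100 usec with the CPU-card counters).
     */
    #include <stdio.h>
    #include <sys/time.h>

    main()
    {
        struct timeval t0, t1;
        long usec;
        int i, x = 0;

        gettimeofday(&t0, (struct timezone *) 0);
        for (i = 0; i < 1000000; i++)    /* dummy work standing in for */
            x ^= i;                      /* the region being measured  */
        gettimeofday(&t1, (struct timezone *) 0);

        usec = (t1.tv_sec - t0.tv_sec) * 1000000L
             + (t1.tv_usec - t0.tv_usec);
        printf("elapsed: %ld usec  (x = %d)\n", usec, x);
        return 0;
    }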

I suspect that the Encore machine suffers from some of the same O/S
interference problems as Sequent's, but I have no actual experience
programming that machine.  We also have a six-processor Flex 32,
but as it runs AT&T System V Unix, our Berkeley-oriented (and otherwise
overworked) system staff is loath to work on it.  The Flex is
more of a bare-bones machine, which means that you have lots of
control over the hardware, but weak software to support you.

You might write to Jit Biswas (jbiswas@im4u.UTEXAS.EDU), as he's
worked more with the Flex than anyone else here.

Good luck; I hope this has been helpful.

==========
Item 2:
==========
From: "Michael D. Kersenbrock" <michaelk%copper.tek.com@RELAY.CS.NET>
Subject: Re: Request for Sequent experiences (Msg. 2.1)
Organization: Tektronix, Inc., Beaverton, OR.

The machine I'm writing to you on ("copper") is a Sequent Balance 8000 that
has 6 processors in it.  We typically have between 50 and 60 users on it
during the day, and we have virtually none of the "CPU-loading problem"
our Vaxen (780's) had, at least before everybody went over to the Sequent
because it would actually echo characters as you typed them.  A loaded Vax
tends to pause now and then, falling as much as half a line of type-ahead
behind before it echoes.

Our "make" has a "-P" option that multi-processes the "make".  Real nice.

I'm just a (very happy) user, not a system administrator, so I can't
really say how it is to "run".  I can say that, among my peers,
most people prefer "copper" over our two Vax 780's, our two Vax 750's,
or our many MicroVaxes (for most tasks).  Dynix seems to be easy to port
to.  I usually just compile things as if I were under Berkeley Unix, and it
all works fine the first time.  I understand that our recent upgrade of
Dynix has AT&T Unix compatibility stuff partially implemented.

==========
Item 3:
==========
From: dartvax!cmi.UUCP@seismo.css.gov (Theo Pozzy/R. Green)
Subject: Re: Request for Sequent experiences (Msg. 2.1)
Organization: Corporate Microsystems, Inc.

We at Corporate Microsystems have a software package named
MLINK (general-purpose asynchronous communications).  We have
ported it to a large number of different Unix and Xenix systems.
As the person in charge of porting, I have a good idea what
the performance of a lot of these systems is.

We recently ported our software to the Sequent Balance 21000,
running 6 processors.  For the fun of it, I modified our
"makefile" to take advantage of the parallel processing (they
have a modified "make").  We started the compiles, and they
finished in an unbelievable 4 minutes and 55 seconds, which
was fully twice as fast as any other system I've ever seen,
including a slew of Vaxen, Goulds, and other systems.
I have been thoroughly impressed with the system, and
they have done a good job of tackling the sticky BSD/SYS5
support (they run both, simultaneously).

==========
Item 4:
==========
From: George Cross <cross%wsu.csnet@RELAY.CS.NET>
Subject: Sequent vs. Encore

Hi,
  We are considering two roughly comparable Sequent and Encore models;
eight 32032 processors each, around $150K.  Would appreciate any comments on
these machines including maintenance response, quality of the 4.2
port, and reliability from users.  Planned use is for parallel
processing research with a small number of knowledgeable users. If I
get enough responses, I will share them.  Thanks.
 
==========
Item 5:
==========
From: "Vic Abell" <abe@j.cc.purdue.edu>
Subject: Re: Sequent vs. Encore
Organization: Purdue University Computing Center

We have two Sequent B21K systems.  The main one is used for undergraduate
instruction and has 12 processors, 24 meg of memory and 3 single Eagles.
It has recently recorded simultaneous login counts of 140+ with load
averages of 3 and under.  Our second B21K will be a printer server.  The
department of Computer Sciences at Purdue also has a B21K that they
are using for research into parallel computing.

We are very pleased with the hardware and software.  The 4.2BSD port is
excellent and we found movement of 4.2 programs from a VAX-11/780 very
easy.  It's a bit harder now that the VAXes are running 4.3BSD, but not
much.

Vic Abell, Assistant Director

==========
Item 6:
==========
From: Scott Menter <escott%deis.UCI.EDU@rome.uci.edu>
Subject: Sequent Users' Group:  Where to Turn
Organization: UCI ICS Computing Support Group

Some of you may have received, as I did, an invitation to join the
Sequent Users' Group.  This invitation included a letter and an
application form, but unfortunately, in my case, it didn't include an
indication of where to return the application!  So I made 2 calls and
got the info: you return it to Sequent:

	Sequent Computer Systems, Inc.
	15450 SW Koll Parkway
	Beaverton, Ore 97006-6063

	Attn: Users' Group

Did anybody else get this, and not get a return envelope?

==========
Item 7:
==========
From: galen@uxa.cso.uiuc.edu (galen arnold)
Subject: sequent

  My name is Galen Arnold.  I am an operator at the University of Illinois.
I operate a Sequent Balance 8000 configured with six processors.  The machine
as configured supports several computer science courses.  It operates well
under a load (about 50 users) and does not bog down like a single-CPU machine.
  The main drawback is that Sequent does not give you the source code for the
Unix routines, only the executables.  If you need to modify things you may
have to write your own.
  Overall the machine is a good performer.  It blows away the Pyramid 90x's
standing next to it when there are a lot of users on the machines.

==========
Item 8:
==========
From: Scott Menter <escott%deis.UCI.EDU@ROME.UCI.EDU>
Subject: Heavily Loaded Balance Hardware
Organization: UCI ICS Computing Support Group

Hi.  Does anybody out there have a Balance system with the following
general features?

	o	Over 20 CPUs
	o	Over 125 simultaneous users

I'm interested in hearing about performance curves for such
configurations.  It'd be nice, too, if anybody's taken a look at
response times for a given number of users and CPUs, over various
amounts of memory.

Note that we already have a couple of Balance 21k systems here, so I'm
not looking for general comments;  I just need to know how they
perform when you start pushing the limits.  Did anybody, for example,
start with one chassis with lots and lots of CPUs, but decide to go
with the same MIPS spread over 2 or more chassis later?  I'm guessing
there might be an I/O bottleneck, which is why I'm only interested in
responses from sites with lots of simultaneous users.

I'll summarize, if there's interest.

By the way, I went to an LA area Usenix meeting yesterday where the
speaker was Steve [something] from Sequent.  The topic was the new
Symmetry hardware, due to ship (according to them) by 4th quarter this
year.  Looks pretty neat.  Apparently they've revised their estimates
of the CPU from 3 MIPS (where a VAX 11/780 == 1 MIPS) to 4 MIPS.

Thanks in advance for your responses -- it'll help us here make a
decision about new hardware.