DUGAL%ECL1@STAR.STANFORD.EDU (07/18/87)
We here at the University of Rhode Island are trying to review our options
for increasing the overall performance of our VAX 11/780.  One idea brought
up was to LAV the 780 to multiple uVAX IIs.  What exactly does this buy us?
Can we set up terminal servers to share the 780 and uVAXs just as we would
for a normal cluster of 780s?  Can we share disks and have one common
SYSUAF.DAT?  We've got three RM05s and one RA81 that we'd like to share
across systems.  Any help anyone can give me would be greatly appreciated.

+---------------------------------------------------------------------------+
| David G. Dugal     University of R.I., Engineering Computer Laboratory    |
| System Manager     306 Bliss Hall, Kingston, RI 02881     (401) 792-2488  |
|                                                                           |
| UUCP:         allegra!rayssd!uriecl!dugal                                 |
| Internet:     dugal%ecl1.SPAN@star.stanford.edu                           |
| SPAN/HEPnet:  ECL1::DUGAL  -or-  6334::DUGAL                              |
| PSI:          PSI%31103210735::ECL1::DUGAL                                |
| Easylink: 62926791    TWX: 650-299-5584    MCI Mail: 299-5584             |
+---------------------------------------------------------------------------+
LEICHTER-JERRY@YALE.ARPA (07/21/87)
> We here at the University of Rhode Island are trying to review our
> options for increasing the overall performance of our VAX 11/780.  One
> idea brought up was to LAV the 780 to multiple uVAX IIs.  What exactly
> does this buy us?
It buys you more CPU power.  The resulting system is suitable for multi-
stream applications in which dynamic load balancing is not an issue. That
is: You will be able to spread users and jobs across a number of CPU's, so
if your system is loaded up because it always has several jobs running at
once, you'll win. No one job will, in and of itself, run any faster (except
for lack of competition with other jobs); and once a job is created, it has
to stay on the system it was created on. The 780, since it will serve all
the disks to the LAVC, will continue to be a single point of failure - you
won't gain here.
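For what it's worth, the usual way to spread batch work around such a
cluster is a generic queue feeding a per-node batch queue on each member
(this assumes the job controller's queue file lives on a disk all members
can see).  A rough sketch - every node and queue name here is invented:

    $ ! One local batch queue per node, plus a generic queue that
    $ ! hands each job to whichever member queue is least busy.
    $ INITIALIZE/QUEUE/BATCH/ON=BIGVAX:: BIGVAX_BATCH
    $ INITIALIZE/QUEUE/BATCH/ON=UVAX1::  UVAX1_BATCH
    $ INITIALIZE/QUEUE/BATCH/GENERIC=(BIGVAX_BATCH,UVAX1_BATCH) SYS$BATCH
    $ START/QUEUE BIGVAX_BATCH
    $ START/QUEUE UVAX1_BATCH
    $ START/QUEUE SYS$BATCH
    $ SUBMIT/QUEUE=SYS$BATCH CRUNCH.COM  ! runs wherever there's room

Interactive users, as noted, stay on whatever node they logged in to.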
> Can we set up terminal servers to share the 780 and
> uVAXs just as we would for a normal cluster of 780s?  Can we share disks
> and have one common SYSUAF.DAT?  We've got three RM05s and one RA81 that
> we'd like to share across systems.
Yes to both of these. The LAVC software is identical to "big cluster" soft-
ware except at the very lowest levels, where it uses the Ethernet instead of
the CI.
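The common-SYSUAF trick, as I understand it, is just to point every node's
SYSUAF logical name at one copy on a disk they can all see - typically in
each node's SYS$MANAGER:SYLOGICALS.COM.  The device and directory below
are placeholders; substitute wherever you keep the file on the RA81:

    $ ! On every member, in SYLOGICALS.COM: one authorization
    $ ! file for the whole cluster, served from the 780.
    $ DEFINE/SYSTEM/EXEC SYSUAF DUA0:[SYSEXE]SYSUAF.DAT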
Note that disk accesses from the uVAXes will be somewhat slower than equiva-
lent access on the 780, though not by all that much. Since you are talking
about uVAXes, not VAXStations, you'll certainly have local disks; putting
swapping and paging files on them will help. You can put other stuff there,
too, and even serve the local disks to the cluster.
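Creating and installing page and swap files on a local disk is a SYSGEN
matter; something like this in each micro's site-specific startup should
do it (device name and sizes here are made up - scale to your memory):

    $ ! Run CREATE once to build the files; INSTALL runs at each boot.
    $ RUN SYS$SYSTEM:SYSGEN
    CREATE DUA1:[SYSEXE]PAGEFILE.SYS /SIZE=20000
    CREATE DUA1:[SYSEXE]SWAPFILE.SYS /SIZE=6000
    INSTALL DUA1:[SYSEXE]PAGEFILE.SYS /PAGEFILE
    INSTALL DUA1:[SYSEXE]SWAPFILE.SYS /SWAPFILE
    EXIT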
There will be some load on the 780 from serving the micros. How much depends
on how many micros you will have. You should experiment, but with more than
a couple of micros, your best configuration may be to not use the 780 for
interactive jobs at all - reserve it for disk serving and batch jobs.
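If you do end up reserving the 780 that way, one line in its startup will
keep interactive users off it without affecting batch or the disk serving:

    $ SET LOGINS/INTERACTIVE=0  ! 780 only: no new interactive logins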
(A configuration like this really brings home the advances in this field.
The 780 "feels" like the big machine in the configuration, but in fact it's
no faster and probably has about the same amount of memory as the micros. The
only way in which it's "big" is in available I/O bandwidth, and the use of
that bandwidth in faster, bigger disks.)
-- Jerry
carl@CITHEX.CALTECH.EDU (Carl J Lydick) (07/22/87)
> We here at the University of Rhode Island are trying to review our
> options for increasing the overall performance of our VAX 11/780.  One
> idea brought up was to LAV the 780 to multiple uVAX IIs.  What exactly
> does this buy us?  Can we set up terminal servers to share the 780 and
> uVAXs just as we would for a normal cluster of 780s?  Can we share disks
> and have one common SYSUAF.DAT?  We've got three RM05s and one RA81 that
> we'd like to share across systems.

The current configuration of the 780 was (apparently) not specified (I
haven't seen the original posting yet; just Jerry Leichter's response).
Here in high energy physics at Caltech, we faced a similar problem: our
780 running VMS had about the same responsiveness you'd expect from a
750 running UNIX (i.e., occasional 30-second waits for the $ prompt in
response to a carriage return at the DCL level, two-minute or greater
delays executing a simple login.com, and so forth).  We found the real
problem to be that since we acquired the VAX in the late 70's, VMS had
grown substantially, but our 780 hadn't.  A memory upgrade from 4 MB to
16 MB (8 would have been enough, but we managed to get two companies to
bid against each other and got 16 for the price originally quoted for 8)
roughly doubled throughput!  :-)
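Moral: before assuming you're CPU-bound, spend a few minutes confirming
you aren't memory-bound instead.  The standard utilities tell the story
(nothing below is site-specific):

    $ SHOW MEMORY      ! free-list and modified-list sizes at a glance
    $ MONITOR PAGE     ! page fault rates; watch the hard (I/O) faults
    $ MONITOR STATES   ! processes piling up in page wait is a bad sign

If the free list is tiny and hard faults are high, no amount of added
CPU will make the machine feel fast.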