[comp.sys.dec] LAVC and 11/730 and 11/750

scott@stl.stc.co.uk (Mike Scott) (01/21/88)

I am interested in the possibility of an LAVC with 2 uVAX IIs and 2 11/750s. I
am told DEC only support a 750 as a boot node, which is not a suitable
configuration for us. 

I cannot see why, apart from questions of bootstrapping the system, a 750
should not be a satellite machine. 

Can anyone throw any light on the problem please, and suggest how this can be
done? 
-- 
Regards. Mike Scott (scott@stl.stc.co.uk <or> ...uunet!mcvax!ukc!stl!scott)
phone +44-279-29531 xtn 3133.

newbery@comp.vuw.ac.nz (Michael Newbery) (01/26/88)

You can't use a 750 or 780 (or 730) as a satellite as they don't have the
microcode to boot themselves off the ether. 
-- 
Michael Newbery

ACSnet:	newbery@vuwcomp.nz  UUCP: newbery@vuwcomp
Une boule qui roule tue les poules.		(Landslides kill chickens)

scott@stl.stc.co.uk (Mike Scott) (02/02/88)

My thanks to all who have replied on this subject.

I'm afraid that, as a naive (in cluster terms) user, I asked the wrong
question.  The problems of boot and satellite nodes were not in fact
germane, as I want each machine to have its own system disk and to
use the cluster really just as a means of sharing disks. This is a
rather different problem from persuading a 750 to boot from a remote disk.

As some pointed out, this is easily possible, as the LAVC software
performs all the functions of a 'big' cluster. All you have to do is
set (manually) the SCSSYSTEMID, SCSNODE, MSCP, PE3, PE6 and
VAXCLUSTER SYSGEN parameters and reboot. Lo and behold, the machine
will cluster with any other with the same group and password! Each
machine will run its own system software and behave much as before,
except that all the disks can be made available to the cluster.
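
For anyone who wants to try the same thing, the SYSGEN session might look
something like the sketch below. The node name and system ID are invented
for illustration, and the exact parameter names and values may differ on
other VMS versions, so treat it as a rough guide rather than a recipe:

    $ RUN SYS$SYSTEM:SYSGEN
    SYSGEN> USE CURRENT               ! work on the current parameter set
    SYSGEN> SET SCSNODE "NODEA"       ! cluster node name (example only)
    SYSGEN> SET SCSSYSTEMID 1025      ! commonly DECnet area*1024 + node number
    SYSGEN> SET VAXCLUSTER 2          ! always join the cluster
    SYSGEN> SET MSCP 1                ! serve this machine's disks to the cluster
    SYSGEN> SET PE3 1                 ! same values as BOOT_CONFIG.COM uses
    SYSGEN> SET PE6 1
    SYSGEN> WRITE CURRENT             ! written parameters take effect at the next boot
    SYSGEN> EXIT
    $ ! ...then reboot, and the node should join the cluster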

It seems that it is possible to cluster any mix of VAXen in this way,
although I have in fact only tried it on uVAX IIs so far.

By the way, does anyone know what PE3 and PE6 actually do? I set them
both to 1, which is how BOOT_CONFIG.COM sets them. The other problems
noted were that SET CLUSTER/QUORUM=1 (on a two-node cluster) didn't
work, and that during shutdown the cluster quorum was not recalculated
in time to keep the sole remaining node functioning. (I gather the
latter problem 'will be fixed in a new release'; we're on 4.6.)
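
A possible workaround for the two-node quorum problem, assuming the VOTES
and EXPECTED_VOTES SYSGEN parameters behave on 4.6 as the cluster manual
describes (I have not actually tried this), is to give each node one vote
and an expected-votes figure of one, so that either node can hold quorum
by itself; the obvious risk is that the two nodes could carry on
partitioned if the Ethernet between them fails:

    $ RUN SYS$SYSTEM:SYSGEN
    SYSGEN> USE CURRENT
    SYSGEN> SET VOTES 1              ! each node contributes one vote
    SYSGEN> SET EXPECTED_VOTES 1     ! quorum of one, so a lone node carries on
    SYSGEN> WRITE CURRENT
    SYSGEN> EXIT
    $ ! reboot both nodes for the new quorum arithmetic to apply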

-- 
Regards. Mike Scott (scott@stl.stc.co.uk <or> ...uunet!mcvax!ukc!stl!scott)
phone +44-279-29531 xtn 3133.