[comp.unix.wizards] Unix on a VAXCluster ??

roland@sm.luth.se (Roland Hedberg) (02/24/88)

The subject says it all: is it possible to run Unix on a VAX cluster?
I have nearby a small cluster, one 11/750 and one 11/780, currently running
VAX/VMS 4.6, and have been asked if it would be possible to change to Unix
without breaking up the cluster.

I would prefer 4.3BSD but if that isn't possible I might consider something
else.

Has anyone tried?

-- Roland

=============================================================================
Internet : roland@umecs.umu.se , roland@omega.luth.se
Bitnet : rhg@seumdc51

jonathan@pitt.UUCP (Jonathan Eunice) (02/28/88)

In article <985@luth.luth.se> roland@sm.luth.se (Roland Hedberg) writes:
>is it possible to run Unix on a Vax cluster?

NO.   A VaxCluster is a confederation of Vax machines sharing common
resources.  It is implemented as an extension to VMS, plus some
connecting hardware.  It is not portable to other operating systems.

(This is not really a wizards question.  Followups directed to 
comp.unix.questions.)

------------------------------------------------------------------------------
Jonathan Eunice                 ARPA:  jonathan%pitt@relay.cs.net
University of Pittsburgh               jonathan%pitt@cadre.dsl.pittsburgh.edu
Dept of Computer Science        UUCP:  jonathan@pitt.UUCP
(412) 624-8836                BITNET:  jonathan@pittvms.BITNET

chris@trantor.umd.edu (Chris Torek) (02/28/88)

In article <985@luth.luth.se> roland@sm.luth.se (Roland Hedberg) writes:
>The subject says it all: is it possible to run Unix on a VAX cluster?

The next major release of Ultrix (3.0) is supposed to support
clusters.  (Never having used VMS clusters before, I am curious as
to why one would not want to `break up the cluster'.  What exactly
does using VMS cluster hardware buy you over using 4BSD networking,
aside from `well we already have it so we want to use it' [valid,
but to me irrelevant]?)
-- 
In-Real-Life: Chris Torek, Univ of MD Computer Science, +1 301 454 7163
(hiding out on trantor.umd.edu until mimsy is reassembled in its new home)
Domain: chris@mimsy.umd.edu		Path: not easily reachable

jbs@eddie.MIT.EDU (Jeff Siegal) (02/28/88)

In article <2359@umd5.umd.edu> chris@trantor.umd.edu (Chris Torek) writes:
>The next major release of Ultrix (3.0) is supposed to support
>clusters.  (Never having used VMS clusters before, I am curious as
>to why one would not want to `break up the cluster'.  What exactly
>does using VMS cluster hardware buy you over using 4BSD networking,
>aside from `well we already have it so we want to use it' [valid,
>but to me irrelevant]?)

If by 4BSD networking you mean Ethernet, the "cluster hardware" (CI) is
much faster.  Also, it lets you hook up to DEC's HSC70 (or the slower
HSC50) mass storage controller.  I don't know if this will apply to
Ultrix users, though, since the HSC70 somehow provides support for the
VMS (Files-11) file system (or does/will Ultrix support mounting a F11
disk?).

Jeff Siegal

chris@trantor.umd.edu (Chris Torek) (02/28/88)

>In article <2359@umd5.umd.edu> I wondered
>>... What exactly does using VMS cluster hardware buy you ...

In article <8309@eddie.MIT.EDU> jbs@eddie.MIT.EDU (Jeff Siegal) answers:
>If by 4BSD networking you mean Ethernet, "cluster hardware" or CI, is
>much faster.

Really?  A typical VAX cpu has quite a bit of trouble keeping up
with the mere 10 Mb/s on the Ethernet.  Making the data paths faster
(70 Mb/s, I believe) is not likely to do much unless you also use
a low-overhead protocol; such are available for BSD.  (We have never
really felt the need for one here.  I suppose it might be nice for
rdump.)

>Also, it lets you hook up to DEC's HSC70 ... I don't know if this
>will apply to Ultrix users, though, since the HSC70 somehow provides
>support for the VMS (Files-11) file system

(I have heard that it uses RMS, or something akin, internally.  Of
course, since it is really just a PDP-11, you could reprogram the
thing.  I wonder whether it boots via DECNET?)

>(or does/will Ultrix support mounting a F11 disk?).

Yes.  DEC now has support for the ODS-II file system (to be pronounced
`odious too' :-) ) via their GFS.

Most of this is, of course, rumour picked up at USENIXes and the
like; contact DEC marketing for solid (hah!) information.
-- 
In-Real-Life: Chris Torek, Univ of MD Computer Science, +1 301 454 7163
(still on trantor.umd.edu because mimsy is not yet re-news-networked)
Domain: chris@mimsy.umd.edu		Path: ...!uunet!mimsy!chris

ka@june.cs.washington.edu (Kenneth Almquist) (02/28/88)

Chris Torek asks:
	What exactly does using VMS cluster hardware buy you over using
	4BSD networking, aside from `well we already have it so we want
	to use it' [valid, but to me irrelevant]?

The cluster hardware allows you to share files between systems, which
the straight 4BSD networking software doesn't permit.  You can share
files using software packages such as NFS, but one of the disadvantages
of these packages is that every time you take down a machine, the files
on the disks which are physically connected to that machine become
inaccessible.  The VAXCluster hardware solves this problem by allowing
any machine to communicate with any disk.  The flip side of the coin is
that if the VAXCluster dies then all your systems become unusable, but
the VAXCluster hardware is pretty reliable.
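Kenneth's point about the NFS failure mode can be sketched concretely.  A
minimal sketch, with hypothetical hostnames and paths (the exact export
syntax and mount flags varied between NFS implementations of the era):

```shell
# On the server machine "june", which physically owns the disk,
# /etc/exports would contain a line like:
#     /usr/src    eddie
# to make /usr/src available to the client host "eddie".

# On the client "eddie", the remote file system is attached with:
mount -t nfs june:/usr/src /usr/src

# The catch Kenneth describes: if "june" is taken down, any process on
# "eddie" that touches /usr/src blocks (with a hard mount) or gets
# errors (with a soft mount) until "june" comes back.  With cluster
# hardware, any CPU can reach the disk directly, so no single host
# going down makes the files unreachable.
```

(No test is given since mounting requires root privileges and real hardware;
treat this as a configuration sketch only.)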
				Kenneth Almquist
				ka@june.cs.washington.edu

jqj@uoregon.UUCP (JQ Johnson) (02/28/88)

Chris Torek has started a rumor:  "Ultrix 3.0 will support clusters".  Now,
what does this really mean?  What would we WANT it to mean?  Some ideas:

1/ use of CI bus for Ultrix networking, including NFS
2/ private (mounted by 1 cpu only) Ultrix-format volumes on an HSC
   (shared access through NFS, of course)
3/ directly shared Ultrix-format volumes on an HSC
4/ access to Files-11 volumes on an HSC, presumably shared with other
   VMS nodes in the cluster
5/ complete cluster membership, including shared files and all other
   cluster-global resources
6/ something else

I'm sure the rumor doesn't mean (5).  My guess is that the rumor means
(1) or perhaps (1+2).  Can anyone comment who knows more than Chris
and I do?

dhesi@bsu-cs.UUCP (Rahul Dhesi) (02/29/88)

>>>... What exactly does using VMS cluster hardware buy you ...

The VAXcluster is not an alternative to networking.  In fact most
VAXcluster installations also run DECnet at the same time.

A VAXcluster is essentially a number of CPUs having concurrent access
to a number of disk drives.  There is no communication between the CPUs
other than the availability of a cluster-wide lock facility, and the
ability to have batch jobs submitted from one CPU for execution on
another.  I may have missed one or two other minor things you can do
across CPUs.

To have interprocess communication between CPUs one uses DECnet,
which is rather clunky and slow, and not always secure.

One serious disadvantage of a VAXcluster over Ethernet is that only
DEC-manufactured equipment can be easily used in a VAXcluster, thus
locking you into one product line.
-- 
Rahul Dhesi         UUCP:  <backbones>!{iuvax,pur-ee,uunet}!bsu-cs!dhesi

pavlov@hscfvax.harvard.edu (G.Pavlov) (02/29/88)

In article <985@luth.luth.se>, roland@sm.luth.se (Roland Hedberg) writes:
> The subject says it all: is it possible to run Unix on a VAX cluster?

  Up until about a year ago, the expectation was that DEC would be "kind"
  enough to offer this.  It has since said that it won't.

   .... what else could you expect from the only major workstation/supermicro
   company that does not offer cpu board upgrades ? (1/2 :-)  )

   greg pavlov, fstrf, amherst. ny.

aap@vilya.UUCP (PARKER) (03/03/88)

In article <985@luth.luth.se>, roland@sm.luth.se (Roland Hedberg) writes:
> The subject says it all: is it possible to run Unix on a VAX cluster?
> I have nearby a small cluster, one 11/750 and one 11/780, currently running
> VAX/VMS 4.6, and have been asked if it would be possible to change to Unix
> without breaking up the cluster.
>
> Has anyone tried?

I don't know anything about BSD but we are running System V Release 2
on a VAX cluster consisting of a 785 and an 8650.  The necessary software
was supplied by and is supported by DEC.  (Unfortunately, they will only
do this for AT&T and the Bell Operating Companies, not for anyone who wants it.)

Advantages of this configuration?  You don't really have shared discs,
as you do with VMS, but you can mount one machine's drives read-only on
the other machine.  You can also put tape drives on the cluster, so for
example, both of our machines can share the TA-78 high performance tape
drives.  The big win for us is that, if one cpu goes down, we can quickly
remount the drives on the other machine so that our users can continue 
working.  There is also a facility for cpu-to-cpu communication over the
cluster interface, but we don't use that so I can't say too much about it.
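The quick remount that makes this configuration a "big win" might look
something like the following on the surviving machine.  The device name and
mount point here are hypothetical, and the real procedure would depend on how
the drives are dual-ported:

```shell
# On the surviving CPU, after the other machine has gone down:

# Check the orphaned file system first; it was not cleanly unmounted.
fsck /dev/ra1c

# Take over the failed machine's user disk so people can keep working.
mount /dev/ra1c /usr2
```

(As with the NFS sketch above, this needs root and real hardware, so it is
shown as a procedure outline rather than a runnable script.)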

I hope this clarifies the issue somewhat.

lynn@engr.uky.edu (H. Lynn Tilley) (03/06/88)

In article <1616@uoregon.UUCP> jqj@drizzle.UUCP (JQ Johnson) writes:
>Chris Torek has started a rumor:  "Ultrix 3.0 will support clusters".  Now,
>what does this really mean?  What would we WANT it to mean?  Some ideas:
>
>1/ use of CI bus for Ultrix networking, including NFS
>2/ private (mounted by 1 cpu only) Ultrix-format volumes on an HSC
>   (shared access through NFS, of course)
> My guess is that the rumor means (1) or perhaps (1+2).

From what I have heard, Ultrix now has NFS and TCP/IP bundled with it (or will
have shortly).  But the really nice things that a DEC cluster has (fault-tolerant
computing, shared disks and controllers, the ability to move your job intact
from a machine that is crashing onto a machine that is still running, and a
few others) are only supported on VMS machines.  The problem with implementing
many of the cluster features on Ultrix machines, as DEC tells it, is that UNIX
does not support the immediate interrupts that VMS does.  Also, you have to
run DECnet, which excludes everyone else from sharing in any of these features.
The way they may be trying to get Ultrix into the cluster sites is to run
Ultrix on top of VMS.  I may be wrong but I am certain I will be told if I am :).

While we are at this I would like to gather some information.  Around here we
have some Dec/Vax equipment (pdp11/44, 11/23, microvaxs, 11/780's etc) but all
of them are running BSD4.XX and not Ultrix.  I would be interested in gathering
some statistics on whether or not this is a typical arrangement.  If you would 
e-mail me site specific information such as machine types, operating system
(BSD or Ultrix), some reasons for the choice (if you know), etc. I would
appreciate it.  If anybody is interested I will post the results on the net.
Thanks in advance.

-- 
    |   Henry L. Tilley                 UUCP: {cbosgd|uunet}!ukma!ukecc!lynn
    |   University of Kentucky          CSNET: lynn@engr.uky.edu       
    V   Engineering Computer Center     BITNET: lynn%ukecc.uucp@ukma  
    O   voice: (606) 257-1752           ARPANET: lynn@a.ecc.engr.uky.edu  

haral@unisol.UUCP (Haral Tsitsivas) (03/07/88)

In article <8309@eddie.MIT.EDU> jbs@eddie.MIT.EDU (Jeff Siegal) writes:
>In article <2359@umd5.umd.edu> chris@trantor.umd.edu (Chris Torek) writes:
>>The next major release of Ultrix (3.0) is supposed to support
>>clusters.  
>...  I don't know if this will apply to
>Ultrix users, though, since the HSC70 somehow provides support for the
>VMS (Files-11) file system (or does/will Ultrix support mounting a F11
>disk?).


If I remember correctly, I saw an HSC70 running on a cluster of ULTRIX
machines at the DEC booth in Dallas (at the recent UniForum and USENIX
conferences).  This may actually be done using their GFS filesystem...

--Haral Tsitsivas
  UniSolutions Associates
  (213) 641-6739
  ...!uunet!scgvaxd!ashtate!unisol!haral

mangler@cit-vax.Caltech.Edu (Don Speck) (03/20/88)

In article <369@vilya.UUCP>, aap@vilya.UUCP (PARKER) writes:
> I don't know anything about BSD but we are running System V Release 2
> on a VAX cluster consisting of a 785 and an 8650.
>
>   [...]		  you can mount one machine's drives read-only on
> the other machine.

People had been accomplishing the same thing with dual-ported SMD
disks (or SI controllers) years before the HSC-50 was introduced,
and at far less cost.

Don Speck   speck@vlsi.caltech.edu  {amdahl,ames!elroy}!cit-vax!speck

ron@topaz.rutgers.edu (Ron Natalie) (03/22/88)

Stupid example.  You can mount the disks read/write on all the
machines.  That's why the HSC50 costs so much: it is doing the
multi-CPU access and automatic backup/shadowing features in the
controller.

-Ron