roy@phri.UUCP (12/10/86)
We currently have a bunch of Sun-3's running 3.1FCS and a Vax running
4.2BSD, soon to be upgraded to Mt Xinu's 4.3 w/NFS.  Eventually we might be
adding other Unix machines to the net, all running NFS.  We run a pretty
open system, with the idea being that anybody can sit down at any available
device (be it a Sun workstation console, or an ASCII terminal on a serial
line to either the Vax or a Sun, or a dial-up line) and see as uniform an
environment as possible.  I'd like to hear from people who have experience
setting up heterogeneous NFS systems.

Some of the problems seem pretty straightforward to solve.  For example,
different machines mount /bin, /usr/bin, /usr/local/bin, etc. (and /etc :-))
from a file server of the same type.

What about home directories?  Do you give people a $HOME on each type of
machine?  It seems like we'll have to do this, if for no other reason than
that we run 2 versions of emacs, and the format of the dot-emacs files is
different on the Vax and the Suns.  We *could* hack up everybody's dot-login
files to check the machine type and make the right symbolic links, but that
seems pretty grotty to me.  Besides, it would be really nice to be able to
tell somebody to look at "~roy/whatever" without having to worry about
which "~roy" I mean.

How do you deal with big data bases?  We have some rather large shared
data bases (Genbank and related stuff is about 50 Mbytes, for example) that
I'd rather not replicate if I don't have to.  Since most of the data base
is ASCII, there isn't much problem there, but what about binary index
files?  One solution would be to share the ASCII parts and have the binary
parts be symlinks to the real files in /local/lib/binary, and mount the
appropriate /local/lib/binary depending on which machine you are on.  Does
this seem reasonable?  I'm hesitant to get into a situation where data
bases are scattered all over the universe with billions of symlinks tying
it all together -- sounds like an administrative nightmare.  Can anybody
think of a better way?

What about people doing program development?  It would be nice to have a
single source (possibly with #ifdef VAX/SUN lines in it) which you could
just run make on and have both binaries made automatically, and have the
right binary chosen for execution depending on which machine you are on.
Any suggestions for easy ways to set that up?
--
Roy Smith, {allegra,cmcl2,philabs}!phri!roy
System Administrator, Public Health Research Institute
455 First Avenue, New York, NY 10016
"you can't spell deoxyribonucleic without unix!"
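
For the single-source question in the last paragraph above, here is a
minimal sketch of the #ifdef approach, assuming only that each vendor's cc
predefines its usual machine symbol (the 4.2/4.3BSD Vax compiler defines
"vax", Sun's cc defines "sun"); the file name and messages are made up for
illustration:

    /*
     * whichmachine.c -- hypothetical example of one source file whose
     * machine-dependent parts are selected by the compiler's predefined
     * symbols, so no -D flags are needed in the Makefile.
     */
    #include <stdio.h>

    #ifdef vax
    #define ARCH "a Vax"
    #endif
    #ifdef sun
    #define ARCH "a Sun"
    #endif
    #ifndef ARCH
    #define ARCH "something else entirely"
    #endif

    main()
    {
            printf("this binary was compiled for %s\n", ARCH);
            exit(0);
    }

One way to get "the right binary chosen for execution" would then be to
have make install the objects into per-architecture directories (say,
/usr/local/bin.vax and /usr/local/bin.sun, both names hypothetical) and
have each machine put only its own directory in the default search path.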
eric@osiris.UUCP (Eric Bergan) (12/12/86)
In article <2530@phri.UUCP>, roy@phri.UUCP (Roy Smith) writes:
>	How do you deal with big data bases?  We have some rather large
> shared data bases (Genbank and related stuff is about 50 Mbytes, for
> example) that I'd rather not replicate if I don't have to.  Since most of
> the data base is ASCII, there isn't much problem there, but what about
> binary index files?  One solution would be to share the ASCII parts and
> have the binary parts be symlinks to the real files in /local/lib/binary,
> and mount the appropriate /local/lib/binary depending on which machine you
> are on.  Does this seem reasonable?  I'm hesitant to get into a situation
> where data bases are scattered all over the universe with billions of
> symlinks tying it all together -- sounds like an administrative nightmare.
> Can anybody think of a better way?

I can't really address the other issues that this article brings up, but we
do have some experience with distributed database access.  We currently
have a very diverse group of machines (Pyramids, Suns, IBM/MVS, MUMPS, PCs)
accessing a common database.  We don't use NFS for this, but the underlying
Remote Procedure Call (RPC) protocol.

We define the interface to the database in terms of function calls (look up
this given that, change this, etc.).  Then an RPC server is implemented to
support these function calls.  Now all a potential client program needs to
do is make RPC's to the server to have access to the data.  This solves the
replication problem, and also allows one to keep tight locking control, if
necessary.

Further, by decoupling the client frontend from the database backend, it is
possible to change the database management scheme without affecting the
frontends.  Simply change the server to support the new DBMS, while
continuing to support the same RPC function calls.  With commercial
database systems (particularly those that require a separate "backend" for
each client process), one can also reap some memory savings by having
several RPC clients "share" the same RPC server.

As I said, we have been using this design with great success for connecting
production applications together throughout a hospital environment.  It
definitely cuts down on replication, and provides a well defined interface
for radically different systems to access the same database.
--
					eric
					...!seismo!mimsy!aplcen!osiris!eric
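
For what it's worth, here is a minimal sketch of the function-call style
described above, using the classic registerrpc()/callrpc() convenience
interface from the Sun RPC library.  The program, version, and procedure
numbers and the "look up a value given a key" semantics are made up for
illustration; the actual service presumably defines a richer set of calls.

    /*
     * db.h (shared by client and server) -- hypothetical numbers in the
     * user-defined RPC program range.
     */
    #define DB_PROG         ((u_long) 0x20000099)
    #define DB_VERS         ((u_long) 1)
    #define DB_LOOKUP       ((u_long) 1)

    /*
     * dbserver.c -- registers one procedure: given a key string,
     * return a value string.
     */
    #include <stdio.h>
    #include <rpc/rpc.h>
    #include "db.h"

    char *
    db_lookup(argp)
            char *argp;     /* really a char **, courtesy of xdr_wrapstring */
    {
            char *key = *(char **) argp;
            static char value[256];
            static char *valuep = value;

            /* A real server would consult the shared data base here;
             * this one just echoes the key back. */
            sprintf(value, "no entry for \"%s\"", key);
            return ((char *) &valuep);  /* registerrpc encodes *valuep */
    }

    main()
    {
            registerrpc(DB_PROG, DB_VERS, DB_LOOKUP,
                db_lookup, xdr_wrapstring, xdr_wrapstring);
            svc_run();              /* never returns */
            fprintf(stderr, "svc_run returned\n");
            exit(1);
    }

    /*
     * dbclient.c -- any machine on the net makes the same call; no local
     * copy of the data base (or its binary index files) is needed.
     */
    #include <stdio.h>
    #include <rpc/rpc.h>
    #include "db.h"

    main(argc, argv)
            int argc;
            char **argv;
    {
            char *key, *value = NULL;
            int stat;

            if (argc != 3) {
                    fprintf(stderr, "usage: dbclient server key\n");
                    exit(1);
            }
            key = argv[2];
            stat = callrpc(argv[1], DB_PROG, DB_VERS, DB_LOOKUP,
                xdr_wrapstring, (char *) &key,
                xdr_wrapstring, (char *) &value);
            if (stat != 0) {
                    clnt_perrno((enum clnt_stat) stat);
                    exit(1);
            }
            printf("%s\n", value);
            exit(0);
    }

The nice property is that the clients never see the index file format at
all, so the question of machine-dependent binary index files disappears on
the client side; only the server's architecture matters.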
collier@charon.unm.edu (Uncia Uncia) (11/03/87)
The UNM computing center is preparing to use NFS on a large scale, campus
wide.  We don't use it now (except in a few workstation labs).  We have a
nice broadband/baseband campus network, and a variety of machines and
operating systems.  It is probably a little much to expect or even attempt
complete connectivity, but the more the better.

I am seeking general advice: which implementations of NFS (if any) for our
various systems will give us a filesystem network that will "work", which
versions of the NFS standard are compatible with each other, etc.
Anecdotes and advice concerning actually putting something like this up
would also be welcome.

We have MicroVAX 2's and 2000's (running 4.3BSD, Ultrix 1.2 & 2.0, and
VMS), VAX 780's (running 4.3BSD), an 8650 (running VMS 4.5, with WIN/TCP
3.0), a Sequent B8000 (running Dynix 3.0), IBM RT's (running 4.3BSD and
AIX), and Sun 2's and 3's (running SunOS 3.4).
--
Michael Collier
University of New Mexico Computing Center
2701 Campus Blvd.  Albuquerque, NM 87131
...!ihnp4!lanl!\
                unm-la!unmvax!charon!collier
...!cmcl2!beta!/