hurf@theory.tn.cornell.edu (Hurf Sheldon) (10/05/90)
We have two DN10000s, each with 4 processors, 64mb of memory, and two 760mb Maxtor drives. They are configured with an ethernet interface only, running Domain OS 10.2 and NFS 2.1. We have an HP-UX environment they have to live in.

How can I best limit my admin overhead, make a default BSD environment available to my users, and use the file/device sharing capabilities of Domain OS to maximize disk space?

Right now the total disk area appears as one space. Is that a good way to do things? What if one disk dies? Are there files spread across both disks? (I never have had a Maxtor go longer than 18 months without having some kind of croaking episode.)

Can I use my unix-format password file? Can I nfs the users area from other, non-Apollo systems? Can I use a remote backup to use the DAT we have on an HP system?

The manuals seem to be saying the systems will have a server-client relationship for system files, but it is not at all clear how to make that happen. The systems are nice, and we are trying to make using them as painless as possible. How would you do it? Help and advice much appreciated.

	Hurf
--
Hurf Sheldon			 Network: hurf@theory.tn.cornell.edu
Program of Computer Graphics	 Phone:   607 255 6713
580 Eng. Theory Center, Cornell University, Ithaca, N.Y. 14853
thompson@PAN.SSEC.HONEYWELL.COM (John Thompson) (10/05/90)
Boy, what a lot of questions!

> We have two DN10000 with 4 processors, 64mb 2 760mb
> Maxtors each. They are configured with an ethernet
> interface only, running Domain OS 10.2, NFS 2.1.

Reasonably nice machines. It appears that you are concentrating on raw speed from the Apollo systems you have.

> We have an HP-UX environment they have to live in.
> How can I best limit my admin overhead, make a default
> BSD environment available to my users, use the
> file/device sharing capabilities of the Domain OS to
> maximize disk space?

Make sure the nodes are catalogued with each other. Just as there are TCP hostnames, the DDS (Domain Distributed System) has nodenames (though they have no name.xx.yy.zz format). By default, in the rc.local file on the system, the Unix hostname is set up to be the node's name. Catalogue this on each node with "ctnode name nodeid", and then record any other Apollo nodes with a "ctnode -update" command (again, on each node). The two machines will now be happy to talk with each other.

You can create links across the nodes by using the double-slash notation, such as "/bin/ln -s //node1/directory //node2/directory" to create a link on node2 that points at node1. You can just as easily access -- for read, write, execute, whatever -- files (or dirs) on the other node by using the full pathname //othernode/directorypath/filename.

How and what you link is more-or-less up to you. It depends on what is regularly executed (I believe these objects should be local, since you don't want to depend on another node being up to execute some commands), and what is really large (linking out a 4k file is probably pointless). Whatever you do, do _NOT_ link out the /sys/node_data directory! This is a special directory that is used to store (among other junk) files and dirs that must be local to each node.
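The cataloguing steps above might look like the following in practice. The node names (apollo1/apollo2) and the hex node IDs are made-up placeholders -- substitute your own (lcnode will list the nodes a machine can see), and check the ctnode manual page before trusting the exact flags:

```shell
# On apollo1 -- catalogue yourself, then pick up the other node's entry
# (the node ID 21f3a is a placeholder):
ctnode apollo1 21f3a
ctnode -update

# On apollo2 -- the same, with its own name and ID:
ctnode apollo2 1c88b
ctnode -update

# Afterward, either node can name the other's files directly:
ls //apollo1/users
/bin/ln -s //apollo1/users/projects //apollo2/users/projects
```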
Depending on what O/S configuration is on the system, you may want to re-load from cartridge tape (into an "authorized area") and then install only the BSD-unix system. This'll give you close to pure BSD (no flames, please!). It'll also give you a chance to read about, try to use, curse at, and then get to like "Installing Domain Software with Apollo's Release and Installation Tools."

> Right now the total disk area appears as one space.
> Is that a good way to do things? What if one disk
> dies? Are there files spread across both disks?
> ( I never have had a Maxtor go longer than a 18mos
> without having some kind of croaking episode )

It depends. Are they on the same disk controller? (I know that the two machines obviously have separate controllers.) If you have only one controller per machine, there's little use for disk-striping. The advantage is that you have a large disk volume for storing files. Also, in Apollo-land, process swap space is on the boot volume, so if you have a single volume, you have a lot of virtual-memory swap space available. If you have two controllers per machine, they're probably striped for speed (cylinder-striped). In this case, you get the advantages above _PLUS_ an almost two-fold increase in disk transfer rates.

In either case, you have the disadvantage that files will get split across disks. Lose one disk and you lose both disks' files. In essence, consider a striped disk to be a single disk: if one goes, it's like losing half your platters/cylinders/sectors. You can repair this disk slightly more easily than you could a single winchester, but the effect on your data is identical.

If you don't need the huge swap space, and you don't mind slower access (if you do have two controllers), I'd suggest one volume per drive. This means a reformat is in store for you, along with a reload of the O/S. Do one DN10000 at a time, to avoid booting off cartridge tape (boot diskless off the one you're not reformatting).
There was a discussion here (about three weeks ago?) on disk-striping, in gory detail.

> Can I use my unix format password file? Can I nfs the
> users area from other non apollo sustems? Can I use
> a remote backup to use the DAT we have on an HP system?

Yes and no. I believe you can import your HP passwd file into the Domain-OS registry, but you would have to do it regularly to keep changes current. Domain-OS uses a registry-server setup (Password-ETC). In my opinion, it is much better than straight Unix. However, it means that password accesses (getpwent et al.) will access the registry database, NOT an ascii file.

I'd suggest checking with HP about getting Password-ETC for your HP systems. Then you can run the registry daemons and share info across your network. (I don't know whether HP is supporting Password-ETC on HP machines yet.) If you can't do that, it's probably most painless to import the password system onto the Apollos, and then have a cron job push the ascii equivalents of the files over to your HP systems. This does mean that users could only change passwords on the Apollo systems, and there'd be a lag between the change and the distribution to the HP systems.

> The manuals seem to be saying the systems will have a
> server-client relationship for system files but it is
> not at all clear to see how to make that happen.

I'm not sure what you/they mean. If you catalog the nodes with each other, they will automagically share files whenever the other node asks. Servicing "foreign" systems like HPs still requires nfs or some other software package like it.

> The systems are nice, we are trying to make using them
> as painless as possible. How would you do it?

Throw out the HP-UX machines and go with pure Apollo! :-)

Good Luck!

John Thompson (jt)
Honeywell, SSEC
Plymouth, MN  55441
thompson@pan.ssec.honeywell.com

As ever, my opinions do not necessarily agree with Honeywell's or reality's.
(Honeywell's do not necessarily agree with mine or reality's, either)
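The cron-pushed passwd scheme could be sketched as below. Everything here is an assumption -- the hostnames, the paths, and the existence of an ascii passwd file exported from the registry -- but the shape of the job is generic Unix:

```shell
#!/bin/sh
# passwd_push -- sketch of a nightly push of the Apollo-side passwd file
# to the HP-UX hosts.  Hostnames and paths are placeholders.

# check_passwd FILE: succeed only if every line has the seven
# colon-separated fields of a passwd entry.  A half-written registry
# export is worse than a stale one, so refuse to push anything malformed.
check_passwd() {
    awk -F: 'NF != 7 { exit 1 }' "$1"
}

# push_passwd FILE: copy the file to each HP host.  rcp assumes the
# usual .rhosts trust between the machines.
push_passwd() {
    for h in ${HP_HOSTS:-"hpux1 hpux2"}; do
        rcp "$1" "$h":/etc/passwd
    done
}

# The nightly entry point, run from cron, would simply be:
#   check_passwd /etc/passwd && push_passwd /etc/passwd
```

A crontab entry such as "0 3 * * * /usr/local/bin/passwd_push" (path hypothetical) would keep the HP copies at most one night stale, with the lag and Apollo-only password changes noted above.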