[comp.unix.wizards] Data Redundancy made simple

ruben@bcstec.boeing.com (Reuben Wachtfogel) (06/26/91)

I'm working on a ring of >100 HP/APOLLOs and have a single
node that stores several sensitive data files that must
be available for our major application to work.

I could store dual copies of these files on 2 separate nodes and 
code the application to perform all updates of
these files on this MASTER node as well as a BACKUP MASTER node
so that if the MASTER were vaporized, the BACKUP could be slipped
in with minimal downtime.

My question is: 
	
         'Is there some application-transparent way to achieve
          data redundancy in a Unix network?'

I would think that a DEVICE DRIVER could be written to accomplish
this.  Is there an elegant way?  SysV STREAMS?  Does NFS help?
Well, Wizards, what say you?

--------------------------------------------------------------------

Please respond here or to ruben@dsp35001.boeing.com 

                                                       Thanks

If you don't like the topic, ignore it or post your flames to /dev/null.

DISCLAIMER: This posting was randomly generated by chimp #2306893743823
            and does not represent the opinion of any large Aerospace 
            company starting with a 'B'. 

jasonp@cunix7.prime.com (Jason Pascucci) (06/26/91)

In article <992@bcstec.boeing.com>, ruben@bcstec.boeing.com (Reuben
Wachtfogel) writes:
|> I could store dual copies of these files on 2 separate nodes and 
|> code the application to perform all updates of
|> these files on this MASTER node as well as a BACKUP MASTER node
|> so that if the MASTER were vaporized, the BACKUP could be slipped
|> in with minimal downtime.
|> 
|> My question is: 
|> 	
|>          'Is there some application-transparent way to achieve
|>           data redundancy in a Unix network?'
|> 

I haven't heard about anything which would do this for you,
but here are a few ideas which might solve your problem:

NFS helps a great deal. I don't know if your current filesystem
scheme will do this, but when a client makes an NFS request to a
server that is down, it hangs rather than crashing (at least with a
hard mount). This is a big plus from your point of view.
(I personally hate this, esp. with my Sun. Ick)
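
For illustration, a hard, interruptible NFS mount on a client might
look like this (the host and path names here are made up):

	# mount -o hard,intr master:/export/data /data

With a hard mount the client retries forever instead of returning an
error, so a replacement server answering to the Master's name can
pick up where the old one left off.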

Now, there are a few things you could do. How about setting up a
'Secondary' system with a process that constantly checks to make sure
the 'Master' is alive? If it isn't, have the Secondary restart TCP/IP
with the address of the 'Master', and NFS clients will resume talking
to it instead. When the Master boots back up, have it copy over the
current data, shut the Secondary back down, and take over yet again.
Admittedly, it's not elegant, but it should do what you want with a
minimum of work.
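
Here is a rough sketch, in sh, of the heartbeat half of that scheme.
The host name, address, interface, and ping syntax are all
assumptions to adjust for your site; "restarting TCP/IP with the
Master's address" is approximated here by just configuring that
address on an interface:

	#!/bin/sh
	# Hypothetical heartbeat, run on the 'Secondary'.
	MASTER=master              # hostname of the Master node
	MASTER_ADDR=192.9.200.1    # the Master's IP address
	IF=le0                     # the Secondary's network interface

	while :
	do
		# SunOS-style 'ping host timeout' exits non-zero when
		# the host doesn't answer; other Unixes differ.
		if ping $MASTER 5 > /dev/null 2>&1
		then
			sleep 30	# Master alive; check again later
		else
			# Master looks dead: take over its address so
			# NFS clients resume talking to us instead.
			# (Clients' ARP caches may take a minute to
			# notice the new hardware address.)
			ifconfig $IF $MASTER_ADDR up
			exit 0
		fi
	done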

If you want to throw a little hardware at this, you could combine the
network approach with dual-ported SCSI, assuming the OS supports
priority select. This would let the Secondary pick up extremely
up-to-date data, which is a large win; it doesn't guard against disk
failures, though. Another solution is to keep backups on the
'Secondary' system and still do the dual porting. You may also want
to look into third-party vendors who do disk mirroring (I don't think
HP's Unix variants do this, but I don't claim to know for sure) and
dual-port one half, or both halves, of the mirror. You can throw all
sorts of hardware at the problem, if you want to spend the money.

Your mileage may vary. These are, of course, only suggestions.

--
Jason R. Pascucci      "Kate Bush Is God!......Oops. Wrong newsgroup"
jasonp@primerd.prime.com

Disclaimer: My company isn't responsible.

petri@ibr.cs.tu-bs.de (Stefan Petri) (06/28/91)

In article <1991Jun26.112529@cunix7.prime.com> jasonp@cunix7.prime.com (Jason Pascucci) writes:
>In article <992@bcstec.boeing.com>, ruben@bcstec.boeing.com (Reuben
>Wachtfogel) writes:
>|> I could store dual copies of these files on 2 separate nodes and 
>|> code the application to perform all updates of
>|> these files on this MASTER node as well as a BACKUP MASTER node
>|> so that if the MASTER were vaporized, the BACKUP could be slipped
>|> in with minimal downtime.
>|> 
>|> My question is: 
>|> 	
>|>          'Is there some application-transparent way to achieve
>|>           data redundancy in a Unix network?'
>|> 
>
>I haven't heard about anything which would do this for you,

But I have:

What you want is to have a look at rdist(1):

	rdist - remote file distribution program

to keep the contents of your Backup consistent with the Master, and
at amd(8):

	amd - automatically mount file systems

to create replicated NFS servers for your users/clients.

rdist should be included in any reasonable version of Unix; it may
also be in the freed parts of the BSD sources (I didn't check).
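
As a sketch, assuming the sensitive files live under /usr/data on the
Master and the backup node is called 'backup' (both names made up), a
minimal Distfile might look like:

	HOSTS = ( backup )
	FILES = ( /usr/data )

	${FILES} -> ${HOSTS}
		install ;
		notify root ;

Run 'rdist -f Distfile' from cron on the Master as often as you can
afford, and the Backup stays a recent copy of the Master.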

Amd is the 4.4BSD automounter, but it runs on a broad variety of
Unixes. Its operating principles are similar to Sun's automount, but
it has more capabilities, especially for configuring backup servers
that get automagically used when the Master is down.
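
For example (server and path names invented), a replicated entry in
an amd map might look like:

	/defaults	opts:=rw,hard,intr
	data		type:=nfs;rhost:=master;rfs:=/export/data \
			type:=nfs;rhost:=backup;rfs:=/export/data

If amd serves this map under, say, /a, clients simply use /a/data and
amd mounts from whichever server is currently answering.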

Rdist and amd together have worked like a charm for us for several
months now.

amd is available from usc.edu in ~ftp/pub/amd.

S.P.