bill@twg.wimsey.bc.ca (Bill Irwin) (10/03/90)
I've got this scenario where one computer has too much disk used to backup
on its 60Mb tape.  The sysadm would like to backup a portion of the file
system over a direct uucp link onto a system with a 150Mb tape.  Neither
system has enough free disk space to hold a tar archive on disk while it
is being transferred between systems.

Let's call the first computer "s60" and the 2nd one "s150".  My thought
was to run a script on s150 that is passed a list of directories on s60
that need backing up.  For each directory the script would send a uucp
executable across to s60 that would:

	find /usr/start/point -type f -print | xargs uucp destination/dir

If this would transfer each file in the directory tree to some point on
s150, then they would be available to tar on s150.  The script would have
to wait until the find process running on s60 had finished, then run tar
on destination/dir.  When the tar had completed, destination/dir would be
removed to make room for the next batch of files from s60.  The script
would basically repeat the above steps for each dir required.

1. start process on remote that transfers files to local
2. tar files on local to 150Mb tape
3. remove files on local
4. goto step 1

This whole wonderful theory depends on having the ability to have TAR
*append* the 2nd batch to the end of the first archive.  Has TAR developed
this ability over the years?  The other critical point is having the local
script wait for the remote process to finish sending files.  I don't know
if this is possible using the UUCP programs.

I know what you're thinking - get TCP/IP on Ethernet.  But there's a side
of me that loves challenges like this and the adventure of seeing it come
together.  Am I wasting my time or does this idea sound plausible?
-- 
Bill Irwin  -  The Westrheim Group  -  Vancouver, BC, Canada
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
uunet!van-bc!twg!bill     (604) 431-9600 (voice) |     UNIX Systems
bill@twg.bc.ca            (604) 431-4329 (fax)   |    Integration
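Both open questions have plausible answers, sketched below under loudly
stated assumptions: most tar implementations do have an append key ("r"),
though whether it actually works against a given tape drive is
drive-dependent, and the "wait for the remote to finish" problem can be
handled by having the remote job queue a marker file after its last uucp.
Everything in the sketch that is not in the post above is made up for
illustration: the uux-driven remote shell (which s60's uucp permissions
would have to allow), the BATCH.DONE marker convention, the spool
directory, and the tape device name.

#!/bin/sh
# Sketch of the s150 side only.  Assumptions (not from the post above):
#   - s60 permits "uux - s60!sh" remote execution (stock uucp setups
#     won't until the Permissions/L.cmds file is opened up for it),
#   - files copied from s60 land under $SPOOL on s150,
#   - the local tar accepts the "r" (append) key on the tape device.
SPOOL=/usr/spool/uucppublic/s60dump     # assumed drop directory on s150
TAPE=/dev/rmt0                          # assumed tape device name
KEY=c                                   # first batch creates the archive

while read dir                          # s60 directories, one per line
do
	# 1. start process on remote that transfers files to local; a marker
	#    file is queued last, and uucico normally works its queue in order.
	uux - "s60!sh" <<EOF
find $dir -type f -print |
while read f
do
	uucp -C \$f s150!$SPOOL/
done
echo done >/tmp/done\$\$
uucp -C /tmp/done\$\$ s150!$SPOOL/BATCH.DONE
rm -f /tmp/done\$\$
EOF
	# wait for the marker (and hence the whole batch) to arrive
	until test -f $SPOOL/BATCH.DONE
	do
		sleep 60
	done
	rm -f $SPOOL/BATCH.DONE
	# 2. tar files on local to 150Mb tape, appending after the first batch
	(cd $SPOOL; tar ${KEY}vf $TAPE .)
	KEY=r
	# 3. remove files on local, 4. go round again for the next directory
	rm -rf $SPOOL/*
done < s60.dirlist                      # assumed file listing the directories

If the drive will not append with the r key, the usual fallback is to write
one tar archive per batch to the no-rewind tape device (/dev/nrmt0 or
whatever it is called on that system).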
cpcahil@virtech.uucp (Conor P. Cahill) (10/03/90)
In article <265@twg.wimsey.bc.ca> bill@twg.wimsey.bc.ca (Bill Irwin) writes:
> [ discussion of trying to use uucp to back up one system to another
>   systems tape drive deleted ]

Why not just develop your own home-grown serial file transfer mechanism
that would allow you to do the following:

	on your system with the disks, run the following command:

	find..... | tar -cvf - | remotesend ttyaA system

Remotesend then opens the serial port and logs in on the other machine
with a different login from uucp.  This new login will have a new login
shell called remoterecv.  Remotesend will wait for remoterecv to start up
(some form of minimal handshaking) and then start sending all of its
standard input across the link.  Remoterecv reads all of its input,
buffers the data (in big blocks, since the serial link will never be fast
enough to keep the tape streaming) and writes it to the tape drive.

To ensure data integrity, the remotesend-remoterecv link could have a
minimal packetizing protocol that ensures none of the packets are
clobbered.

This is off the top of my head and you probably can punch a hole in it,
but you get the idea and it shouldn't be hard to implement.
-- 
Conor P. Cahill            (703)430-9247        Virtual Technologies, Inc.,
uunet!virtech!cpcahil                           46030 Manekin Plaza, Suite 160
                                                Sterling, VA 22170
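A minimal sketch of what the remoterecv half could look like as a login
shell, assuming dd's output reblocking is all the buffering that is needed;
the tty settings, block sizes and tape device name are placeholders, and
the packetizing/error-checking layer is left out entirely:

#!/bin/sh
# remoterecv - hypothetical login shell for the backup login.
# Standard input is the serial line remotesend logged in on; everything
# that arrives gets reblocked into large writes and put on the tape.
# There are no checksums here - that is the packetizing layer's job.
stty raw -echo                  # keep the tty driver from mangling binary data
echo READY                      # minimal handshake: tell remotesend to start
dd ibs=1k obs=64k of=/dev/rmt0  # gather small serial reads into big tape writes

The remotesend half is then roughly: open the port, log in as that user,
wait for the READY line, and copy its standard input down the line.  In
practice you would want at least a trailing checksum so a dropped
character does not silently corrupt the tape.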
bill@camco.Celestial.COM (Bill Campbell) (10/04/90)
In article <1990Oct03.123829.5292@virtech.uucp> cpcahil@virtech.UUCP (Conor P. Cahill) writes:
:In article <265@twg.wimsey.bc.ca> bill@twg.wimsey.bc.ca (Bill Irwin) writes:
:> [ discussion of trying to use uucp to back up one system to another
:>   systems tape drive deleted ]
:
:Why not just develop your own home-grown serial file transfer mechanism
:that would allow you to do the following:
:
:	on your system with the disks, run the following command:
:
:	find..... | tar -cvf - | remotesend ttyaA system
:
	...stuff deleted
:--
:Conor P. Cahill            (703)430-9247        Virtual Technologies, Inc.,
:uunet!virtech!cpcahil                           46030 Manekin Plaza, Suite 160
:                                                Sterling, VA 22170

I played with this approach a little using kermit and it did work albeit
slowly.  I got the idea while reading the kermit documentation and saw
that it could use stdin and stdout for file transfers.  I was going to use
it to back up a Tandy 6000 file system with about 140 Meg of data, but
gave up when I didn't want to tie up my 386's tape drive for the whole
weekend :-)

Bill.
-- 
INTERNET:  bill@Celestial.COM   Bill Campbell; Celestial Software
UUCP:  ...!thebes!camco!bill    6641 East Mercer Way
       uunet!camco!bill         Mercer Island, WA 98040; (206) 947-5591
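For anyone who wants to repeat the experiment, the pipeline is roughly the
one below.  The -i (binary mode), -s - (send from standard input) and -k
(receive to standard output) options are assumptions about the C-Kermit
command line in use, as are the device names, so check the documentation
for the kermit version actually installed:

# sender (machine with the data) - kermit flags and tty names assumed:
find /u -print | cpio -oc | compress | kermit -i -l /dev/tty1a -b 9600 -s -

# receiver (machine with the tape):
kermit -i -k -l /dev/tty2a -b 9600 | dd obs=32k of=/dev/rmt0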
bill@bilver.uucp (Bill Vermillion) (10/05/90)
In article <265@twg.wimsey.bc.ca> bill@twg.wimsey.bc.ca (Bill Irwin) writes:
>I've got this scenario where one computer has too much disk used to
>backup on its 60Mb tape.  The sysadm would like to backup a portion of
>the file system over a direct uucp link onto a system with a 150Mb tape.
>Neither system has enough free disk space to hold a tar archive on disk
>while it is being transferred between systems.

Here's an alternate approach.  One client I had went into their 3rd tape
on backups, and really complained how long it took to back it up.  That
was the final thing I needed to convince them to go to CTAR.  Now they do
unattended late night backups with verify, and with the compression turned
on the 2 and a bit of the 3rd 60 meg tapes all fit on one tape.

Don't have anything to do with the product other than having several happy
clients with it.
-- 
Bill Vermillion - UUCP: uunet!tarpit!bilver!bill : bill@bilver.UUCP
ssb@quest.UUCP (Scott Bertilson) (10/17/90)
  Here's a script I've used to do something close to this on
my 3b1 - once over a serial line at 9600 and more recently over
a V.32 modem.  The basic premise is that UUCP willingly copies
from a named pipe on System V.  I create 2 named pipes:

	cd /usr/spool/uucppublic; /etc/mknod p0 p; /etc/mknod p1 p

The script then runs "find" into "cpio" into "compress" into "dd"
into each of the named pipes in turn - broken up by "dd" because
I had some problems UUCPing many megabytes into a single file.
If it blows up part way along, you'll have to make sure there
aren't any stray UUCP requests in the queue and start over.
  Please understand that this is a very crude hack.  Please
also understand that I have used it several times successfully.
  It *IS* possible to get UUCP to copy *in* to a named pipe on
the remote end, but as you might imagine it can be difficult to
catch it by specifying the correct "TM.XXXXX.YYY" filename.
------
X=0
N=0
rm -f /tmp/t.done
if test -f /tmp/t.done
then
	echo "Can't eliminate \"/tmp/t.done\".  Can't continue."
	exit 1
fi
# find|cpio|compress feed the while loop's standard input; the loop
# alternates between the two named pipes, queueing a uucp request against
# each pipe and then filling it with the next 1000K of the stream.
find `ls -a | sed -e 1,2d -e '/^u$/d'` -depth -print 2>/tmp/t.find |
cpio -ovcmauld 2>/tmp/t.cpio |
( compress -v 2>/tmp/t.compress; sleep 30; echo >/tmp/t.done ) |
while test ! -f /tmp/t.done
do
	X=`expr 1 - $X`
	N=`expr $N + 1`
	uucp -r \~/p$X quest\!\~/ssb$N.cpio.Z
	dd bs=1k count=1000 of=/usr/spool/uucppublic/p$X
done
-- 
Scott S. Bertilson   ...ssb@quest.UUCP   scott@poincare.geom.umn.edu
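The quest end is not shown above; here is a hedged sketch of what it might
look like once the pieces have arrived, assuming they just need to be
concatenated back into one compressed cpio stream.  The ssbN.cpio.Z names
come from the script above, the tape device is a placeholder, and the
numeric loop is there because a plain ssb* glob would sort ssb10 ahead of
ssb2:

#!/bin/sh
# On quest, after every ssbN.cpio.Z piece has arrived in uucppublic:
# glue the pieces back together in numeric order and write them to tape,
# or replace the dd with "uncompress | cpio -icvdum" to restore directly.
cd /usr/spool/uucppublic
N=1
while test -f ssb$N.cpio.Z
do
	cat ssb$N.cpio.Z
	N=`expr $N + 1`
done | dd obs=32k of=/dev/rmt0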