"Ian! D. Allen [CGL]" <idallen@watcgl.waterloo.edu> (06/08/90)
DECsystem 5400, Ultrix 3.1C, RA90 disk, one user (me).  Watch the
elapsed real times here.

Here's a plain root dump to tape (TK70):

# time dump 0 /
DUMP: Date of this level 0 dump: Thu Jun 7 21:17:43 1990
DUMP: Date of last level 0 dump: the epoch
DUMP: Dumping /dev/rra0a (/) to /dev/rmt0h
DUMP: Mapping (Pass I) [regular files]
DUMP: Mapping (Pass II) [directories]
DUMP: Estimates based on 1200 feet of tape at a density of 10240 BPI...
DUMP: This dump will occupy 1103 (10240 byte) blocks on 0.13 tape(s).
DUMP: Dumping (Pass III) [directories]
DUMP: Dumping (Pass IV) [regular files]
DUMP: 57.43% done, finished in 0:03
DUMP: 1103 tape blocks were dumped on 1 tape(s)
DUMP: Tape rewinding
DUMP: Dump is done
0% real=9:29 usr=0.3 sys=1.9 rd=0 wr=4 mem=56 pg=3 rec=17 sw=0 sig=0 cs=2776

Here's the identical root dump piped to dd to tape:

recorder# mt rew
recorder# time sh -c "dump 0f - / | dd bs=32k rbuf=2 wbuf=2 of=/dev/rmt0h"
DUMP: Date of this level 0 dump: Thu Jun 7 21:28:18 1990
DUMP: Date of last level 0 dump: the epoch
DUMP: Dumping /dev/rra0a (/) to standard output
DUMP: Mapping (Pass I) [regular files]
DUMP: Mapping (Pass II) [directories]
DUMP: Estimated 11295744 bytes output to Standard Output
DUMP: Dumping (Pass III) [directories]
DUMP: Dumping (Pass IV) [regular files]
DUMP: 11295744 bytes were dumped to Standard Output
DUMP: Dump is done
0+2780 records in
0+2780 records out
4% real=3:34 usr=0.7 sys=8.7 rd=1 wr=8 mem=37 pg=2 rec=17 sw=0 sig=0 cs=10111

That's almost three times faster!  Why can't dump be as good as dd?
Dumps are of major importance; I would have thought that dump would be
the most clever user of the tape drive.  I can't believe this.  Am I
missing something?  I must be missing something.
--
-IAN! (Ian! D. Allen) idallen@watcgl.uwaterloo.ca idallen@watcgl.waterloo.edu
[129.97.128.64] Computer Graphics Lab/University of Waterloo/Ontario/Canada
grr@cbmvax.commodore.com (George Robbins) (06/08/90)
In article <1990Jun8.014252.15749@watcgl.waterloo.edu> idallen@watcgl.waterloo.edu (Ian! D. Allen [CGL]) writes:
> DUMP: This dump will occupy 1103 (10240 byte) blocks on 0.13 tape(s).
> recorder# time sh -c "dump 0f - / | dd bs=32k rbuf=2 wbuf=2 of=/dev/rmt0h"
>                                        ^
> That's almost three times faster!  Why can't dump be as good as dd?
> Dumps are of major importance; I would have thought that dump would be
> the most clever user of the tape drive.  I can't believe this.  Am I
> missing something?  I must be missing something.

Mostly that dump doesn't document the -b switch to let you specify a more
efficient block size and restore doesn't support it, which makes it a
pain to restore tapes with non-standard block sizes.
--
George Robbins - now working for,     uucp: {uunet|pyramid|rutgers}!cbmvax!grr
but no way officially representing:   domain: grr@cbmvax.commodore.com
Commodore, Engineering Department     phone: 215-431-9349 (only by moonlite)
idallen@watcgl.waterloo.edu (Ian! D. Allen [CGL]) (06/09/90)
>> Dumps are of major importance; I would have thought that dump would be
>> the most clever user of the tape drive.  I can't believe this.  Am I
>> missing something?  I must be missing something.
>
> Mostly that dump doesn't document the -b switch to let you specify a more
> efficient block size and restore doesn't support it, which makes it a
> pain to restore tapes with non-standard block sizes.

Even using -b to set a 32K block size (the maximum the drive seems to
allow without "write: Error 0" messages), it's still faster to dump to
stdout and pipe into dd with wbuf=2 to write the tape.  Is this the
performance problem mentioned in the 3.1C release notes (item 1.1.12/13)?
And you should see how long all this takes using rdump!  Since Ultrix dd
is supposed to handle multi-volumes, I'd recommend this as an alternative
to rdump:

% rsh host dump <options>f - <what> | dd wbuf=2 bs=32k of=<tape>

Actually, bs=32k doesn't do quite what I want -- it doesn't assemble 32k
of output data before writing it.  As partial records come through the
pipe, dd writes them to tape, wasting space.  "ibs=32k obs=32k" works,
but makes dump waste *much* time copying the data.

I wrote a simple program that reads and reads until it gets 32k worth of
stuff, then does a 32k write using Ultrix nbuf I/O and two output
buffers.  Using dump to stdout into that program to write the tape is
three times faster than using dump with 32k buffers:

# time dump 0b 32k /
DUMP: 345 tape blocks were dumped on 1 tape(s)
DUMP: Dump is done
0% real=7:37 usr=0.3 sys=1.6 rd=1 wr=4 mem=92 pg=2 rec=18 sw=0 sig=0 cs=2037

# time sh -c "dump 0f - / | ./a.out >/dev/rmt0h"
DUMP: 11295744 bytes were dumped to Standard Output
DUMP: Dump is done
4% real=2:39 usr=0.6 sys=7.2 rd=0 wr=9 mem=45 pg=2 rec=17 sw=0 sig=0 cs=7636

Think how much faster this would be if dump did nbuf I/O most efficiently.
--
-IAN! (Ian! D. Allen) idallen@watcgl.uwaterloo.ca idallen@watcgl.waterloo.edu
[129.97.128.64] Computer Graphics Lab/University of Waterloo/Ontario/Canada
steve@avalon.dartmouth.edu (Steve Campbell) (06/12/90)
In article <12420@cbmvax.commodore.com> grr@cbmvax (George Robbins) writes:
>Mostly that dump doesn't document the -b switch to let you specify a more
>efficient block size and restore doesn't support it, which makes it a
>pain to restore tapes with non-standard block sizes.

It sure is a pain.  Still, you can use the -b option on dump and then
let dd do the unblocking on restores.  For example:

dump 0fdb /dev/rmt0h 6250 64 /dev/rra1a

and then for restores:

dd if=/dev/rmt0h bs=64k | restore if -

This works and gets much better performance out of a TU81.  In one case
the elapsed time dropped by about 35%.  I haven't tried it on
multi-volume dumps.

Would someone familiar with version 4.0 care to comment on whether the
-b option will be documented and supported by restore in that version?

Steve Campbell
Dartmouth College