milo@ndmath.UUCP (Greg Corson) (03/06/88)
Is anyone aware of any good programs for unfragmenting unix disks? (collecting all the files together so they use contiguous blocks). I am looking for programs for both BSD and SYSV systems. Also, does anyone know of a way to get UNIX to allocate a file as contiguous disk space? Preferably using normal user privileges, but a scheme requiring superuser permission would still be useful.

Please reply by mail as I don't get in to read news very often.

Greg Corson
19141 Summers Drive
South Bend, IN 46637
(219) 277-5306 (weekdays till 6 PM eastern)
{pur-ee,rutgers,uunet}!iuvax!ndmath!milo
zeeff@b-tech.UUCP (Jon Zeeff) (03/07/88)
It is certainly possible to have a program that you would run on an unmounted file system that would do an in-place optimization. Something like the Sys V dcopy, but without the need for another drive. Has anyone heard of such a thing?
--
Jon Zeeff
Branch Technology, uunet!umix!b-tech!zeeff
zeeff%b-tech.uucp@umix.cc.umich.edu
knutson@marconi.SW.MCC.COM (Jim Knutson) (03/08/88)
Dump and restore always worked well in the past for defragmenting a disk. The real question is why you would want to do this on a BSD system (assuming it is 4.2 or greater). For AT&T System X, try your favorite method of backup and restore (cpio I suppose).
--
Jim Knutson
knutson@mcc.com
im4u!milano!knutson
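[The backup-and-restore cycle described above can be sketched in shell. This is a minimal dry-run sketch, not anyone's actual procedure: the device names (/dev/rxy0g, /dev/rmt0) and mount point are hypothetical examples, and RUN=echo only prints the commands so nothing is touched.]

```shell
#!/bin/sh
# Dry-run sketch of the dump/newfs/restore defragmentation cycle.
# All device and path names are hypothetical examples.  With RUN=echo
# the commands are only printed; clear RUN to execute for real, as
# root, with the filesystem quiescent (single-user mode).
RUN=echo
FS=/dev/rxy0g         # partition to rebuild (hypothetical name)
MNT=/mnt              # mount point for the fresh filesystem
TAPE=/dev/rmt0        # backup device

$RUN dump 0uf $TAPE $FS    # level-0 dump of the old filesystem
$RUN umount $FS
$RUN newfs $FS             # rebuild; restored files come back contiguous
$RUN mount $FS $MNT
$RUN cd $MNT               # a real run restores from inside the new fs
$RUN restore rf $TAPE
```

[The defragmentation comes entirely from the rebuild: newfs lays down an empty filesystem, so restore writes every file back into fresh, mostly-contiguous blocks.]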
hubcap@hubcap.UUCP (Mike Marshall) (03/08/88)
* Dump and restore always worked well in the past for defragmenting a disk.
* The real question is why you would want to do this on a BSD system
* (assuming it is 4.2 or greater).

For the benefit of the poster of the original question: BSD 4.2's fast file system uses a disk management scheme that keeps disk transfer rates near constant over time (not sensitive to fragmentation through use). 4.2 BSD's throughput rates are dependent, instead, on the total amount of free space, which must not be allowed to drop below a certain threshold.

-Mike Marshall
hubcap@hubcap.clemson.edu
...!hubcap!hubcap
breck@aimt.UUCP (Robert Breckinridge Beatie) (03/08/88)
In article <305@marconi.SW.MCC.COM>, knutson@marconi.SW.MCC.COM (Jim Knutson) writes:
> The real question is why you would want to do this on a BSD system

Mostly because the BSD FFS doesn't do a perfect job of preventing file system fragmentation. If you remove as well as create files, or if files grow, then you're going to get fragmented files. Maybe the degree of fragmentation is kept small enough that compressing the file system doesn't get you anything back. But that's for him to decide.
--
Breck Beatie
{uunet,ames!coherent}!aimt!breck
"Sloppy as hell Little Father. You've embarrassed me no end."
maxwell@ablnc.ATT.COM (Robert Maxwell) (03/08/88)
I also e-mailed this, but for SYSV, dcopy(1m) is the command specifically intended for disk reorganization. It not only clears fragmentation, but puts sub-directories as the first entries in directories to speed up path searches. Cpio (after making a new fs) will do a fair amount of cleanup, but dcopy works better.
--
R. M. Maxwell  AT&T DP&CT     | I speak for nobody-
Maitland, FL  ablnc!delilah!bob | not even myself.
paddock@mybest.UUCP (Steve Paddock) (03/09/88)
In article <305@marconi.SW.MCC.COM> knutson@marconi.UUCP (Jim Knutson) writes:
>(assuming it is 4.2 or greater). For AT&T System X, try your favorite
>method of backup and restore (cpio I suppose).

Mightn't fsck -s or mkfs between the backup and restore be helpful on SysV?
--
Steve Paddock (uunet!bigtex!mybest!paddock)
4015 Burnet Road, Austin, Texas 78756
Paul_Steven_Mahler@cup.portal.com (03/09/88)
System V provides a dcopy utility for de-fragmenting a disc. You may also use fsck to clean up the inode table.

sun!plato!paul
Paul_Steven_Mahler@cup.portal.com (03/09/88)
Aim Technology (415) 856-8649 sells a set of utilities that Gene Dronick wrote which re-organize discs.
daveg@pwcs.StPaul.GOV (Dave Glowacki) (03/11/88)
In article <1097@hubcap.UUCP> hubcap@hubcap.UUCP (Mike Marshall) writes:
>For the benefit of the poster of the original question: BSD 4.2's fast file
>system uses a disk management scheme that keeps disk transfer rates near
>constant over time (not sensitive to fragmentation through use). 4.2 BSD's
>throughput rates are dependent, instead, on the total amount of free space,
>which must not be allowed to drop below a certain threshold.

What is this threshold? Doing a 'df' shows that the system reserves 10% of each partition, since the amounts in the used and available columns only add up to 90% of the total blocks in each partition. My boss maintains that 10% of the AVAILABLE blocks must be kept free, leaving us with only about 81% of the total disk space. I think that the system's already got the space it needs.

Could someone PLEASE tell me I'm right, so we can get back all that wasted space? (9% of 3 Fuji Eagles)
--
Dave Glowacki  daveg@pwcs.StPaul.GOV  ...!amdahl!ems!pwcs!daveg
Disclaimer: Society's to blame.
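[The 81% figure above comes from compounding two separate 10% rules, which a few lines of shell can check. This is just arithmetic on the numbers in the post, not a claim about what any particular system requires.]

```shell
#!/bin/sh
# newfs reserves 10% of total blocks (the free-space threshold), so df
# shows only 90% as available.  Applying a further "keep 10% of the
# AVAILABLE blocks free" rule on top of that compounds the two.
total=1000
minfree=$(expr $total \* 10 / 100)   # 100 blocks reserved by the system
avail=$(expr $total - $minfree)      # 900 blocks visible in df
extra=$(expr $avail \* 10 / 100)     # 90 more blocks the second rule holds back
usable=$(expr $avail - $extra)       # 810 of 1000, i.e. 81%
echo "usable blocks under both rules: $usable of $total"
```

[The compounding is why "10% plus 10%" comes out as 81% rather than 80%: the second 10% is taken from the already-reduced 90%.]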
aglew@ccvaxa.UUCP (03/11/88)
>For the benefit of the poster of the original question: BSD 4.2's fast file
>system uses a disk management scheme that keeps disk transfer rates near
>constant over time (not sensitive to fragmentation through use). 4.2 BSD's
>throughput rates are dependent, instead, on the total amount of free space,
>which must not be allowed to drop below a certain threshold.
>
>-Mike Marshall hubcap@hubcap.clemson.edu ...!hubcap!hubcap

Disk thruput maybe, file thruput no. Lots of activity on a nearly full disk can result in a file spread across several cylinders, because there wasn't room on a single cylinder when it was created, although there may be now. Perhaps the term "fragmentation" is inappropriate.
mjy@sdti.UUCP (Michael J. Young) (03/11/88)
In article <3755@cup.portal.com> Paul_Steven_Mahler@cup.portal.com writes:
>Aim Technology (415) 856-8649 sells a set of utilities that Gene
>Dronick wrote which re-organize discs.

Has anyone used these? Do they perform in-place reorganization, or do they work like dcopy? The biggest headache with dcopy is the requirement for a second volume. I typically reorganize my disks whenever they become more than 25-30% fragmented. Usually, I just do a mkfs and restore after a normal weekly backup. It's not quite as good as dcopy, but it's close enough, and still results in a big throughput improvement.
--
Mike Young - Software Development Technologies, Inc., Sudbury MA 01776
UUCP : {decvax,harvard,linus,mit-eddie}!necntc!necis!mrst!sdti!mjy
Internet : mjy%sdti.uucp@harvard.harvard.edu  Tel: +1 617 443 5779
barnett@vdsvax.steinmetz.ge.com (Bruce G. Barnett) (03/14/88)
In article <29500023@ccvaxa> aglew@ccvaxa.UUCP writes:
[Discussion on the Berkeley fast file system]
|Disk thruput maybe, file thruput no. Lots of activity on a nearly full
|disk can result in a file spread across several cylinders, because there
|wasn't room on a single cylinder when it was created, although there may
|be now.
| Perhaps the term "fragmentation" is inappropriate.

As I recall, whenever a 'mkdir' is issued, the system finds the largest cylinder group it can. Therefore the best access can be achieved by putting a large number of files in a new directory. That is the theory, anyway. Are there any tricks to keep your Berkeley file system up to snuff? I remember some non-unix operating systems suggesting you put the most frequently used files on first.
--
Bruce G. Barnett <barnett@ge-crd.ARPA> <barnett@steinmetz.UUCP>
uunet!steinmetz!barnett
mck@hpdstma.HP.COM (Doug Mckenzie) (03/16/88)
>Doing a 'df' shows that the system reserves 10% of each partition, since
>the amounts in the used and available columns only add up to 90% of the
>total blocks in each partition. My boss maintains that 10% of the
>AVAILABLE blocks must be kept free, leaving us with only about 81% of the
>total disk space. I think that the system's already got the space it needs.
>
>Could someone PLEASE tell me I'm right, so we can get back all that wasted
>space? (9% of 3 Fuji Eagles)

Using 90% of the (total) disk blocks is a good tradeoff between disk space and block allocation. That's why it's the default (on HP's HP-UX). The idea is: while there's lots of free disk space, you can get the block you ask for; as less and less space is available, you have to fall back on ever more brute-force search methods to find a block. Past 90% full, searching by hashing cylinder group numbers and finally linear searching start to predominate.

mck
allbery@ncoast.UUCP (Brandon Allbery) (03/16/88)
As quoted from <305@marconi.SW.MCC.COM> by knutson@marconi.SW.MCC.COM (Jim Knutson):
+---------------
| Dump and restore always worked well in the past for defragmenting a disk.
| The real question is why you would want to do this on a BSD system
| (assuming it is 4.2 or greater). For AT&T System X, try your favorite
| method of backup and restore (cpio I suppose).
+---------------

Under System V, the way to do it is dcopy: it defragments the disk, spreads free blocks evenly over the disk to slow the effect of further fragmentation, sorts directories to place subdirectories first and thereby speed pathname accesses, etc.
--
Brandon S. Allbery, moderator of comp.sources.misc
{well!hoptoad,uunet!hnsurg3,cbosgd,sun!mandrill}!ncoast!allbery
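[A dcopy pass like the one described above might look as follows. This is a hedged dry-run sketch, not a definitive procedure: the device names are hypothetical, the copy-back step assumes a scratch device at least as large as the source, and RUN=echo only prints the commands.]

```shell
#!/bin/sh
# Dry-run sketch of a System V dcopy reorganization.  Device names
# are hypothetical examples; RUN=echo only prints the commands.
# dcopy reads an unmounted filesystem and writes a reorganized copy
# to a second device, hence the need for a scratch volume.
RUN=echo
SRC=/dev/dsk/0s2      # fragmented filesystem (must be unmounted)
DST=/dev/dsk/1s2      # scratch device, at least as large as SRC

$RUN umount $SRC
$RUN dcopy $SRC $DST            # copy and reorganize onto the scratch device
$RUN fsck $DST                  # sanity-check the reorganized copy
$RUN dd if=$DST of=$SRC bs=8k   # copy the clean image back in place
$RUN mount $SRC
```

[The copy-back via dd is what turns the two-volume dcopy into an effectively in-place reorganization, at the cost of a second full pass over the data.]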
jk@Apple.COM (John Kullmann) (03/16/88)
I am not recommending it (yet) because I haven't used it, but AIM offers FSTUNE, which allows you to selectively compress unix file systems in place (some or all files...). Sounds like a great idea.
--
John Kullmann
Apple Computer Inc.
Voice: 408-973-2939  Fax: 408-973-6489