mc68020@nonvon.UUCP (07/09/87)
I am frustrated as all hell!  At least on the two versions of UNIX with
which I have direct experience, the directory management is all fu**ed up!

After a file is removed, its "slot" in the directory isn't re-used!  The
damned directory keeps growing larger and LARGER.  To make matters worse,
there appears to be no rational way to write a C program to "compact"
the directory, leaving us with the highly undesirable chore of MVing
everything OUT of the damned directory, rmdir'ing, re-mkdir'ing it, and
moving everything back in again.  This wastes LOTS of time on the system,
not to mention the operator/sysadmin's time.

QUERY: Why this stupidity in the first place, and **WHY** hasn't
AT&T or BERSERKELEY ***DONE*** something to fix it???????  I mean really,
it is a trivial matter to identify an rm-ed entry in a directory.  Either
allow the directory management system to use the first available slot,
creating a new slot only if necessary, or develop some mechanism for
rationally compacting these messes from time to time.

Can those who are in positions of knowledge please explain, without
condescension and rudery, what the story is here, please?  Am I mistaken
about the way directories are arranged, about identifying rm-ed entries?

Information, please!
guy@gorodish.UUCP (07/09/87)
> Am I mistaken about the way directories are arranged, about identifying
> rm-ed entries?

Yes.  In the V7 file system, as used by most UNIX versions, directory
entries are all the same size, and it's trivial for the OS to reuse the
slots formerly occupied by entries that have been freed.  In fact, it
does so.  Either your vendor has screwed up royally - which is extremely
unlikely, since few people dink with that code - or you're
misinterpreting something.

In the 4.2BSD file system, directory entries are not the same size, but
the OS still reuses the space occupied by freed entries as best it can;
it will compact directory blocks as needed, shuffling entries to make
discontiguous unused areas contiguous.

Now, if you fill up a directory with lots of files and then delete the
files, in most versions of UNIX the directory will still be the same
size, although most of the space will be free.  In 4.3BSD, the OS will
shrink the directory file under certain circumstances.

	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com
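
For concreteness, the V7 directory entry (declared in <sys/dir.h>) looks
essentially like this; a freed slot is simply an entry whose inode number
has been zeroed:

	#define	DIRSIZ	14	/* maximum file name length in V7 */

	struct	direct {
		ino_t	d_ino;		/* inode number; 0 means the slot is free */
		char	d_name[DIRSIZ];	/* name, NUL-padded, not always NUL-terminated */
	};

Since every entry is sizeof(struct direct) bytes, creat() can drop a new
name into any zeroed slot without moving anything else.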
gwyn@brl-smoke.ARPA (Doug Gwyn ) (07/10/87)
In article <603@nonvon.UUCP> mc68020@nonvon.UUCP (root) writes:
-... Either
-allow the directory management system to use the first available slot,
-creating a new slot only if necessary, or develop some mechanism for
-rationally compacting these messes from time to time.
That's just what all the UNIXy systems I know of do. I have no idea
what may be wrong with yours.
ark@alice.UUCP (07/10/87)
In article <603@nonvon.UUCP>, mc68020@nonvon.UUCP writes:
> After a file is removed, its "slot" in the directory isn't re-used!  The
> damned directory keeps growing larger and LARGER.

Not quite true.  The following applies, as far as I know, to all
versions of the UNIX system except Berkeley 4.2 and 4.3.  I don't know
the situation for those systems.

Directories have the convention that an inode number of 0 means the
directory entry is available, regardless of whether there's a name in it
or not.  When you remove a link to a file, the inode number in that
directory entry is zeroed but the name stays around.  When creating a
new directory entry, the system uses the first available slot, where
"available" means "inode == 0".

It is true that a directory can never shrink.  However, the size of a
directory will never be greater than the maximum number of entries that
directory has ever contained.
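
A minimal sketch of a program (call it dircheck; the name is made up)
that inspects a directory along these lines, assuming a V7-style file
system where a directory may be opened and read(2) like an ordinary
file (this does not hold for the 4.2/4.3BSD file system, whose entries
are variable-length):

	#include <sys/types.h>
	#include <sys/dir.h>
	#include <stdio.h>

	main(argc, argv)
	int argc;
	char **argv;
	{
		struct direct d;
		int fd, nused = 0, nfree = 0;

		if (argc != 2) {
			fprintf(stderr, "usage: dircheck directory\n");
			exit(1);
		}
		if ((fd = open(argv[1], 0)) < 0) {	/* 0 == read-only */
			perror(argv[1]);
			exit(1);
		}
		while (read(fd, (char *)&d, sizeof d) == sizeof d)
			if (d.d_ino == 0)
				nfree++;	/* slot zeroed by a previous unlink */
			else
				nused++;
		printf("%s: %d slots in use, %d free\n", argv[1], nused, nfree);
		exit(0);
	}

So identifying rm-ed entries from a C program is in fact straightforward
on these systems; it's compacting the directory in place that you can't
do, since an unprivileged program cannot write a directory.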
whb@vax135.UUCP (Wilson H. Bent) (07/17/87)
In article <23047@sun.uucp> guy%gorodish@Sun.COM (Guy Harris) writes:
>Now, if you fill up a directory with lots of files and then delete
>the files, in most versions of UNIX the directory will still be the
>same size, although most of the space will be free.  In 4.3BSD, the
>OS will shrink the directory file under certain circumstances.

Can someone go into more detail on this feature?  This is the first
time I've ever heard of any OS which was willing and able to compact
the directory.

The method I use to 'shrink' a directory with lots of unused slots
(either 4.2BSD or SysV) is to do the ever-popular cpio move:

	cd (parent of jumbo)
	find jumbo -depth -print | cpio -pdlm teensy
	rm -r jumbo
	mv teensy jumbo		# just to get the names right!

Of course, I've yet to find a BSD find which understands "-depth"...

I don't usually do this, even on greatly dynamic directories such as
/usr/spool/lpr - the benefits aren't all that great.
--
Wilson H. Bent, Jr.		... ihnp4!hoh-2!whb
AT&T - Bell Laboratories	(201) 949-1277
Disclaimer: My company has not authorized me to issue a disclaimer.
aglew@ccvaxa.UUCP (07/18/87)
...> Reusing directory slots for unlinked files.

It's a bit off the subject, but one of the nicest things about working
on an IBM PC was that you could "unrm" a file.  When a file was
removed, the first character of its name in the directory entry was
zorched, but the rest of the information was untouched, so you could
easily reconstruct it if you got there before its disk blocks were
reused.

Of course, you can do the same thing in UNIX if you know a file's inode
(if the inode wasn't zeroed when blocks were freed), but you lose the
connection between directory entry and inode when you unlink.  And how
many of us have an old "ls -i" output lying around?

Some people alias rm to mv..., but that has obvious limitations
(especially if you use rm -r a lot).

What you could do would be NOT to zorch the inode in the directory
entry, but to add a deleted bit.  Deleted directory entries would be
skipped on search, and reused as necessary, but would have the inode
number.  It might be nice to have an inode version number in the dir
entry and the inode, that is incremented whenever a new inode is
allocated.

Compression would be a bit of a problem - you would want to leave a
window between unlink and directory compression, so that careless
people like me might have a chance to unrm.  Maybe done by a daemon,
or on demand.
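
A purely hypothetical directory entry along those lines - nothing like
it shipped in any UNIX, and the field and flag names here are invented
for illustration:

	#define	DIRSIZ	14

	struct	direct {
		ino_t	d_ino;		/* preserved after unlink so unrm can find it */
		short	d_flags;	/* D_DELETED would be set by unlink() */
		short	d_version;	/* bumped each time the inode is reallocated */
		char	d_name[DIRSIZ];
	};
	#define	D_DELETED	01

	/* name lookup would skip D_DELETED entries; creat() could reuse
	 * them; unrm would clear the bit, but only after checking that
	 * d_version still matches the version stored in the inode */

The version number is what would make unrm safe: if the inode has been
reallocated since the unlink, the versions differ and unrm can refuse
rather than resurrect the wrong file.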
ken@rochester.arpa (Ken Yap) (07/20/87)
|What you could do would be NOT to zorch the inode in the directory entry,
|but to add a deleted bit.  Deleted directory entries would be skipped
|on search, and reused as necessary, but would have the inode number.
|It might be nice to have an inode version number in the dir entry and
|the inode, that is incremented whenever a new inode is allocated.
|	Compression would be a bit of a problem - you would want to
|leave a window between unlink and directory compression, so that
|careless people like me might have a chance to unrm.  Maybe done
|by a daemon, or on demand.

You don't need to modify the kernel to do this.  You could rename the
file to have an "invisible" prefix, like # or ...  Of course, this way
people tend to forget they are accumulating crud.

	Ken
greywolf@unisoft.UUCP (Roan Jon Anderson) (07/22/87)
Sender: unisoft!greywolf (The Grey Wolf)

In article <603@nonvon.UUCP> mc68020@nonvon.UUCP (root) writes:
>
>   I am frustrated as all hell!  At least on the two versions of UNIX with
>which I have direct experience, the directory management is all fu**ed up!
>
>   After a file is removed, its "slot" in the directory isn't re-used!  The
>damned directory keeps growing larger and LARGER.  To make matters worse,
>there appears to be no rational way to write a C program to "compact"
>the directory, leaving us with the highly undesirable chore of MVing
>everything OUT of the damned directory, rmdir'ing, re-mkdir'ing it, and
>moving everything back in again.  This wastes LOTS of time on the system,
>not to mention the operator/sysadmin's time.
>
>   QUERY: Why this stupidity in the first place, and **WHY** hasn't
>AT&T or BERSERKELEY ***DONE*** something to fix it???????  I mean really,
>it is a trivial matter to identify an rm-ed entry in a directory.  Either
>allow the directory management system to use the first available slot,
>creating a new slot only if necessary, or develop some mechanism for
>rationally compacting these messes from time to time.
>
>   Can those who are in positions of knowledge please explain, without
>condescension and rudery, what the story is here, please?  Am I mistaken
>about the way directories are arranged, about identifying rm-ed entries?
>
>   Information, please!
------

Well, not sure about the actual re-use of a slot, but you only have to
remove one directory.  Try this, and see how long it takes... it
shouldn't take long...

	#! /bin/sh -
	while [ $# -gt 0 ]; do
		/bin/mkdir foo
		/bin/mv $1/* $1/.??* foo
		/bin/rmdir $1
		/bin/mv foo $1
		shift
	done

The time it takes should be minimal, unless you have exceedingly large
directories, in which case it might take longer...
--------------------------------------------------------------------------------
Faster than pure assembly code... More powerful than kill -9...
able to unmount busy file systems in a single call...

	 ______		it's a (long) word...
	/ |  | \	it's a (back)plane...
	\=#==#=/
	 \|  |/		it's...
	  \==/
	   \/		SUPERUSER!!!!
aglew@ccvaxa.UUCP (07/22/87)
>|but to add a deleted bit.  Deleted directory entries would be skipped
>|on search, and reused as necessary, but would have the inode number.
>|It might be nice to have an inode version number in the dir entry and
>|the inode, that is incremented whenever a new inode is allocated.
>|	Compression would be a bit of a problem - you would want to
>|leave a window between unlink and directory compression, so that
>|careless people like me might have a chance to unrm.  Maybe done
>|by a daemon, or on demand.
>
>You don't need to modify the kernel to do this.  You could rename the
>file to have an "invisible" prefix, like # or ...  Of course, this way
>people tend to forget they are accumulating crud.
>
>	Ken

Not equivalent.  Renaming to an invisible prefix leaves the file
around; the intent was to actually remove the file, so that the kernel
can reuse the blocks, but to leave enough information so that you can
go and reconstruct the file immediately after deletion.

Renaming to an invisible prefix WOULD be equivalent if, whenever the
kernel ran out of space on a filesystem, it would go and look for files
beginning with this invisible prefix to delete.  I.e. if the kernel
could invoke "find ... -name '...*' -exec rm {} \;" whenever it ran out
of space.

An insufficient-file-space hook might be a good idea, especially on
not-quite-standard UNIX files where compress options might be
meaningful.  Or on contiguous filesystems.

aglew@mycroft.gould.com
ken@rochester.arpa (Ken Yap) (07/22/87)
|Not equivalent.  Renaming to an invisible prefix leaves the file around;
|the intent was to actually remove the file, so that the kernel can reuse
|the blocks, but to leave enough information so that you can go and reconstruct
|the file immediately after deletion.

Yes, but how long is long enough?  If the filesystem is under heavy use
the freed blocks may be put to use again right away and this recovery
scheme won't work some of the time, probably just when you *really*
need it.

Unless you want to implement a delete-purge scheme like in some other
operating systems, my preference would be to leave such schemes out of
the kernel.  If one wants a file badly enough one should be willing to
pay for the storage until one is sure it isn't needed anymore.

	Ken
bzs@bu-cs.bu.EDU (Barry Shein) (07/22/87)
Note that TWENEX had a nice undelete built into the O/S and had
solutions to a number of the policy problems (eg. when to *really*
delete and free file space) that are at least worth reviewing before
attempting a design.

I believe file space was reclaimed under three conditions: the user
issuing an explicit EXPUNGE (was that it?) command, logging out of that
directory, or the Grim File Reaper running (a daemon, usually either
touched off manually by an operator or based on something like only 10%
disk free, with some broadcast warnings, like a shutdown.)

The problem in Unix with this is not how to represent an undelete-able
file (sure, a bit in the directory entry sounds plausible, or whatever)
but how to sweep.  You can't put the delete bit into the inode because
it should probably only be on a per-link (even when link count == 1)
basis.  If it's in the directory entries then you have to descend and
sweep the entire tree, non-trivial on larger machines with gigabytes of
disk.  My experience with TWENEX was that operator- or daemon-initiated
file purges were initiated several times per day on some systems
(several times per minute on OZ some nights :-)  You just kinda fall
into these bad habits real quick.

What I'm saying is it might take some low-level re-design, like
allocating directory blocks into known contiguous places on the disk so
they can be swept by a low-level utility quickly, similar to the inode
area but, hopefully, dynamically extendable.  This might be useful for
other things also, I haven't thought much about it.

It also will add some hair to the file system call interface which
needs to be anticipated, like a flag to unlink() indicating whether to
really delete or not, and probably some thought about what to do about
trying to name (rename(), creat(), mkdir()) a file the same as a
currently undelete-able file.  TWENEX used generation numbers, which is
a whole 'nother design problem that perhaps should be simultaneously
broached (eg. foo.bar.1, foo.bar.2, the numbers automatically
generated, opens not specifying a number [open("foo.bar",0)] get the
highest undeleted version, etc.)

	-Barry Shein, Boston University
ken@cs.rochester.EDU (Ken Yap) (07/22/87)
I have no problems with a properly implemented 3-stage delete.  What I
think is unsatisfactory is a partly implemented undelete that only
works some of the time.  Imagine this:

Mr. Whiz Consultant: And you can recover your files by issuing this
	command unrm in case you accidentally do a rm.

Prof. Big Grant: That is neat.  I'll remember that.

Two days later...

Prof. BG: Uh, help!  I deleted a file last night without realizing it
	and when I issued an unrm this morning I got this message:

		unrm: blocks unavailable

	What does it mean?  I really need my grant proposal back!  I
	have to mail it today!

Mr. WC: Well, it is documented in relink(2) that sometimes the blocks
	cannot be recovered and...

Prof. BG: !#!$@&**&*==!

	Ken
bzs@bu-cs.bu.EDU (Barry Shein) (07/22/87)
Yes, I don't think anything that's a hack (eg. scrounging around for
the blocks) is at all acceptable (except perhaps as an emergency
utility for sysmanglers, in a similar spirit to clri; even that is of
almost no utility unless the system is halted the moment a file is
accidentally deleted - it borders on institutionalized lunacy for a
time-sharing system [eg. how many files will be lost when you halt?].)

It really has to be something like: mark the file name so it becomes
invisible on 'rm' and unmark it on 'unrm'.  The "file" (ie. inode and
blocks) is really, really still there; it's just the name which has
become invisible (oops, invisible files is a whole different but
related topic :-) as far as the user is concerned.  It shouldn't even
be wholly invisible - I would certainly want to be able to ask 'ls' to
list all undelete-able file names.  Sometimes it takes some user
interface magic to make this correctly accepted in the user's mind
(oh, like the 'ls -D' command refusing to list anything but
undelete-able files, clearly segmenting them visually.)

There's really a lot of thought that's needed.  Here's another... do
you back up deleted files?  But what if the system goes down just as
they were about to legally undelete, and you would have had it on the
backup that finished ten minutes ago?  Assume your goal in life is not
to save mag tape or punish users for their foibles but to provide a
reliable system.

In fact, even in a "real" implementation your point still stands.
People do expect that the ability to undelete means whatever is most
convenient for them no matter what you tell them (I ran the TWENEX
system here.)  They'll delete and delete, discover they're over file
quota limit because of the deleted file space, expunge to free the
space, work some more, then try to undelete.  If you're *lucky* they'll
admit they expunged; most will stand there dumbly on the assumption
that if you think the system did it to them you'll work a little harder
to get them their file back (unfortunately that's often true, we're
only human also.)

	-B
pdg@ihdev.ATT.COM (Joe Isuzu) (07/23/87)
It seems that the general consensus is to start adding stuff to the
kernel to improve directory management (deleted, invisible files as in
TOPS-20).  Many users simply have aliases set up for rm to actually
move the files to a .deleted directory or some other such scheme where
they can be unrm'd, expunged or cleaned up by an operator driven
daemon, just like in TWENEX. This was the general scheme at one
institution I attended, which was primarily TOPS-20 based, but was
switching to smaller UNIX systems. The advantages here were that
naive users had this ability more or less transparently, and system
utilities worked as they always did with no modifications. The
disadvantage was that system utilities worked as they always did :-).
Try explaining to a new UNIX user, "well the first time it was removed
with rm which doesn't really remove it, although the *real* rm *does*
really remove it, you used rmdir which is a different dog
altogether."  It seems though that this ability (to be able to
undelete a file) needs to be *very* closely related to generation
numbers, so you don't rm a file, try to create a new version and get
told 'file already exists' - assuming the 'removed' files are kept in
the existing directory.
I can see it now: "But I just removed it
(whiney voice)". I'd be interested in hearing how the GNU folks are
resolving this issue. I think if you really were trying to put this
in the kernel properly, the file system switch would be the only way
to go (so existing applications which do much file creation/deletion
would not be affected by the additional overhead/disk usage). Between
the FSS and getdents it would not seem too difficult to set up all of
this more or less transparently (open opens the most recent version
for read, increments the gen count and opens a new file for write,
getdents gives only one version, and have ioctls on directories to do
expunge, and an alternate way of opening a directory (read another
name - like 'dirname@') to get all entries).  Anyway, just some random
thoughts before this morning's first caffeine has hit my brain.
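
Purely as illustration of that proposal - none of these calls exist
anywhere; FIOEXPUNGE and the trailing-@ convention are invented names:

	/* hypothetical sketch only */
	#include <fcntl.h>
	#define	FIOEXPUNGE	(('f'<<8)|99)	/* invented ioctl */

	main()
	{
		int fd, dfd;

		fd = open("paper", O_RDONLY);	/* highest live generation */
		close(fd);
		fd = open("paper", O_WRONLY|O_CREAT, 0644); /* new generation */
		close(fd);
		dfd = open(".", O_RDONLY);
		ioctl(dfd, FIOEXPUNGE, (char *)0); /* reap removed generations */
		close(dfd);
		dfd = open(".@", O_RDONLY);	/* all generations visible here */
		close(dfd);
	}
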
Now, invisible files (a la TWENEX). Did anybody really use these? I
always had a PCL-EXEC (wasn't that an *incredible* exec - except for
the parse until you get a syntax error, say 'error on line x' and bomb
compilation scheme) function to have dir get invisible files too. I
really don't think these were too useful, but for the same effect on
UNIX,
	function make_invis {
		mv $1 .$1
	}
(half a :-}) (gosh GNUmacs really should recognize smiley faces and
not give you `mismatched parenthesis' errors)
--
Paul Guthrie "Another day, another Jaguar"
ihnp4!ihdev!pdg -- Pat Sajak
root@hobbes.UUCP (John Plocher) (07/23/87)
Didn't this go round about 8 months ago?  The solutions given then seem
to be usable now, too.  From memory [so be warned that there ARE typos
and whathaveyou here], here is /usr/local/rm:

	#! /bin/sh
	if [ ! -d .kill ]
	then
		mkdir .kill
	fi
	mv $* .kill

also /usr/local/unrm:

	#! /bin/sh
	if [ ! -d .kill ]
	then
		echo "There are no \"removed\" files to restore from"
	else
		# DOES NOT HANDLE WILDCARDS or more than 1 arg
		if [ ! -r .kill/$1 ]
		then
			echo "Sorry, the file " $1 "does not exist any more"
		else
			mv .kill/$1 .
		fi
	fi

In a user's .logout put the command to clean out all .kill files:

	find $HOME -type d -name .kill -exec /bin/rm -fr {} \;

Then state that files can be unrm'd UNTIL the user logs out, and not
any later than that.  This is normal behavior on many systems that use
temp files which go poof when the user is done, and seems to follow
the rule of least astonishment.
--
John Plocher		uwvax!geowhiz!uwspan!plocher
plocher%uwspan.UUCP@uwvax.CS.WISC.EDU
ken@rochester.arpa (Ken Yap) (07/24/87)
No, I'm sorry, count me out of the "general consensus" to add support
for undeleting files in Unix.  I think the kernel is bloated enough as
it is and this is a feature that can be moved into user space.

What I did say was that if you are going to add this feature, then do
it properly.  Half baked hacks that scavenge the free list don't cut it.

	Ken
hedrick@topaz.rutgers.edu (Charles Hedrick) (07/24/87)
pdg@ihdev.uucp mentioned that he couldn't think of any use for
invisible files on TOPS-20.  Invisible files were added for the archive
system.  You want to keep information about the file in the directory,
so that you can do directories of all archived files, move files back
and forth between online and offline status, etc.  On the other hand,
when you do "dir" you don't want to see all your 5-year-old junk.  So
the compromise was that when you archive a file, it stays in the
directory, but is set to be invisible.

Unix also has invisible files, which are used for similar reasons.
They are names beginning with a dot.

On TOPS-20, invisibility is just a bit in the fdb (inode), like
deleted.  gtjfn (open) has an extra option to control whether it is
allowed to open invisible files.  The only use I know of for invisible
files outside the archive system is scaring users.  If somebody leaves
a job logged in in a public room, people sometimes set all their files
invisible.  (At less friendly installations, people are known to delete
them.)
allbery@ncoast.UUCP (Brandon Allbery) (07/29/87)
As quoted from <746@sol.ARPA> by ken@rochester.arpa (Ken Yap):
+---------------
| What I did say was that if you are going to add this feature, then do
| it properly.  Half baked hacks that scavenge the free list don't cut it.
+---------------

A correct *kernel* method should probably start with the
file-generations technique I posted a week ago.

However, what's wrong with rm(3) and unrm(3)?  Leave unlink() as is,
and add *functions* to implement undeleteable files.  These can then be
used by most programs while leaving unlink() as is.  Why force the
kernel to do what user code can -- isn't that the basis for UN*X having
< 100 system calls where many large operating systems have > 1000 (I'm
thinking TOPS-20 specifically)?
--
Brandon S. Allbery, moderator of comp.sources.misc and comp.binaries.ibm.pc
{{harvard,mit-eddie}!necntc,well!hoptoad,sun!cwruecmp!hal}!ncoast!allbery
ARPA: necntc!ncoast!allbery@harvard.harvard.edu  Fido: 157/502  MCI: BALLBERY
<<ncoast Public Access UNIX: +1 216 781 6201 24hrs. 300/1200/2400 baud>>
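
A minimal sketch of the sort of rm(3) being suggested, assuming
4.2BSD's mkdir(2) and rename(2); the function and directory names are
made up for illustration, and there is no overflow checking.  It
renames into a .kill directory in the *same* parent directory, so the
operation never crosses a file system and hard links survive:

	#include <stdio.h>

	/* rm(3): "remove" path by renaming it into .kill beside it */
	int
	rm(path)
	char *path;
	{
		char dest[1024];
		char *base, *slash, *rindex();

		slash = rindex(path, '/');
		base = slash ? slash + 1 : path;
		if (slash) {
			strncpy(dest, path, slash - path);
			dest[slash - path] = '\0';
			strcat(dest, "/.kill");
		} else
			strcpy(dest, ".kill");
		(void) mkdir(dest, 0700);	/* harmless if it already exists */
		strcat(dest, "/");
		strcat(dest, base);
		return rename(path, dest);
	}

unrm(3) would be the same rename in the other direction, and an expunge
daemon or a .logout hook could sweep the .kill directories, as in
John's scheme.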
allbery@ncoast.UUCP (Brandon Allbery) (07/29/87)
As quoted from <13497@topaz.rutgers.edu> by hedrick@topaz.rutgers.edu (Charles Hedrick):
+---------------
| pdg@ihdev.uucp mentioned that he couldn't think of any use for
| invisible files on TOPS-20.  Invisible files were added for the
>...
| control whether it is allowed to open invisible files.  The only use I
| know of for invisible files outside the archive system is scaring
| users.  If somebody leaves a job logged in in a public room, people
| sometimes set all their files invisible.  (At less friendly
+---------------

I protected myself on CWRU20 with a CMD file which popped a message
onto the terminal and then went to sleep until I hit a particular key,
at which point it prompted for a password which would unlock the
terminal.  This actually had mnemonic value, also: long printer check?
Type TAKE 5 and leave!
--
Brandon S. Allbery, moderator of comp.sources.misc and comp.binaries.ibm.pc
{{harvard,mit-eddie}!necntc,well!hoptoad,sun!cwruecmp!hal}!ncoast!allbery
ARPA: necntc!ncoast!allbery@harvard.harvard.edu  Fido: 157/502  MCI: BALLBERY
<<ncoast Public Access UNIX: +1 216 781 6201 24hrs. 300/1200/2400 baud>>
allbery@ncoast.UUCP (Brandon Allbery) (07/29/87)
As quoted from <156@hobbes.UUCP> by root@hobbes.UUCP (John Plocher):
+---------------
| Didn't this go round about 8 months ago?  The solutions given then seem
| to be usable now, too.
+---------------

This has one potential problem, also mentioned as a bug in the
adventure shell: what if you delete two files in two different
directories that have the same basename?  If I rm ~/News/KILL followed
by ~/News/news/admin/KILL, then try to unrm ~/News/KILL, I'll get
~/News/news/admin/KILL instead.

Alternatively, say we have two links (/usr/lib/uucp/palias.dir and
/usr/local/lib/elm/palias.dir) to a large file.  I rm one of them and
it ends up in /u/allbery/.kill... on a different file system.  Not only
is this likely to overwhelm /u, but if I unrm it the linkedness of the
file is lost and /usr is likely to overflow as well.
--
Brandon S. Allbery, moderator of comp.sources.misc and comp.binaries.ibm.pc
{{harvard,mit-eddie}!necntc,well!hoptoad,sun!cwruecmp!hal}!ncoast!allbery
ARPA: necntc!ncoast!allbery@harvard.harvard.edu  Fido: 157/502  MCI: BALLBERY
<<ncoast Public Access UNIX: +1 216 781 6201 24hrs. 300/1200/2400 baud>>
mouse@mcgill-vision.UUCP (der Mouse) (08/03/87)
In article <1826@vax135.UUCP>, whb@vax135.UUCP (Wilson H. Bent) writes:
> In article <23047@sun.uucp> guy%gorodish@Sun.COM (Guy Harris) writes:
>> [stuff about filling up directories and then deleting files]
> The method I use to 'shrink' a directory [is]
>	cd (parent of jumbo)
>	find jumbo -depth -print | cpio -pdlm teensy
>	rm -r jumbo
>	mv teensy jumbo		# just to get the names right!
> Of course, I've yet to find a BSD find which understands "-depth"...

Great poslfit, what's wrong with

	cd (parent of jumbo)
	mkdir teensy
	mv jumbo/* teensy
	rmdir jumbo
	mv teensy jumbo

which works durn near everywhere?  If you have dot files in jumbo you
may have to add

	mv jumbo/.??* teensy

(though that would miss files matching .?).  If you want to be really
safe, you could do

	ls -f -1 jumbo | sed -e '1,2d' | sed -e 's;.*;mv jumbo/& teensy;' | sh

though that strikes me as probably being overkill.

					der Mouse
				(mouse@mcgill-vision.uucp)
sa@ttidca.TTI.COM (Steve Alter) (08/11/87)
Following is a quick list of some of the recently posted methods for
shrinking a large and mostly empty directory:

> find jumbo -depth -print | cpio -pdlm teensy

> mv jumbo/* teensy ; mv jumbo/.??* teensy

> ls -f -1 jumbo | sed -e '1,2d' | sed -e 's;.*;mv jumbo/& teensy;' | sh

(Note that each of the above three must be followed by this:)

> rm -r jumbo ; mv teensy jumbo

Now the problem with all of these is that the new directory has no
guarantee that its mode/ownerships will match the old one!  What is
needed is a general "cpmod" program which would copy modes and/or
ownerships and/or times from one file (or directory) to another.  If
this already exists, could someone let me know which newsgroup it was
in?  (I think I'll cross-post to comp.sources.wanted.)
--
Steve Alter	...!{csun,trwrb,psivax}!ttidca!alter  or  alter@tti.com
Citicorp/TTI, Santa Monica CA  (213) 452-9191 x2541
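
A minimal sketch of such a cpmod, assuming the V7-style utime(2) that
takes a pointer to two time_t values (access, then modification; some
systems declare it as a struct utimbuf with the same layout).  Error
checking is kept to a minimum, and chown will need appropriate
privilege on BSD systems:

	#include <sys/types.h>
	#include <sys/stat.h>
	#include <stdio.h>

	/* cpmod from to: copy mode, owner, group, and times of "from" onto "to" */
	main(argc, argv)
	int argc;
	char **argv;
	{
		struct stat st;
		time_t tv[2];

		if (argc != 3) {
			fprintf(stderr, "usage: cpmod from to\n");
			exit(1);
		}
		if (stat(argv[1], &st) < 0) {
			perror(argv[1]);
			exit(1);
		}
		(void) chmod(argv[2], st.st_mode & 07777);
		(void) chown(argv[2], st.st_uid, st.st_gid);
		tv[0] = st.st_atime;	/* access time */
		tv[1] = st.st_mtime;	/* modification time */
		(void) utime(argv[2], tv);
		exit(0);
	}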
root@hobbes.UUCP (John Plocher) (08/14/87)
Steve Alter writes the following in article <1091@ttidca.TTI.COM> ----
| > find jumbo -depth -print | cpio -pdlm teensy
| > mv jumbo/* teensy ; mv jumbo/.??* teensy
| > ls -f -1 jumbo | sed -e '1,2d' | sed -e 's;.*;mv jumbo/& teensy;' | sh
| (Note that each of the above three must be followed by this:)
| > rm -r jumbo ; mv teensy jumbo
| Now the problem with all of these is that the new directory has no
| guarantee that its mode/ownerships will match the old one!
+----

I usually use something like the following:

	find jumbo -depth -print | cpio -o > /tmp/hold_this
	rm -fr jumbo
	cpio -ivduma < /tmp/hold_this

--
John Plocher		uwvax!geowhiz!uwspan!plocher
plocher%uwspan.UUCP@uwvax.CS.WISC.EDU
howie@cunixc.columbia.edu (Howie Kaye) (08/17/87)
How about

	tar cf - jumbo | ( cd tiny; tar xfp - )

The 'p' switch to tar should preserve ownerships and dates.
------------------------------------------------------------------------
Howie Kaye			howie@columbia.edu
Columbia University		hkaus@cuvma.bitnet
Systems Group			...!seismo!columbia!howie
rbj@icst-cmr.arpa (Root Boy Jim) (08/29/87)
	From: der Mouse <mouse@mcgill-vision.uucp>

	Great poslfit, what's wrong with
		cd (parent of jumbo)
		mkdir teensy
		mv jumbo/* teensy
		rmdir jumbo
		mv teensy jumbo
	which works durn near everywhere?

Does it really?  I thought TPC had problems moving directory names
outside of their parent directory, or have they fixed that already?

	From: Steve Alter <sa@ttidca.TTI.COM>

	Now the problem with all of these is that the new directory has no
	guarantee that its mode/ownerships will match the old one!

Good point.  How about:

	mkdir teeny
	mv jumbo/* teeny
	mv teeny/* jumbo
	rmdir teeny

With the same caveats about the `dotfiles' that der Mouse mentioned.

	(Root Boy) Jim Cottrell	<rbj@icst-cmr.arpa>
	National Bureau of Standards
	Flamer's Hotline: (301) 975-5688
mouse@mcgill-vision.UUCP (der Mouse) (09/11/87)
In article <9054@brl-adm.ARPA>, rbj@icst-cmr.arpa (Root Boy Jim) writes:
> From: der Mouse <mouse@mcgill-vision.uucp>
>>	mkdir teensy
>>	mv jumbo/* teensy
>>	rmdir jumbo
>>	mv teensy jumbo
>> which works durn near everywhere?
> Does it really?  I thought TPC had problems moving directory names
> outside of their parent directory, or have they fixed that already?

Apparently they do.  For once (:-) I posted without being certain of my
facts (and got lots of mail about it, too).  I just assumed that nobody
would be perverse enough to build a mv incapable of such a natural,
necessary operation.

> From: Steve Alter <sa@ttidca.TTI.COM>
>> Now the problem with all of these is that the new directory has no
>> guarantee that its mode/ownerships will match the old one!

For that matter, most of these assume that whoever is performing this
whole operation has write permission in jumbo/..!

> Good point.  How about:
>	mkdir teeny
>	mv jumbo/* teeny
>	mv teeny/* jumbo
>	rmdir teeny
> With the same caveats about the `dotfiles' that der Mouse mentioned.

Wasn't the main point of this whole exercise to shrink jumbo (witness
the names chosen)?  Or was it to sort the names, which, of course, your
suggestion does perfectly well?

					der Mouse
				(mouse@mcgill-vision.uucp)