[net.unix] FSLS - List big files in file system

wedgingt@udenva.UUCP (Will Edgington/Ejeo) (09/24/86)

I have cross-posted this to net.unix, as that's really where this belongs now.

In article <1620002@hpcnoe.UUCP> jason@hpcnoe.UUCP (Jason Zions) writes:
>> >> find / -size +nnn -exec ls -l {} \;
>> >
>> >You can bury the machine this way!!!!  It's vastly more efficient
>> >to use xargs(1) instead.
>> >
>> >	find / -size +nnn -print | xargs ls -l
>> 
>> And for those of you who don't have xargs (I thought such creatures
>> died after v6!), you can do:
>> 
>> 	ls -l `find / -size +nnn -print`
>
>What happens if the find produces more characters than can fit on a command
>line? Things don't work so well; the shell moans about "command line too long"
>or "too many arguments" or some such mumble. That's why xargs still exists in
>System V (well, at least in HP-UX, HP's implementation of System V); it uses
>the smallest number of command invocations to get them all done.
>
>It's amazing how much faster shell scripts run using xargs (as opposed to
>-exec cmd {} \; in a find command); my disk drive doesn't walk across the room
>from all the fork/execs...
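
To make the tradeoff concrete, the two forms look like this (the path and
size here are only examples; -size counts 512-byte blocks on most systems,
so +2048 is roughly a megabyte):

	find / -size +2048 -exec ls -l {} \;	# one fork/exec of ls per match
	find / -size +2048 -print | xargs ls -l	# a few ls's, arguments batched

xargs reads pathnames from its standard input and packs as many as will fit
onto each ls command line, so the fork/execs drop from one per file to one
per command-line-full.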

  If I remember right, the original discussion was trying to find the huge
(10 megabyte plus) file that just swallowed your disk.  I doubt there's more
than one; the first solution above will therefore be *faster* than using xargs
or command substitution (`command`), and it will print the file RIGHT WHEN
IT'S FOUND.  After the one ls that needed to be forked prints it, you can
interrupt the find and remove the offending file.  Command substitution won't
even start the ls until after the find has EXITED; xargs will only start the
ls once it has filled a command line or find has exited.  When you're looking
for one file, the matches had better not fill a command line, or you're in
trouble.  (Though I did have a student consultant start an infinite 'mkdir
dir; cd dir' loop on a 2.9 BSD system once ... the directory chain got so
deep that the unlink call, which traces it all the way back to the root of
the file system, failed; you have to clear the top inode ...)
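
  In command form, the first solution above is something like this (the
threshold is illustrative; at 512-byte blocks, +20480 is about 10 megabytes):

	find / -size +20480 -exec ls -l {} \;

Each match is ls'd the moment find reaches it, so you can interrupt with
DEL (or ^C) as soon as the culprit shows up.
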
  Now, if you're trying to find and remove the thousands of files someone
created that left the file system without a single free inode, that's where
xargs earns its keep (see the sketch below).  But that's never happened on
any of the systems here, despite the "know-it-all-but-let's-experiment"
student consultants we have (I know; I started as one :-).
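
  For that many-files case, a sketch (the directory and age test here are
made up):

	find /usr/spool/junk -type f -mtime -1 -print | xargs rm -f

One rm per command-line-full of names, instead of one fork/exec of rm for
each of several thousand files.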
-- 
------------------------------------------------------------------------
|"Input!"|"Number 5 is Alive!!"|"Beautiful software"|"Not disassemble!!"
------------------------------------------------------------------------
Will Edgington|Computing and Information Resources, University of Denver
(303) 871-2081|BusAd 469, 2020 S. Race, Denver CO 80208 (work)|"Input!!"
(303) 759-9071|5001 E. Iliff, Denver CO 80222 (home)|"Where's Input?!!?"
{{hplabs,seismo}!hao,ucbvax!nbires,boulder,cires,cisden}!udenva!wedgingt
COMING SOON: wedgingt@eos.cair.du.edu, wedgingt@eos.BITNET ( == udenva )

wcs@ho95e.UUCP (#Bill_Stewart) (09/26/86)

In article <1986@udenva.UUCP> wedgingt@udenva.UUCP (Will Edgington/Ejeo) writes:
.......<discussion of find ....... -print | xargs foo     vs
	find ..... -exec foo {} \;
........
>  If I remember right, the original discussion was trying to find the huge
>(10 megabyte plus) file that just swallowed your disk.  I doubt there's more
>than one; the first solution above will therefore be *faster* than using xargs
>.....
>  Now, if you're trying to find and remove the thousands of files someone
>created that left your inode table empty ....  But that's never happened on
>any of the systems here, despite the "know-it-all-but-let's-experiment"
>student consultants we have (I know; I started as one :-).

Just by coincidence, my /usr/spool disk ran out of inodes yesterday.  Of the
2000 files that had arrived in the last two days (find -mtime -2), 93% were
netnews ... and it's not the first time.  (Yes, sometime I'll get around to
mkfs'ing with more inodes, but....)
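
For keeping an eye on it, df's -i option (BSD, at least) reports free inodes
alongside free blocks, and a find piped through wc counts the recent
arrivals:

	df -i /usr/spool
	find /usr/spool -mtime -2 -print | wc -l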
-- 
# Bill Stewart, AT&T Bell Labs 2G-202, Holmdel NJ 1-201-949-0705 ihnp4!ho95c!wcs