[net.followup] More directory junk

smk (12/29/82)

	Sure, you could make a shell file for uusquish, but it's not a
good idea for squish, for two reasons:
1.	More checking is done in the C code, and
2.	The squish implementation is more flexible and faster.

sjb (12/29/82)

Well, what's the basic premise of UNIX?  It's to provide an operating
system whereby small, powerful utilities can be grouped together to
form programs/routines/what-have-you of their own.  If we have to write
a separate C program for every job rather than using existing utilities
the way they're supposed to be used, we defeat this purpose.  If the reason
for this is because the C programs are faster, more reliable, etc., it
just tells me that we have to make the smaller programs faster and more
reliable.  Otherwise, we're going to be reinventing the wheel forever.

mark (12/29/82)

A few notes are in order.

Many systems do not have cpio, since it's a USG program.  4.1BSD
and V7 do not have it, but have tar instead.  Back to back tars
can be used to copy hierarchies in place of a find|cpio.  tar is
also present on USG systems.
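
For anyone who hasn't seen it, the back-to-back tar idiom looks
roughly like this (a sketch only; the directory names here are made
up):
	mkdir /usr/spool/newdir
	cd /usr/spool/olddir
	tar cf - . | (cd /usr/spool/newdir; tar xf -)
The first tar writes the hierarchy to its standard output; the
second, running in a subshell in the destination directory, reads it
back and recreates the tree there.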

Methods that compact directories such as /usr/spool/uucp by moving
the directory out of place, copying the hierarchy into a fresh
directory, and removing the old copy will fail if the filesystem is
nearly full and the spool directory is large.

A simple method I often use by hand (I don't have a debugged shell
script) looks something like this:
	cd /usr/spool
	mv uucp ouucp
	mkdir uucp
	mv ouucp/* uucp
	rmdir ouucp
This is not recursive, but is simple, fast, and doesn't require much
scratch space.  If you have a USG system and the directory contains
subdirectories, it won't work since mv won't move directories.
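
A rough script form of the same steps might look like this (an
untested sketch; it assumes the directory to compact is named as the
argument and contains no subdirectories):
	# compact a directory in place: move it aside, remake it,
	# and move the contents back down one level
	dir=$1
	mv $dir $dir.old || exit 1
	mkdir $dir || exit 1
	mv $dir.old/* $dir
	rmdir $dir.old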

barmar (12/30/82)

You can't do everything with pre-existing software.  In any case,
shell scripts will never catch up to compiled programs; the shell
is a very powerful interpreter, and it does a great deal of work
before it ever gets to executing the requested program.  It has to
set up pipes and I/O redirection, and it has to search directories
for every command.  The shell simply can't compete, and for a
large job that is run often, it does not pay.

Note, however, that the shell script that started this discussion
was five lines long, and thus does not spend too much time in the
shell.  There is also the situation where the tools being used do
much more than is necessary; in that case you end up wasting
computrons too, although that may be taken as a sign that the tool
is not well-designed.
					barmar@mit-multics