[comp.unix.xenix] Request for human interface design anecdotes

caf@omen.UUCP (Chuck Forsberg WA7KGX) (10/27/87)

In article <1325@desint.UUCP> geoff@desint.UUCP (Geoff Kuenning) writes:
:> just create a file called "-i" in your directory that you want protected!
:> then "rm *" expands to "rm -i file1 file2 file3 ..."
:> (unless you have other files beginning with weird characters)
:
:What a typically Unix solution.  Even to the flaws:  you have to put up
:with an ugly file in your directory, and it doesn't work if you
:type "rm test *".

One other flaw that can be circumvented: it takes up an inode.
My "-i" has many links to it, so each copy takes up only a directory
slot, plus one inode total.
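
For the curious, here is the trick in a scratch directory (a sketch;
the ./ keeps touch itself from parsing -i as an option, and LC_ALL=C
pins the glob's sort order so -i lands first):

```shell
# Demonstrating the "-i" guard file in a scratch directory.
export LC_ALL=C            # make the glob's sort order predictable
dir=$(mktemp -d) && cd "$dir"
touch ./-i file1 file2     # ./ keeps touch from parsing -i as an option
set -- *                   # the very expansion "rm *" would produce
printf '%s\n' "$*"         # -i sorts first, so rm would see it as a flag
```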

hunt@spar.SPAR.SLB.COM (Neil Hunt) (10/28/87)

In article <1325@desint.UUCP> geoff@desint.UUCP (Geoff Kuenning) writes:
> just create a file called "-i" in your directory that you want protected!
> then "rm *" expands to "rm -i file1 file2 file3 ..."
> (unless you have other files beginning with weird characters)

What about when you have a file called '-a' in your directory as well!
Seems to me that appropriate use of write protections is a better solution.
Failing that, how about an alias

% alias rm 'rm -i'

Neil/.

djones@megatest.UUCP (Dave Jones) (11/06/87)

An old version of emacs we used to use created backup files tagged with
".BAK".   One day I quickly typed "rm *.BAK", or so I thought.  To
my horror, I looked at the command line and saw, "% rm *>BAK".  The
greater-than is a capital period, and I depressed the shift key just
a fraction of a second early.  The system was industriously deleting all
my files and piping the (empty) listing to a new file called BAK.

chip@ateng.UUCP (Chip Salzenberg) (11/11/87)

In article <1621@megatest.UUCP> djones@megatest.UUCP (Dave Jones) writes:
>An old version of emacs we used to use created backup files tagged with
>".BAK".   One day I quickly typed "rm *.BAK", or so I thought.  To
>my horror, I looked at the command line and saw, "% rm *>BAK".

I had a similar disaster with an editor that creates backups of the form
",filename".  I missed the comma and typed "rm *".  I now use a (safe!)
alias to do this deletion:

	alias b 'rm -f ,*'
-- 
Chip Salzenberg         "chip@ateng.UUCP"  or  "{codas,uunet}!ateng!chip"
A T Engineering         My employer's opinions are not mine, but these are.
   "Gentlemen, your work today has been outstanding.  I intend to recommend
   you all for promotion -- in whatever fleet we end up serving."   - JTK

chris@mimsy.UUCP (Chris Torek) (11/13/87)

In article <1621@megatest.UUCP> djones@megatest.UUCP (Dave Jones) writes:
>... One day I quickly typed "rm *.BAK", or so I thought.  To
>my horror, I looked at the command line and saw, "% rm *>BAK". ...
>The system was industriously deleting all my files and piping the
>(empty) listing to a new file called BAK.

Which, by the way, was also removed by rm.  The shells (csh, sh;
I have not tried ksh) perform `<' and `>' redirection before `*'
expansion.

	% cat * > together

will often fill up a file system, since `*' might expand to `ch1
ch2 ch3 ch4 index together'.  cat eventually starts copying from
the beginning of `together', appending to its end, which provides
more text for cat to read, which writes more, which provides more,
which . . . .
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7690)
Domain:	chris@mimsy.umd.edu	Path:	uunet!mimsy!chris

chris@mimsy.UUCP (Chris Torek) (11/13/87)

In article <9332@mimsy.UUCP> I wrote
>... The shells (csh, sh; I have not tried ksh) perform `<' and `>'
>redirection before `*' expansion.

Correction: only `csh' does this.

>	% cat * > together

This is also a bad example, as `cat' explicitly checks each input
file against cat's standard output, to prevent loops.  Using
something like `soelim' that does not have such checks will cause
such a loop.
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7690)
Domain:	chris@mimsy.umd.edu	Path:	uunet!mimsy!chris

dlm@cuuxb.ATT.COM (Dennis L. Mumaugh) (11/13/87)

This  request  has  spawned  many  stories  involving  rm  *  (or
variants)  that were not intentional.  When we first started with
unix four people managed to destroy things within  the  same  day
that  way.  In my case, I had worked all night to build a new and
wondrous piece of software.  It all worked, etc.  So I was  doing
the  final  clean  up:  I  typed rm *.c instead of rm *.o and the
whole project went down the drain!

Shortly thereafter we made a change to the  shell:  in  the  code
that did global expansions (*,?) we set a flag and if the command
name was "rm* or del*" we  said  confirm:  and  printed  out  the
expanded list of files.

Even with this aid we still had people screw up but not nearly as
often.

Then there was the time we did rm -rf ../*

The moral of this is that the command interpreters need to be
modified to request confirmation of potentially fatal things such
as rm *, and it must be in the command interpreter, as the command
itself can't know whether the list is an expanded list or an
individually entered list.

After that is fixed we can talk  about  Jim  Gillogly's  spelling
corrector shell.
-- 
=Dennis L. Mumaugh
 Lisle, IL       ...!{ihnp4,cbosgd,lll-crg}!cuuxb!dlm

dhb@rayssd.RAY.COM (David H. Brierley) (11/16/87)

If users removing all of their files by inadvertently typing "rm *" is
a habitual problem at your site, why not make the command default to
interactive mode?  If you have source this is a trivial task and if you
don't have source it's not much harder.  Simply move the real rm
command to some new secret place, for example: /bin/.hidden/rm, and
then make /bin/rm be a shell script which invokes the real rm with the
"-i" flag.  If you wanted to be real fancy you could add a new option,
say "-I", which would disable interactive mode.  Another possibility
would be to have the shell script enable interactive mode if you try to
remove more than some pre-determined number of files.  That way you
could still type "rm foo" without having to use interactive mode but
"rm foo *" would put you into interactive.
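
A sketch of such a wrapper, written as a Bourne shell function so it is
easy to try.  REAL_RM stands in for the relocated binary (defaulting to
/bin/rm here for illustration, where the post uses /bin/.hidden/rm),
and the threshold of 3 files is an arbitrary stand-in for the post's
"pre-determined number":

```shell
# Hypothetical wrapper in the spirit of the post: -I disables
# interactive mode, and more than THRESHOLD files forces it on.
REAL_RM=${REAL_RM:-/bin/rm}    # the post would use /bin/.hidden/rm
THRESHOLD=3                    # "some pre-determined number of files"
safe_rm() {
    case "$1" in
    -I) shift; "$REAL_RM" "$@"; return ;;   # new flag: skip -i
    esac
    if [ "$#" -gt "$THRESHOLD" ]; then
        "$REAL_RM" -i "$@"                  # many files: confirm each
    else
        "$REAL_RM" "$@"
    fi
}
```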

Of course, nothing you can do will ever solve the problem completely
since even the most expert user will occasionally make mistakes.  Just
the other day I wiped out a week's worth of work by typing "cc -o pgm.c"
on my AT&T unix-pc.  I had meant to use -O to invoke the optimizer.
Instead, it optimized away all of my code by giving me a message to the
effect of "no source file" and happily creating a zero length output
file called "pgm.c".  I was not at all amused.  I now have a shell
script in place of cc which checks all its arguments for consistency
(i.e. you can't say "-o pgm.c").
-- 
	David H. Brierley
	Raytheon Submarine Signal Division
	1847 West Main Road
	Portsmouth, RI 02871

Phone:		(401)-847-8000 x4073
Internet:	dhb@rayssd.ray.com
Uucp:		{cbosgd, gatech, linus, mirror, necntc, uiucdcs} !rayssd!dhb

andy@rocky.STANFORD.EDU (Andy Freeman) (11/17/87)

In article <1402@cuuxb.ATT.COM> dlm@cuuxb.UUCP (Dennis L. Mumaugh) writes:
[We're talking about "rm *".  Guess why I have a csh alias for rm that
 always asks about every file.  When I'm absolutely sure that I want to
 delete a number of files and I don't want to answer questions, I pipe
 the whole command off to sh.]

>The moral of this is that the command  interpreters  need  to  be
>modified to request confirmation of potentially fatal things such
>as rm * and it must be in the command interpreter as the  command
>itself  can't  know  whether  the  list  is an expanded list or a
>individually entered list.

There are far more general solutions.  Most people have trash cans.
One can recover their contents for some time, but they go away at
well defined times.  Too bad unix doesn't handle generations in
the file system (rcs and friends are clever archivers - they are
still useful in this context).  Obsolete versions can be marked
"deleted" so they aren't normally visible, but they can be retrieved.
Then it makes sense to have a file deleter that tells you what it
has done, just to reduce the chance of surprise.  (Yes, there should
be bozo mode for people who don't want to know or for programs that
think they know what they are doing.)

One should never simplify more than necessary.

-andy
-- 
Andy Freeman
UUCP:  {arpa gateways, decwrl, sun, hplabs, rutgers}!sushi.stanford.edu!andy
ARPA:  andy@sushi.stanford.edu
(415) 329-1718/723-3088 home/cubicle

wcw@psuhcx (William C Ward) (11/17/87)

In article <1689@rayssd.RAY.COM> dhb@rayssd.RAY.COM (David H. Brierley) writes:
>If users removing all of their files by inadvertently typing "rm *" is
>a habitual problem at your site, why not make the command default to
>interactive mode? 

The rm * disaster catches not only the absent-minded, but also the hasty
and uncoordinated.  I once mistyped a command like:
	rm *&foo
instead of rm *foo (*& is a double-strike of adjacent keys!) and the
machine obediently and hastily removed all files in the directory via a 
background process.  My screams were audible many doors down the hall as
I looked helplessly at the screen.

What I have done to lessen future disasters of this kind is to insert the
following crontab entry for other users:
# Keep second copies of recent source files (*.c, *.f, *.h) in /tmp
30 * * * * nice -10 find /usr/usr -mtime -1 -name '*.[cfh]' -exec cp {} /tmp \;
# Get rid of old /tmp files
0 2 * * * find /tmp -atime +4 -exec rm -f {} \;

If incremental dumps are done at least every 4 days, this means that
most source development work that can be lost is one hour's worth, if
your disk doesn't crash entirely.  The extra load on a small system 
with a little extra space and 10 or 20 users is pretty negligible, since
only files which have been modified in the last hour are copied.  If
security is a concern, the backup files (owned by root) can be set to
600 mode.  Moreover, it protects against `generic' disasters (rm, cp,
cc -o, or foolish edits).

This has saved me more than once now!
Bill Ward			Bitnet:	WCW@PSUECL
Noise Control Laboratory	UUCP:	{gatech,rutgers,..etc.}!psuvax1!ncl!wcw
The Penn. State University	USnail:	157 Hammond Bldg.;
Fone:		(814)865-7262	University Park, PA 16802

gwyn@brl-smoke.UUCP (11/17/87)

In article <1689@rayssd.RAY.COM> dhb@rayssd.RAY.COM (David H. Brierley) writes:
>If users removing all of their files by inadvertently typing "rm *" is
>a habitual problem at your site, why not make the command default to
>interactive mode?

Please don't fuck with the standard commands.  If you're going to
change the semantics, give it a new name and retain the old one
for applications that expect the documented semantics.

roy@phri.UUCP (Roy Smith) (11/17/87)

In article <1689@rayssd.RAY.COM> dhb@rayssd.RAY.COM (David H. Brierley) writes:
> Just the other day I wiped out a weeks worth of work by typing
> "cc -o pgm.c" on my AT&T unix-pc.

	This may sound harsh, but I really have little sympathy in this
case.  That zapping a source file wipes out a week's worth of work implies
that you don't make daily backups.  Even on a PC, doing backups should be
routine every day; there really is little excuse for not doing so.

	Things like emacs's "~" backup files (I'm not familiar with other
editors; I assume this feature is available in vi, etc, as well) mitigate
the damage from "rm *.c" instead of "rm *.o", and similar disasters (at the
cost of some wasted disk space), but daily backups are really the bottom
line.  In fact, I have given serious thought to running incremental
disk-to-disk dumps several times a day here to narrow the window of
vulnerability from a whole day to a few hours.  Yes, I know dumps on live
file systems don't always work, but it's better than not doing it at all.
-- 
Roy Smith, {allegra,cmcl2,philabs}!phri!roy
System Administrator, Public Health Research Institute
455 First Avenue, New York, NY 10016

bak@csd_v.UUCP (11/18/87)

In article <763@rocky.STANFORD.EDU>, andy@rocky.STANFORD.EDU (Andy Freeman) writes:
> There are far more general solutions.  Most people have trash cans.
> One can recover their contents for some time, but they go away at
> well defined times....

I use a version of rm adapted from Wizard's Grabbag in UNIX/XENIX world.
It simply prepends '#' to the file name if no switches are listed on
the command line.  Thus

		$ rm foo      

creates a file #foo, while

		$ rm -[i|f|r] foo

all work normally.  Since # is the shell comment character it is very
hard to unintentionally delete files with names beginning with it.
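
A reconstruction of such an rm, as a function (the original Wizard's
Grabbag script isn't quoted in the post, so the details here are
guesses):

```shell
# Sketch of the '#'-prepending rm: no switches means rename, any
# switch falls through to the real command.
hashrm() {
    case "$1" in
    -*) /bin/rm "$@"; return ;;     # -i, -f, -r etc. work normally
    esac
    for f in "$@"; do
        mv "$f" "$(dirname "$f")/#$(basename "$f")"
    done
}
```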

My crontab contains the line

10 3 * * 0,2,4,6 find / \( -name '#*' -o -name 'tmp.*' -o -name '*.tmp' -o -name 'temp.*' -o  -name '*.temp' \) -mtime +3 -exec rm -f {} \;

which deletes files beginning with '#' which have been unmodified for 3 days.

If disk space is a problem you can cut down the -mtime value.  This script 
has saved me grief on more than one occasion.
-- 
Bruce Kern -- Computer Systems Design, 29 High Rock Rd., Sandy Hook, Ct. 06482
uunet!swlabs!csd_v!bak

ustel@well.UUCP (Mark Hargrove) (11/18/87)

> In article <1402@cuuxb.ATT.COM> dlm@cuuxb.UUCP (Dennis L. Mumaugh) writes:
> [We're talking about "rm *".  Guess why I have a csh alias for rm that
>  always asks about every file.  When I'm absolutely sure that I want to
>  delete a number of files and I don't want to answer questions, I pipe
>  the whole command off to sh.]

In a similar vein, we have a shell function defined in /etc/profile
for our Bourne Shell users:

rm(){
	if [ ! -d /usr/tmp/$LOGNAME ] ; then
		mkdir /usr/tmp/$LOGNAME
	fi
	mv $* /usr/tmp/$LOGNAME
}

Then we have /usr/lbin/reallyrm linked to /bin/rm for when
you really mean it.

A once a week cron script cleans out /usr/tmp right AFTER a backup.

This DOESN'T fix the ol' slip of the fingers that results in
reallyrm * .o  <---you only see the space AFTER you hit return ;-)
but it HAS saved the day enough times to make it worth the 10 minutes
it took to implement.

Mark Hargrove
U.S. TeleCenters
{backbones}!hplabs!well!ustel

clif@chinet.UUCP (11/18/87)

I dunno if this is quite a human interface tale of woe, but...
  I recently lost my hd0 to a power supply problem.  
  No problem, says I; once I fixed the power supply, I have a dump level
0 backup from one month back, and a crontab entry that does a dump level
2 of hd0 to a file on hd1 each morning at 06:00.
  After restoring things from floppy, it was somewhat after midnight, and
I decided to complete the task the following day.
  At 06:00, right on schedule, the machine did a level 2 dump, over the
file of good data on hd1.  Pow, in one swell foop my clever method of
making sure I couldn't lose any data had lost me a month's worth.

  Moral:  I dunno.  Maybe don't leave the machine running unattended
until you've completely fixed things up.  

-- 
------------------------------------------------------------------------
My Opinions are my own. I can't imagine why anyone else would want them.
Clif Flynt	ihnp4!chinet!clif
------------------------------------------------------------------------

rapaport@sunybcs.uucp (William J. Rapaport) (11/18/87)

After wiping out one too many directories, I aliased rm to:

'mv \!:1 #\!:1'

Now, it is impossible for me to execute:  rm *

It slows me down a bit when I do want to rm lots of stuff, but the price
is well worth the insurance.

jec@nesac2.UUCP (John Carter ATLN SADM) (11/18/87)

In article <3032@phri.UUCP>, roy@phri.UUCP (Roy Smith) writes:
> In article <1689@rayssd.RAY.COM> dhb@rayssd.RAY.COM (David H. Brierley) writes:
> > Just the other day I wiped out a weeks worth of work by typing
> > "cc -o pgm.c" on my AT&T unix-pc.
> 
> 	This may sound harsh, but I really have little sympathy in this
> case.  That zapping a source file wipes out a week's worth of work implies
> that you don't make daily backups.  Even on a PC, doing backups should be
> routine every day; there really is little excuse for not doing so.

My multi-user systems get daily backups - my PC gets infrequent
backups, except for some critical items (my LAN database).

However, in the original case, it appears that the  unix-pc has an
old and rather braindead compiler - the ones I use (DEC 11/70, AT&T
3B2, 3B5) respond to 'cc -o file.c' with 'would overwrite source'
and then abort.  Getting 'cc -o' instead of 'cc -O' is very easy.
-- 
USnail: John Carter, AT&T, Atlanta RWC, 3001 Cobb Parkway, Atlanta GA 30339
Video:	...ihnp4!cuea2!ltuxa!ll1!nesac2!jec    Voice: 404+951-4642
(The above views are my very own. How dare you question them? :-)

djones@megatest.UUCP (Dave Jones) (11/21/87)

in article <1689@rayssd.RAY.COM>, dhb@rayssd.RAY.COM (David H. Brierley) says:
> If users removing all of their files by inadvertently typing "rm *" is

 ...

> Of course, nothing you can do will ever solve the problem completely
> since even the most expert user will occasionally make mistakes.  Just
> the other day I wiped out a weeks worth of work by typing "cc -o pgm.c"
> on my AT&T unix-pc.  I had meant to use -O to invoke the optimizer.
> Instead, it optimized away all of my code by giving me a message to the
> effect of "no source file" and happily creating a zero length output
> file called "pgm.c".  I was not at all amused.  I now have a shell
> script in place of cc which checks all its arguments for consistency
> (i.e. you can't say "-o pgm.c").


I guess I had been programming about two months when it occurred to me
that a program should always open all the input-files that it can
before it opens ANY output-files.  Somebody forgot to tell the writer of
your cc.  Sigh.  (When output is going to a disc-file, a program
should write it first to a temporary, then if there is no error, move it
to the real place.)

It is interesting that the same principle can apply to microprocessor
hardware:  instructions which read all their inputs, and then write one
output can be restarted from the beginning after a page-fault at any
step in the instruction.  The T.I. 990 microprocessor line had some
instructions which were not like that.  They made it hard to upgrade
to virtual memory when all the competitors did.  So far as I know, the
990 is pretty much a dinosaur now.

msb@sq.UUCP (11/21/87)

> The rm * disaster catches not only the absent-minded ...

I thought it was about time someone expressed the opposite point of view.

If I type "rm *", it is because I want to remove all the files.  No, not
all *my* files.  All *the* files that I still have write permission on,
that are in the current directory.  Usually no more than about 20 of them.
In short, the proper UNIX* flavored method for protecting important files
from "rm" is to turn off the write permission bit.

Now, if you want to talk about human interface disasters and "rm" ...
Tell me how come "rm ... &" causes the -f flag to be assumed, and thus
removes the write-protected files after all?  Write-protecting the directory
stops it, but this is often not feasible.  I think the gods nodded on that one.

Mark Brader, utzoo!sq!msb, msb@sq.com		C unions never strike!

*"UNIX is a trademark of Bell Laboratories" is a religious incantation.
  That it no longer reflects reality is a bug in reality.

allbery@ncoast.UUCP (11/21/87)

As quoted from <3032@phri.UUCP> by roy@phri.UUCP (Roy Smith):
+---------------
| In article <1689@rayssd.RAY.COM> dhb@rayssd.RAY.COM (David H. Brierley) writes:
| > Just the other day I wiped out a weeks worth of work by typing
| > "cc -o pgm.c" on my AT&T unix-pc.
| 
| 	This may sound harsh, but I really have little sympathy in this
| case.  That zapping a source file wipes out a week's worth of work implies
| that you don't make daily backups.  Even on a PC, doing backups should be
| routine every day; there really is little excuse for not doing so.
+---------------

NO REASON?!  When it takes 50 floppies to back up the HD, there is VERY MUCH
a reason.  (Tape?  You got $1500 free to give me for a tape drive?)  I back
up my home directory, and if the disk crashes I just do a full reinstall.
This is no slower than reloading lots of disks....

For the "rm" problem, I think I've got a solution.  The idea comes from a
cross between existing "rm/unrm" programs and fsck, and deals with links
as well.

(1) For every mounted filesystem PLUS the root, create a directory called
"wastebasket" or some such.

(2) The program "del" (NOT "rm" -- you'll screw up programs which invoke
rm via system(), such as the System V spooler) links a file into the
wastebasket directory for a filesystem by its inode number, and writes a
line into an index file consisting of inum, path, and date and time.  Maybe
also the user who did it.

(3) The program "undel" links the file back out of the wastebasket to its
original path, via the index.

(4) A program "expdel" (expunge deleted files) uses the index to choose
files del'ed more than some specified or default time ago and unlinks them.

By using rename() under BSD or SVR3, or using root privs under SVR2 or older,
this can be generalized to directories as well, giving a safe rmdir as well.

Note that this retains all links (except symbolic ones, but that's part and
parcel of the problems with a symlink -- not to start THAT war again, but
there isn't a whole lot to be done about it), and the expunge process does
not have to search every user's home directory either.  The result is a
reversible rm which doesn't have any of the drawbacks of current ones.
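
A bare-bones sketch of del and undel along these lines; the wastebasket
location, the index format, and the helper names are all assumptions,
and expdel plus the per-filesystem bookkeeping are left out:

```shell
# "del": link the file into the wastebasket by inode number, log
# inum, absolute path, and date in an index, then unlink the original.
WASTE=${WASTE:-/wastebasket}
del() {
    mkdir -p "$WASTE"
    for f in "$@"; do
        inum=$(ls -i "$f" | awk '{print $1}')
        abs="$(cd "$(dirname "$f")" && pwd)/$(basename "$f")"
        ln "$f" "$WASTE/$inum" &&
            printf '%s %s %s\n' "$inum" "$abs" "$(date)" \
                >> "$WASTE/.index" &&
            /bin/rm -f "$f"
    done
}
# "undel": link the newest matching entry back to its original path.
undel() {
    inum=$(awk -v p="$1" '$2 == p { print $1 }' "$WASTE/.index" | tail -1)
    [ -n "$inum" ] && ln "$WASTE/$inum" "$1" && /bin/rm -f "$WASTE/$inum"
}
```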
-- 
Brandon S. Allbery		      necntc!ncoast!allbery@harvard.harvard.edu
{hoptoad,harvard!necntc,{sun,cbosgd}!mandrill!hal,uunet!hnsurg3}!ncoast!allbery
			Moderator of comp.sources.misc

dave@onfcanim.UUCP (11/22/87)

In article <3032@phri.UUCP> roy@phri.UUCP (Roy Smith) writes:
>
>In fact, I have given serious thought to running incremental
>disk-to-disk dumps several times a day here to narrow the window of
>vulnerability from a whole day to a few hours.  Yes, I know dumps on live
>file systems don't always work, but it's better than not doing it at all.

There is an even better way.

We run a "backup daemon", originally written by Ciaran o'Donnell at Waterloo,
and still in use there, which is called from the crontab (every hour in
our case) to scan a list of filesystems looking for files that were
changed since it was last run.  When it finds one, and it isn't too large
and its name doesn't pattern-match a list of "not worthwhile" names like
"*.out", it copies it into a backup filesystem.

If the original filename was /u/dave/film.c, the copy will be named
/backup/u/dave/film.c/Nov20-19:01.  If I change the file again, it will
be backed up again an hour later, with a filename that reflects the changed
time or date.  Then, when I trash a file through carelessness, I have
a whole "history" of backup copies to go back through, so even if I introduced
a bug 5 hours ago, I can generally get back the code before that.
And I don't have to run "restore" to look for it; I just chdir to
/backup/u/dave/film.c and look around.

The /backup filesystem must be dedicated to the use of the backup program,
since it keeps it from filling up by deleting the oldest files as necessary
to make room for the new ones.  We use a 30-Mb partition, which seems to
keep stuff around for about a month on a system with 4 people writing code.

The only way I still lose files is if I clobber them within the first hour
of working on them (often it's within the first 2 seconds when it happens!)
and the file hadn't been touched for 2 months before that, so all old
copies have been deleted.  So then I have to get out the tapes.
But it works most of the time, gives me a backup every hour of a file that
I am changing frequently, and requires no work on my part at all.
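
A rough find(1) approximation of one pass of such a daemon.  The
/backup layout and the Nov20-19:01 name come from the post; -mmin is a
later GNU find extension, and everything else is an assumption:

```shell
# One hourly pass: copy .c files modified in the last hour into a
# per-file directory under the backup tree, named by date and time.
backup_pass() {
    SRC=$1 BACKUP=$2
    find "$SRC" -type f -name '*.c' -mmin -60 2>/dev/null |
    while read -r f; do
        dest="$BACKUP$f"               # e.g. /backup/u/dave/film.c
        mkdir -p "$dest"
        cp "$f" "$dest/$(date +%b%d-%H:%M)"
    done
}
```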

cameron@elecvax.eecs.unsw.oz (Cameron Simpson "Life? Don't talk to me about life.") (11/23/87)

In article <1402@cuuxb.ATT.COM>, dlm@cuuxb.ATT.COM (Dennis L. Mumaugh) says:
| After that is fixed we can talk  about  Jim  Gillogly's  spelling
| corrector shell.

I once used something calling itself `nsh' on a System V machine, and typed
	$ cd thnig
and thought "bother, I meant `thing'" and was then disconcerted when it said
	path/thing
	$
back at me. It had fixed the transposed characters and dropped me in the right
spot! Hopefully it only happened in interactive mode, but it was very
disconcerting.
	- Cameron Simpson

nortond@mosys.UUCP (Daniel A. Norton) (11/23/87)

On the version of Unix V.3 here (CTIX), when a new user enters a
password he/she will invariably choose a password of less than
six characters, to which the system replies:

	Password is too short - must be at least 6 digits

Fortunately, they do not usually notice the word "digits" (as
opposed to characters).  Unfortunately, when they attempt to
satisfy the program, it usually replies:

	Password must contain at least two alphabetic characters and
	at least one numeric or special character.

In other words, the first "help" message was not specific enough
about the password requirements.  I would not expect a BNF description
of what to type in here; we must assume that the user has _some_
intuition.  But seriously, folks.
-- 
Daniel A. Norton				nortond@mosys.UUCP
c/o Momentum Systems Corporation	     ...uunet!mosys!nortond
2 Keystone Avenue
Cherry Hill, NJ   08003 			609/424-0734

jfh@killer.UUCP (11/24/87)

In article <1987Nov21.014754.19660@sq.uucp>, msb@sq.UUCP writes:
> > The rm * disaster catches not only the absent-minded ...
> 
> I thought it was about time someone expressed the opposite point of view.
> 
> If I type "rm *", it is because I want to remove all the files.  No, not
> 
> Mark Brader, utzoo!sq!msb, msb@sq.com		C unions never strike!

I cast my vote for doing the remove.  I'd also like rm to consider asking
me to confirm the decision if I should happen to delete, say, more than
10 or 15 files.  Having the first few lines in main() be something like,

	fflg = (argc > 15) || fflg;

might be nice, as might a prompt, a la MessyDos (yick).

Thoughts?

- John.
-- 
John F. Haugh II                  SNAIL:  HECI Exploration Co. Inc.
UUCP: ...!ihnp4!killer!jfh                11910 Greenville Ave, Suite 600
      ...!ihnp4!killer!rpp386!jfh         Dallas, TX. 75243
"Don't Have an Oil Well?  Then Buy One!"  (214) 231-0993

goudreau@xyzzy.UUCP (11/25/87)

In article <1987Nov21.014754.19660@sq.uucp> msb@sq.UUCP (Mark Brader) writes:
>> The rm * disaster catches not only the absent-minded ...
>
>I thought it was about time someone expressed the opposite point of view.
>
>If I type "rm *", it is because I want to remove all the files.  No, not
>all *my* files.  All *the* files that I still have write permission on,
>that are in the current directory.  Usually no more than about 20 of them.
>In short, the proper UNIX* flavored method for protecting important files
>from "rm" is to turn off the write permission bit.

I'm sorry if that's what you want, because that's not what your system
is going to do.  I quote from the rm(1) entry in the 7th Edition
Programmer's Manual:

	"Removal of a file requires write permission in its directory,
	 but neither read nor write permission on the file itself."

Protecting your files in this way is thus an all-or-nothing method,
per directory.

A better way to understand this is to think about what's really
going on at the directory level.  When you remove (or move) a file
within a directory, you never need to read or write the file itself.
You need to rewrite the directory because you wish to change the
contents of the directory file (its dir entries), and so write permission
in the directory is what is required.
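
The rule is easy to verify in a scratch directory (a sketch; -f
suppresses the mode-444 prompt so this runs non-interactively):

```shell
# Write permission on the directory, not the file, is what unlink checks.
d=$(mktemp -d) && cd "$d"
touch precious
chmod 444 precious       # the file itself is read-only
/bin/rm -f precious      # succeeds anyway: the directory is writable
```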

barnett@steinmetz.ge.com (Bruce G Barnett) (11/25/87)

In article <2205@killer.UUCP> jfh@killer.UUCP (The Beach Bum) writes:

[ rm could ask for confirmation if more than 15 files were to be deleted ]

| Having the first few lines in main() be something like,
|
|	fflg = (argc > 15) || fflg;
|
|might be nice, or having a prompt, ala' MessyDos (yick) might be nice.
|
|Thoughts?

Don't change rm when you could have a shell script do the same thing!

Sheesh!
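
Such a shell script (here a function) is indeed short; the 15-file
threshold is the one suggested in the quoted article, the rest is a
sketch:

```shell
# Prompt before removing more than 15 files; otherwise pass through.
confirm_rm() {
    if [ "$#" -gt 15 ]; then
        printf 'rm: really remove %d files? ' "$#"
        read -r ans
        case "$ans" in [yY]*) ;; *) return 1 ;; esac
    fi
    /bin/rm -f "$@"
}
```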

msb@sq.UUCP (11/27/87)

Having had my knowledge of UNIX* insulted in public, I feel obliged to
reply in public.  This is positively my last posting on the topic.
[And if you see it twice, it's not MY fault, I canceled the first one.]

> >In short, the proper UNIX flavored method for protecting important files
> >from "rm" is to turn off the write permission bit.

> I'm sorry if that's what you want, because that's not what your system
> is going to do.

And then he quotes the V7 manual at me, and explains why permissions work
as they do.  Well, he should have read one more paragraph:

#   If a file has no write permission and the standard input is a
#   terminal, its permissions are printed and a line is read from the
#   standard input.  If that line begins with `y' the file is deleted,
#   otherwise the file remains...

This is precisely the kind of interactive prompting that one school of
"rm is too powerful" users like.  But you only get it when you want it.
Sure, write protecting the file doesn't affect what rm has *permission*
to do ... it affects what it *will* do.

As I said in my original posting, I do consider it a misfeature that
if stdin is NOT a terminal then rm proceeds regardless of the file's
permissions.  I think the -f flag should be required in that mode also.
(I also think that having said that should have been sufficient
prevention from having UNIX basics explained to me on the net.)

While I'm posting, I'll add the bit I left out the first time.  I have
made it a habit *not* to hit Return instantly upon typing a line that
has both "rm" and "*" in it.  I pause and reread it.  It's an easy habit
to establish, and it's all the protection I think I need against "rm * .o".

Mark Brader		"Male got pregnant -- on the first try."
utzoo!sq!msb			Newsweek article on high-tech conception
msb@sq.com			November 30, 1987

*"UNIX is a trademark of Bell Laboratories" is a religious incantation.

hubcap@hubcap.UUCP (Mike Marshall) (12/01/87)

In article <1987Nov27.011955.10801@sq.uucp>, msb@sq.uucp (Mark Brader) writes:
> While I'm posting, I'll add the bit I left out the first time.  I have
> made it a habit *not* to hit Return instantly upon typing a line that
> has both "rm" and "*" in it.  I pause and reread it.  It's an easy habit
> to establish, and it's all the protection I think I need against "rm * .o".

I agree. I can be as scatter brained as they come, but I have cultivated the
above habit, and I don't think I have EVER lost any files with "rm * .o" 
(or whatever). I always automatically reread whatever I've typed when
using rm; it's not a hassle, because I do it without thinking.

Another habit that I have established is "rm -i" whenever I am su'ed to root.

You can take your good habits with you to a new environment... but maybe not
your aliases :-).

-Mike Marshall       hubcap@hubcap.clemson.edu        ...!hubcap!hubcap

allbery@ncoast.UUCP (12/01/87)

As quoted from <392@xyzzy.UUCP> by goudreau@xyzzy.UUCP (Bob Goudreau):
+---------------
| In article <1987Nov21.014754.19660@sq.uucp> msb@sq.UUCP (Mark Brader) writes:
| >If I type "rm *", it is because I want to remove all the files.  No, not
| >all *my* files.  All *the* files that I still have write permission on,
| 
| I'm sorry if that's what you want, because that's not what your system
| is going to do.  I quote from the rm(1) entry in the 7th Edition
| Programmer's Manual:
| 
| 	"Removal of a file requires write permission in its directory,
| 	 but neither read nor write permission on the file itself."
+---------------

True enough -- at the level of unlink().  But if you'll unalias (or un-
function, if you're a System V type) rm for a moment and try to "rm" a file
which is write-protected without using the "-f" flag, you'll see:

bsd% rm foo
foo 444 mode _

$ rm foo	#system V
foo: 444 mode ? _

The biggest problem with this is that it's rather difficult to edit a C
program that's been "rm"-proofed in this manner....
-- 
Brandon S. Allbery		      necntc!ncoast!allbery@harvard.harvard.edu
 {hoptoad,harvard!necntc,cbosgd,sun!mandrill!hal,uunet!hnsurg3}!ncoast!allbery
			Moderator of comp.sources.misc

franka@mmintl.UUCP (12/01/87)

[I have directed follow-ups to comp.cog-eng only.]

In article <1987Nov27.011955.10801@sq.uucp> msb@sq.UUCP (Mark Brader) writes:
>While I'm posting, I'll add the bit I left out the first time.  I have
>made it a habit *not* to hit Return instantly upon typing a line that
>has both "rm" and "*" in it.  I pause and reread it.  It's an easy habit
>to establish, and it's all the protection I think I need against "rm * .o".

I agree.  Without having particularly thought about it, I do the same thing.
I suspect that most experienced programmers do, too.

This, of course, makes it no less a human interface problem.  The only
people who can fix the problem are the people who don't need to.
-- 

Frank Adams                           ihnp4!philabs!pwa-b!mmintl!franka
Ashton-Tate          52 Oakland Ave North         E. Hartford, CT 06108

brianc@cognos.uucp (Brian Campbell) (12/02/87)

In article <1987Nov27.011955.10801@sq.uucp> msb@sq.UUCP (Mark Brader) writes:
> In short, the proper UNIX flavored method for protecting important files
> from "rm" is to turn off the write permission bit.

Marking selected files as read-only is often useful for protecting
individual files.  However, it is also possible to protect an entire
directory from accidental erasure with:

   touch \!
   chmod -w \!

Now, when the careless (?) user enters "rm * .o" or any variation
thereof (short of adding -f), the first file rm encounters will be !
(unless someone has filenames starting with spaces or other
unprintables).  rm will ask whether the user really wants to delete
this file.  At that point an INTR will stop rm from deleting any files
at all; answering n will simply tell rm not to delete that one file.
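
The setup above can be sketched as a small script.  The directory and
file names here are mine, purely for illustration, and the LC_ALL line
is a modern precaution to pin down ASCII glob order:

```shell
#!/bin/sh
# Plant a write-protected guard file named "!" in a directory.
# Because "!" sorts before letters and digits in ASCII order, it is
# the first name the shell hands to rm when "*" is expanded, so an
# interactive rm will stop and ask about it before touching anything
# else.
LC_ALL=C; export LC_ALL # ensure ASCII collation, so "!" sorts first

mkdir guarded && cd guarded || exit 1
touch '!'               # the guard file itself
chmod a-w '!'           # read-only, so rm prompts instead of deleting
touch afile bfile       # ordinary files that the guard protects

set -- *                # the same expansion "rm *" would see
echo "first glob match: $1"
```

Running it prints "first glob match: !", confirming that rm would be
asked about the guard before anything else.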

> As I said in my original posting, I do consider it a misfeature that
> if stdin is NOT a terminal then rm proceeds regardless of the file's
> permissions.  I think the -f flag should be required in that mode also.
> (I also think that having said that should have been sufficient
> prevention from having UNIX basics explained to me on the net.)

I do not think this is a "misfeature".  With shell scripts and system()
calls I have a chance, after I have typed the command, to verify that it
is indeed what I wanted.  When interactive, it's too late once I've
pressed return.
-- 
Brian Campbell        uucp: decvax!utzoo!dciem!nrcaer!cognos!brianc
Cognos Incorporated   mail: POB 9707, 3755 Riverside Drive, Ottawa, K1G 3Z4
(613) 738-1440        fido: (613) 731-2945 300/1200, sysop@1:163/8

dave@lsuc.uucp (David Sherman) (12/03/87)

cameron@elecvax.eecs.unsw.oz (Cameron Simpson) writes:
>I once used something calling itself `nsh' on a System V machine, and typed
>	$ cd thnig
>and thought "bother, I meant `thing'" and was then disconcerted when it said
>	path/thing
>	$
>back at me.  It had fixed the transposed characters and dropped me in the
>right spot!  Hopefully it only happened in interactive mode, but it was
>very disconcerting.

We have that in our Bourne shell here.  You get used to it very
quickly, and it's VERY handy.  Yes, it only works in interactive mode.

As far as I remember, the origins of spelling-correction for chdir
in sh go back to Tom Duff adding it to the v6 shell at U of Toronto
around 1976 or so.  I then pulled out td's spname() routine and
began plugging it into other utilities on our v6 11/45, when
used interactively (p, cmp, and a few others).  The routine
accompanied Rob Pike on his travels when he left U of T, and
it shows up in Kernighan & Pike (with credit to Duff, I believe;
don't have a K&P handy).

In the original version, it would ask you:
	$ cd /ibn
	cd /bin? y
	$
The version of sh currently on our system (I got this part
from sickkids!mark) doesn't bother asking, which I think is right
because you often type ahead and don't want some command swallowed
as an answer to "cd foo?".  It just does it:
	$ cd /ibn
	cd /bin
	$

If anyone with a source license wants the code in sh to
implement this, let me know.  It's pretty trivial once you have
spname(3), which we use all over the place now (more(1), for example).
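
For those without source, the idea behind spname() can be roughed out in
shell.  This spcd function is my own sketch, not td's code: the name is
invented, it handles only plain directory names, and it recognizes only a
single adjacent transposition ("thnig" -> "thing"), where the real
spname(3) is cleverer.  Like the newer sh, it announces the correction
instead of asking:

```shell
# spcd: spelling-correcting cd, a rough shell sketch of spname().
spcd() {
	cd "$1" 2>/dev/null && return 0
	for d in */
	do
		d=${d%/}
		if awk -v a="$1" -v b="$d" 'BEGIN {
			if (length(a) != length(b)) exit 1
			n = 0
			for (i = 1; i <= length(a); i++)
				if (substr(a,i,1) != substr(b,i,1)) p[++n] = i
			# accept exactly two differing positions,
			# adjacent and swapped
			exit !(n == 2 && p[2] == p[1] + 1 &&
			       substr(a,p[1],1) == substr(b,p[2],1) &&
			       substr(a,p[2],1) == substr(b,p[1],1))
		}' </dev/null
		then
			echo "$d"	# announce the correction, no question asked
			cd "$d"
			return 0
		fi
	done
	echo "spcd: $1: no such directory" >&2
	return 1
}
```

Typing "spcd thnig" in a directory containing "thing" prints the
corrected name and drops you into it.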

David Sherman
The Law Society of Upper Canada
-- 
{ uunet!mnetor  pyramid!utai  decvax!utcsri  ihnp4!utzoo } !lsuc!dave
Pronounce it ell-ess-you-see, please...

wcs@ho95e.ATT.COM (Bill.Stewart) (12/07/87)

In article <771@hubcap.UUCP> hubcap@hubcap.UUCP (Mike Marshall) writes:
:I agree. I can be as scatterbrained as they come, but I have cultivated the
:above habit, and I don't think I have EVER lost any files to "rm * .o" 

Must be nice.  I once had a spurious file called "*", and removed it
(unquoted, naturally).  I realized what I'd done about the time the $
came back; that's when I learned about nightly backups (the
administrators did them), and about rm -i.

At Purdue, the local version of 4.*BSD had modified rm to move things
to /tmp/graveyard instead of really deleting them; they'd stick around
48 hours or so.  You could use the real rm if you wanted to.  Of
course, this doesn't prevent other ways of trashing files, though
noclobber helps.  One of the few things I appreciate about VMS is the
file versioning; every time you modify a file, it creates a new copy of
it (I assume at open-file-for-writing time?).  Even a one-deep automatic
backup would be helpful; emacs does this but vi and ed don't.
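
A graveyard-style rm along Purdue's lines can be sketched as a shell
function.  I never saw their code; the name saferm, the graveyard path,
and the PID suffix are all my own invention:

```shell
# saferm: bury files in a graveyard directory instead of unlinking
# them.  A nightly cron job could then purge anything older than two
# days, e.g.:  find "$GRAVEYARD" -mtime +2 -exec rm -f {} \;
GRAVEYARD=${GRAVEYARD-/tmp/graveyard}

saferm() {
	mkdir -p "$GRAVEYARD" || return 1
	for f in "$@"
	do
		b=`basename "$f"`
		# suffix with our PID so two buried "foo"s don't collide
		mv "$f" "$GRAVEYARD/$b.$$" || return 1
	done
}
```

"saferm *.o" then buries the object files; to "undelete", just mv the
file back out of the graveyard before the purge job gets to it.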
-- 
#				Thanks;
# Bill Stewart, AT&T Bell Labs 2G218, Holmdel NJ 1-201-949-0705 ihnp4!ho95c!wcs