davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (06/14/91)
Has anyone written a remove command which will unlink all links to a
file inode when the command is issued on any one name? Yes I know it's
dangerous, but there are times when I want certain data unconditionally
off my system.

Example:
	$ mkdir foo
	$ cd foo
	$ date >x1
	$ ln x1 x2
	$ remove x1	# this is the one
	$ ls
	$
-- 
bill davidsen (davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
  "Most of the VAX instructions are in microcode,
   but halt and no-op are in hardware for efficiency"
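For readers following along, this is the behavior the requested command would change. A plain rm only drops one directory entry; the data survives under any other hard link, since both names refer to the same inode. A quick sketch (using mktemp for a scratch directory, a modern convenience):

```shell
# rm only unlinks one directory entry; the data survives under any
# other hard link, because both names refer to the same inode.
dir=`mktemp -d`; cd "$dir"
date > x1
ln x1 x2          # two names, one inode (compare: ls -i x1 x2)
rm x1             # drops one name only
cat x2            # the data is still here
cd /; rm -rf "$dir"
```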
ask@cbnews.cb.att.com (Arthur S. Kamlet) (06/16/91)
In article <3431@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.com (bill davidsen) writes:
>Has anyone written a remove command which will unlink all links to a
>file inode when the command is issued on any one name?

The heart of this command is to find the inumber; then remove all
files in that filesystem with that inumber.

For SVR3 Unixes - at least - you can find the inumber by an ls -lai

$ ls -lai junk
  397 -rw-r--r--  1 ask   user     571 Mar  5 15:07 junk

So, you now want to remove all files with inumber 397.
You need to know the root directory of your filesystem
(use the /etc/mount or df command to find out if you are unsure).
Then do a find and remove all occurrences of that inumber:

$ cd /usrc          # /usrc is the filesystem containing the files
$ find . -inum 397 -exec rm {} \;

Caution: inumbers are not unique in your system; only in your
         filesystem.  So it's a very bad idea to do a
         find / -inum 397 .....

These are the basics; you might want to put these together in a
script and add some tests.
-- 
Art Kamlet  a_s_kamlet@att.com  AT&T Bell Laboratories, Columbus
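Art's steps can be combined into a small function, sketched below. The name remove_links is made up, and it assumes a find(1) that supports -inum (not all do, as later posts note). It searches a tree you name rather than /, since inode numbers are unique only within one filesystem:

```shell
# remove_links FILE [TREE]: unlink every name of FILE found under TREE.
# A sketch only -- assumes find(1) supports -inum; searches TREE
# (default ".") rather than /, since inode numbers are unique only
# within a single filesystem.
remove_links() {
    inum=`ls -i "$1" | awk '{print $1}'` || return 1
    find "${2-.}" -inum "$inum" -exec rm -f {} \;
}
```

Run on bill's example directory, remove_links foo/x1 foo should leave foo empty.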
darcy@druid.uucp (D'Arcy J.M. Cain) (06/16/91)
In article <1991Jun15.210940.18999@cbnews.cb.att.com> ask@cblph.att.com writes:
>So, you now want to remove all files with inumber 397
>You need to know the root directory of your filesystem
>(use the /etc/mount or df command to find out if you are unsure)
>Then do a find and remove all occurences of that inumber
>$ cd /usrc # /usrc is the filesystem containing the files
>$ find . -inum 397 -exec rm {} \;
>Caution: inumbers are not unique in your system; only in your
>         filesystem.  So it's a very bad idea to do a
>         find / -inum 397 .....

Don't try this at home kiddies.  Not every filesystem is mounted on
root.  For example here is my system:

/               : Disk space:  20.54 MB of  31.64 MB available (64.95%).
/usr            : Disk space:  24.24 MB of 226.75 MB available (10.69%).
/usr/spool/news : Disk space:  20.94 MB of  47.19 MB available (44.38%).

Now if I want to get rid of /usr/darcy/file and its inode is 397 I
better not try the above suggestion because there may be a file on my
news partition with the same inode number.
-- 
D'Arcy J.M. Cain (darcy@druid)  |  D'Arcy Cain Consulting
Toronto, Ontario, Canada        |  There's no government
+1 416 424 2871                 |  like no government!
beattie@visenix.UUCP (Brian Beattie) (06/16/91)
In article <3431@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.com (bill davidsen) writes:
->Has anyone written a remove command which will unlink all links to a
->file inode when the command is issued on any one name? Yes I know it's
->dangerous, but there are times when I want certain data unconditionally
->off my system.

If the real aim is to remove the data how about:

cat /dev/null >offendingfile
rm offendingfile

This may leave the inode if links exist, but the file will have been
truncated to zero length, i.e. no data.

To all nitpickers: yes, with some shells you can replace the cat
command with:

>offendingfile

and if you do not know why that is not portable you should not try to
correct me.

The other solution is to use ncheck to find all links to said file.

NOTE: if a process has the file open, even that will not actually get
rid of the inode.
-- 
It is easier to build a    | Brian Beattie   (703)471-7552
secure system than it is   | 11525 Hickory Cluster, Reston, VA. 22090
to build a correct system. |
        M. Gasser          | ...uunet!visenix!beattie
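Brian's point can be seen directly: because every hard link shares the one inode, truncating the file through any single name destroys the data seen through all the others. A short illustration:

```shell
# All hard links share one inode, so truncating via one name
# empties the data seen through every other name too.
date > x1
ln x1 x2                 # second name, same inode
cat /dev/null > x1       # truncate through the first name
wc -c < x2               # the other name is now empty as well
rm -f x1 x2
```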
det@hawkmoon.MN.ORG (Derek E. Terveer) (06/17/91)
>In article <1991Jun15.210940.18999@cbnews.cb.att.com> ask@cblph.att.com writes:
>So, you now want to remove all files with inumber 397
>You need to know the root directory of your filesystem
>(use the /etc/mount or df command to find out if you are unsure)
>Then do a find and remove all occurences of that inumber
>$ cd /usrc # /usrc is the filesystem containing the files
>$ find . -inum 397 -exec rm {} \;
>Caution: inumbers are not unique in your system; only in your
>         filesystem.  So it's a very bad idea to do a
>         find / -inum 397 .....

However, depending on the system and the particular find command, not
every find command has "-inum" as an option.  I know that the GNU find
command has this; however, the more recent System V Unixen have the
-xdev (GNU find) or -mount (System V) option to restrict the search to
the implied (by the pathname, "." in your example) file system.

If you don't have "-inum" you could, as root, use "ff /dev/rdsk/? | grep 397"
to find the file belonging to that inode.  Then remove it.  See ff(1)
and find(1) in the FM.

derek
-- 
Derek "Tigger" Terveer		det@hawkmoon.MN.ORG -- U of MN Women's Lax
I am the way and the truth and the light, I know all the answers; don't
need your advice.  -- "I am the way and the truth and the light" -- The
Legendary Pink Dots
boyd@prl.dec.com (Boyd Roberts) (06/17/91)
In article <1991Jun17.050747.1436@hawkmoon.MN.ORG>, det@hawkmoon.MN.ORG (Derek E. Terveer) writes:
> If you don't have "-inum" you could, as root, use "ff /dev/rdsk/? | grep 397"
> to find the file belonging to that inode.  Then remove it.  See ff(1)
> and find(1) in the FM.

If you really want to do this, aren't you better off using ncheck(8)?
It should be fairly standard.  Admittedly the file-system to
block-special mapping may be a bit of a pain, but the basic premise of
this thread is too.

Boyd Roberts			boyd@prl.dec.com

``When the going gets wierd, the weird turn pro...''
doug@jhunix.HCF.JHU.EDU (Douglas W O'neal) (06/17/91)
In article <3431@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.com (bill davidsen) writes:
->Has anyone written a remove command which will unlink all links to a
->file inode when the command is issued on any one name? Yes I know it's
->dangerous, but there are times when I want certain data unconditionally
->off my system.
->
->Example:
-> $ mkdir foo
-> $ cd foo
-> $ date >x1
-> $ ln x1 x2
-> $ remove x1 # this is the one
-> $ ls
-> $
->--
->bill davidsen (davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
-> "Most of the VAX instructions are in microcode,
-> but halt and no-op are in hardware for efficiency"
How about
find / -inum `ls -i foo | awk '{print $1}' -` -exec rm -f {} \; -print
Doug
--
Doug O'Neal, Distributed Systems Programmer, Johns Hopkins University
doug@jhuvms.bitnet, doug@jhuvms.hcf.jhu.edu, mimsy!aplcen!jhunix!doug
Like many of the features of UNIX, UUCP appears theoretically
unworkable... - DEC Professional, April 1990
tchrist@convex.COM (Tom Christiansen) (06/17/91)
From the keyboard of det@hawkmoon.MN.ORG (Derek E. Terveer):
:>$ cd /usrc # /usrc is the filesystem containing the files
:>$ find . -inum 397 -exec rm {} \;
:>Caution: inumbers are not unique in your system; only in your
:>         filesystem.  So it's a very bad idea to do a
:>         find / -inum 397 .....
:
:Howver, depending on the system and the particular find command, not every find
:command has "-inum" as an option.  I know that the GNU find command has this;
:howver, the more recent System V Unixen have the -xdev (GNU find) or -mount
:(System V) option to restrict the search to the implied (by the pathname, "."
:in your example) file system.

Isn't inconsistency wonderful?

This is why I'd suggest putting up GNU find everywhere.  That way you
can count on what options are there.  Another possibility is to run it
through the find2perl translator, which accepts such options even if
your system doesn't.

--tom
-- 
Tom Christiansen		tchrist@convex.com	convex!tchrist
		"So much mail, so little time."
tchrist@convex.COM (Tom Christiansen) (06/17/91)
From the keyboard of doug@jhunix.HCF.JHU.EDU (Douglas W O'neal):
:How about
:find / -inum `ls -i foo | awk '{print $1}' -` -exec rm -f {} \; -print

Remember inumbers are unique to the file system.  You don't know that
foo is on the root file system.  This way you'll unnecessarily traverse
the whole file system.  I might do this, at least on my system (assume
$file is the name of the file, and valid):

    find `df $file | awk 'NR == 2 {print $6}'` \
	-xdev \
	-inum `ls -i $file | awk '{print $1}'` \
	-exec /bin/rm {} \; \
	-print

--tom
-- 
Tom Christiansen		tchrist@convex.com	convex!tchrist
		"So much mail, so little time."
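A caveat on Tom's first backquoted expression: the column holding the mount point differs between df implementations (his $6 works on his system), and long device names can wrap df's output onto an extra line. Taking the last field of the second line is a slightly more portable sketch:

```shell
# Which filesystem is a file on?  The column containing the mount
# point varies between df implementations, so take the last field
# of the second line rather than hard-coding $6.  (Very long device
# names may still wrap the line on some systems.)
df /etc/hosts | awk 'NR == 2 { print $NF }'
```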
cmf851@anu.oz.au (Albert Langer) (06/18/91)
In article <922@visenix.UUCP> beattie@visenix.UUCP (Brian Beattie) writes:

>If the real aim is to remove the data how about:
>
>cat /dev/null >offendingfile
>rm offendingfile
>
>This may leave the inode if links exist but the file will
>have been truncated to zero length i.e. no data.

Does this also get rid of the data from the original file system
blocks so they cannot be reconstructed with a file system debugger?

I imagine it doesn't.  That may not be the original question but I'd
be interested in a simple shell way to do so.  I assume that using cp
to write a larger file full of meaningless data over the original file
would work.  Or might that result in a new file on some systems, with
the original blocks just freed?

I know it isn't as big a problem with unix, since deallocated blocks
tend to be very hard to track down and get immediately reused anyway,
but a simple equivalent to Norton's wipefile under DOS would still be
useful and I'm sure it should be a simple shell one-liner (portable
across all reasonable unixes).

How does one fill a file with zeroes or whatever in as few keystrokes
as possible?  ed?
-- 
Opinions disclaimed (Authoritative answer from opinion server)
Header reply address wrong.  Use cmf851@csc2.anu.edu.au
det@hawkmoon.MN.ORG (Derek E. Terveer) (06/18/91)
boyd@prl.dec.com (Boyd Roberts) writes:
>In article <1991Jun17.050747.1436@hawkmoon.MN.ORG>, det@hawkmoon.MN.ORG (Derek E. Terveer) writes:
>> If you don't have "-inum" you could, as root, use "ff /dev/rdsk/? | grep 397"
>If you really want to do this, aren't you better off using ncheck(8)?
>It should be fairly standard.  Admittedly the file-system to block-special
>mapping may be a bit of pain, but the basic premise of this thread is too.

Er, uhm, ahhh, ugh, yeah.  You're right.  I just chose the first
command off the top of my little head.  Ncheck, because of its "-i
inum" option, would be better, although I don't think that ncheck is
any more standard than ff.

derek
-- 
Derek "Tigger" Terveer		det@hawkmoon.MN.ORG -- U of MN Women's Lax
I am the way and the truth and the light, I know all the answers; don't
need your advice.  -- "I am the way and the truth and the light" -- The
Legendary Pink Dots
boyd@prl.dec.com (Boyd Roberts) (06/18/91)
In article <1991Jun17.222800.8067@hawkmoon.MN.ORG>, det@hawkmoon.MN.ORG (Derek E. Terveer) writes:
> [...]
> Ncheck, because of its "-i inum" option would
> be better, although i don't think that ncheck is any more standard than ff.

I think you'll find ncheck(8) has been around for quite a while,
whereas `ff' is a System V-ism.  I think ncheck(8) goes back as far as
Version 7, with icheck(8) and dcheck(8), which pre-date fsck(8).

Boyd Roberts			boyd@prl.dec.com

``When the going gets wierd, the weird turn pro...''
philip@cetia.fr (Philip Peake) (06/19/91)
In article <3431@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.com (bill davidsen) writes:
>Has anyone written a remove command which will unlink all links to a
>file inode when the command is issued on any one name? Yes I know it's
>dangerous, but there are times when I want certain data unconditionally
>off my system.

Depending on what system you have, you may or may not have "clri"
(usually in /etc).  This thing just zeros an inode.  You should only
use it in single user mode, and then fsck the filesystem afterwards.

Philip
bob@wyse.wyse.com (Bob McGowen x4312 dept208) (06/21/91)
In article <1991Jun17.181558.9562@newshost.anu.edu.au> cmf851@anu.oz.au (Albert Langer) writes:
>In article <922@visenix.UUCP> beattie@visenix.UUCP (Brian Beattie) writes:
>
>>If the real aim is to remove the data how about:
>>
>>cat /dev/null >offendingfile
>>rm offendingfile
>>
>>This may leave the inode if links exist but the file will
>>have been truncated to zero length i.e. no data.
>
>Does this also get rid of the data from the original file system
>blocks so they cannot be reconstructed with a file system debugger?
 ---deleted lines---
>How does one fill a file with zeroes or whatever in as few keystrokes
>as possible?  ed?
>

I do not know about the "in as few keystrokes as possible", but the
following script will overwrite the file space.  If you have SysV echo
with octal escapes, you can overwrite with all nulls; if not, the first
character or two will be standard ascii followed by nulls.  This
procedure will only produce null padding (no choice) since it uses the
dd option for padding input records to the input block size specified
to dd (this will be clearer after you read the script, I think).

THERE IS A MAJOR PROBLEM:  The amount of system RAM available to the
user will affect how big a file can be handled by this procedure.

I have tested the basics from the command line on UNIX System V/386 on
a text file of 1291 bytes.  du output was 3 blocks.  The result of the
dd line made a file of 1536 (3*512) bytes, all nulls.

I hope this is of interest to someone out there.  ;-)

The script also uses cut -f1 to get the first field from the output of
du.  If you do not have cut, use awk '{print $1}' instead.

------cut------------cut------------cut------
#!/bin/sh
#
# No error checking on file, for brevity.  You should add it.
file=$1
# determine number of blocks in the file
blks=`du -a $file | cut -f1`
# Using SysV echo to generate a null for input to dd, use the dd
# conv=sync option to pad the input with nulls to the specified
# block size, which just happens to be the file size.  NOTE: the
# size of the file will determine if this will work, since you
# must have enough memory to hold the blocksize data that will be
# written over the file.  The usual dd output is sent to /dev/null.
/bin/echo '\0\c' | dd of=$file ibs=${blks}b conv=sync 2>/dev/null
rm -f $file
exit

Bob McGowan  (standard disclaimer, these are my own ...)
Product Support, Wyse Technology, San Jose, CA
..!uunet!wyse!bob    bob@wyse.com
cmf851@anu.oz.au (Albert Langer) (06/22/91)
In article <3202@wyse.wyse.com> bob@wyse.UUCP (Bob McGowan) quotes and writes:

>>How does one fill a file with zeroes or whatever in as few keystrokes
>>as possible?  ed?
>
>I do not know about the "in as few keystrokes as possible", but the
>following script will overwrite the file space.
>[...] THERE IS A MAJOR PROBLEM:  The amount of system RAM available
>to the user will affect how big a file can be handled by this procedure.
>I hope this is of interest to someone out there.  ;-)

Thanks Bob, I will keep the script handy.  Anyone got any other ideas
that might overcome the RAM limitation and be few enough keystrokes to
use casually without a script, or at least define as an alias?  I still
think ed might be useful.
-- 
Opinions disclaimed (Authoritative answer from opinion server)
Header reply address wrong.  Use cmf851@csc2.anu.edu.au
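One way around the RAM limitation, sketched here, is to let dd read the zeros from /dev/zero with a small fixed block size, so memory use stays constant no matter how large the file is. The function name wipe is made up, and /dev/zero is assumed to exist (true on most modern systems, though not on every 1991-era Unix):

```shell
# wipe: overwrite FILE with zeros block-by-block, then remove it.
# bs=4096 bounds memory use regardless of file size; conv=notrunc
# rewrites the existing blocks in place instead of truncating
# (and thereby freeing) them first.
wipe() {
    size=`wc -c < "$1"` || return 1
    blocks=`expr \( $size + 4095 \) / 4096`
    dd if=/dev/zero of="$1" bs=4096 count="$blocks" conv=notrunc 2>/dev/null
    rm -f "$1"
}
```

As a shell function it is short enough to keep in a profile and use casually, which is roughly what Albert asked for.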