rexw@hpvcfs1.HP.COM (Rex Wickenkamp) (01/04/90)
I have a couple of questions that maybe someone can answer.

1) I have access to a Compaq 386/20e.  This Compaq seems to have a problem
   when exiting out of applications, as it will show an empty screen for
   approximately 15 seconds before returning to the DOS prompt.  Does
   anyone have any ideas what I can use to check this puppy out?

2) I have always wondered why CHKDSK, when used with the /F parameter,
   creates FILE0000.CHK files in the root directory.  Can anyone give me
   a good explanation of this?

This should be interesting!

Rex
tcm@srhqla.SR.COM (Tim Meighan) (01/05/90)
In article <21990002@hpvcfs1.HP.COM> rexw@hpvcfs1.HP.COM (Rex Wickenkamp) writes:

>2) I have always wondered why CHKDSK, when used with the /F parameter,
>   creates FILE0000.CHK files in the root directory.  Can anyone give me
>   a good explanation of this?

This is because CHKDSK is finding a lost chain of one or more clusters on
your disk that doesn't seem to belong to a file.  In order for the file
system to recover the lost clusters and make them available for use again,
they have to be attached to a directory entry, which is where the file
system gets the first pointer to the chain.  (The FAT contains the pointers
to subsequent clusters in the chain, which is how CHKDSK determined there
was a lost chain in the first place -- it found a chain allocated in the
FAT that no directory entry pointed to.)

CHKDSK creates a directory entry for each unattached chain it finds in the
FAT.  (It generates a unique FILExxxx.CHK name for each entry by making
xxxx a 4-digit number that is incremented for each new filename it needs.)
It then sets the directory pointer to the lost chain, which makes the chain
available for your use.  It does this so you can examine the file if you
think it might contain data you would like to try to salvage.  You can also
just erase the file and return the recovered clusters to the pool of
available storage on your disk.

Note that CHKDSK could just as easily have automatically returned any lost
chains to the disk free pool.  However, the lost clusters are purposely
converted to files instead, to give you a chance to examine them just in
case they contain something you need.

CHKDSK will always tell you when there are lost clusters in the FAT.
However, it won't actually convert the clusters to files unless you
specifically tell it to with the /F (fix) switch as a command argument.
If you don't convert the clusters to files, they remain lost, and you will
never be able to use them for anything.

Tim Meighan
SilentRadio
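The FAT scan described above -- find every allocated chain that no directory
entry reaches -- can be sketched in a few lines.  (This is an illustrative
model in Python, not DOS code: the FAT is a dict mapping cluster number to
next cluster, with FREE and EOC as made-up markers standing in for the real
FAT12/FAT16 values.)

```python
EOC = -1    # stand-in for the real end-of-chain mark (0xFFF / 0xFFFF)
FREE = 0    # stand-in for a free cluster entry

def find_lost_chains(fat, dir_starts):
    """Return the head cluster of every allocated chain that no
    directory entry points to, directly or via chain traversal."""
    reachable = set()
    for start in dir_starts:              # walk each file's chain from its entry
        c = start
        while c != EOC and c not in reachable:
            reachable.add(c)
            c = fat[c]
    allocated = {c for c, nxt in fat.items() if nxt != FREE}
    lost = allocated - reachable
    # heads of lost chains = lost clusters no other lost cluster points to
    pointed_to = {fat[c] for c in lost if fat[c] != EOC}
    return sorted(lost - pointed_to)

# Toy FAT: clusters 2-4 belong to a file; 5-6 are an orphaned chain.
fat = {2: 3, 3: 4, 4: EOC, 5: 6, 6: EOC, 7: FREE}
print(find_lost_chains(fat, dir_starts=[2]))   # -> [5]
```

Each head cluster returned is what CHKDSK /F would turn into one
FILExxxx.CHK directory entry.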
fredb@llama.rtech.UUCP (Fred Buechler) (01/05/90)
In article <1236@srhqla.SR.COM> tcm@srhqla.SR.COM (Tim Meighan) writes:
>In article <21990002@hpvcfs1.HP.COM> rexw@hpvcfs1.HP.COM
>(Rex Wickenkamp) writes:
>
>>2) I have always wondered why CHKDSK, when used with the /F parameter,
>>   creates FILE0000.CHK files in the root directory.  Can anyone give me
>>   a good explanation of this?
>
> {Deleted}
>
>Note that CHKDSK could just as easily have automatically returned
>any lost chains back to the disk free pool.  However, the lost clusters
>are purposely converted to files instead, to give you a chance to
>examine them just in case they contain something you need.
>
>CHKDSK will always tell you when there are lost clusters in the FAT.
>However, it won't actually convert the clusters to files unless you
>specifically tell it to with the /F (fix) switch as a command argument.
>If you don't convert the clusters to files, they remain lost, and you
>will never be able to use them for anything.

Actually, CHKDSK will just return the lost clusters to the free space pool
IF you answer "NO" to the "Convert lost chains to files?" question AND you
have specified the /F (fix) flag.  The *.CHK files are only created to give
you a chance to manually re-attach the chains where they belong.  A tedious
task, even with Norton Utilities, et al.

Fred
brown@vidiot.UUCP (Vidiot) (01/06/90)
In article <21990002@hpvcfs1.HP.COM> rexw@hpvcfs1.HP.COM (Rex Wickenkamp) writes:
<
<1) I have access to a Compaq 386/20e. This Compaq seems to have a problem
< when exiting out of applications, as it will show an empty screen for
< approximately 15 seconds, before returning to the DOS prompt. Does
< anyone have any ideas what I can use to check this puppy out?
I have such a beast and have never had this problem.
<2) I have always wondered why CHKDSK, when used with the /F parameter,
< creates FILE0000.CHK files in the root directory. Can anyone give me
< a good explanation of this?
When there is space allocated on the disk and there isn't a directory entry
to go along with it, CHKDSK makes such an entry.  The only real place to
put it is in the root directory.  Before you ask how files can end up
without a directory entry but still have space allocated in the FAT, I will
try to answer that.  The answer is: I don't know.  I have seen programs
screw up, like compilers, that somehow leave FAT entries behind while the
directory entries are gone.

What to do?  Look in the file created and see if it is worth anything, or
if it is left-over junk.  If junk, delete it.  Otherwise rename it and put
it where you want.
Oh, why are these files created? Well, it is to give you, the user, a chance
to see if the data recovered is worthless or not.
--
harvard\ att!nicmad\ cs.wisc.edu!astroatc!vidiot!brown
Vidiot ucbvax!uwvax..........!astroatc!vidiot!brown
rutgers/ decvax!nicmad/ INTERNET:<@cs.wisc.edu,@astroatc:brown@vidiot>
cs4g6ag@maccs.dcss.mcmaster.ca (Stephen M. Dunn) (01/06/90)
In article <21990002@hpvcfs1.HP.COM> rexw@hpvcfs1.HP.COM (Rex Wickenkamp) writes:
$2) I have always wondered why CHKDSK, when used with the /F parameter,
$ creates FILE0000.CHK files in the root directory. Can anyone give me
$ a good explanation of this?
Well, to put it bluntly, because that's what it's supposed to do!
These files are created when CHKDSK/F finds lost clusters (a cluster is
a group of an integral power of two sectors, and is the smallest unit of
disk space that can be allocated; a lost cluster is a cluster which is not
part of a file and which is also not part of the free space on the disk ...
in other words, it's in limbo). Each time it finds a chain of lost clusters,
it creates another file out of it. When you delete these files, the formerly
lost space once again becomes available.
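The allocation arithmetic behind "smallest unit of disk space" is worth
seeing once.  (An illustrative Python sketch; the 512-byte sector size is
an assumption typical of PC disks of the era, and the function name is
made up.)

```python
SECTOR = 512    # bytes per sector, typical for PC disks (assumption)

def clusters_needed(file_size, sectors_per_cluster):
    """Whole clusters required to hold file_size bytes.
    sectors_per_cluster is an integral power of two, per the FAT design."""
    cluster = SECTOR * sectors_per_cluster
    return -(-file_size // cluster)     # ceiling division

# On a disk with 4-sector (2048-byte) clusters:
print(clusters_needed(1, 4))       # -> 1: even a 1-byte file ties up a cluster
print(clusters_needed(5000, 4))    # -> 3: 5000 bytes rounds up to 3 clusters
```

This rounding is also why deleting a FILExxxx.CHK file frees space in
cluster-sized steps, never byte by byte.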
Hope this helps!
--
Stephen M. Dunn cs4g6ag@maccs.dcss.mcmaster.ca
<std_disclaimer.h> = "\nI'm only an undergraduate!!!\n";
****************************************************************************
If it's true that love is only a game//Well, then I can play pretend
leonard@bucket.UUCP (Leonard Erickson) (01/07/90)
rexw@hpvcfs1.HP.COM (Rex Wickenkamp) writes:

>2) I have always wondered why CHKDSK, when used with the /F parameter,
>   creates FILE0000.CHK files in the root directory.  Can anyone give me
>   a good explanation of this?

It made them because you just *told* it to!

Among other things, CHKDSK checks for "lost clusters".  These are clusters
or chains of clusters that are marked as "in use" in the FAT, but are not
assigned to any entries in any directory.  When it finds such clusters it
asks "Convert lost clusters to files?"

If you are running CHKDSK without the /F parameter, it will ignore your
response, as it *can't* do anything about them.  If you did specify /F and
answer No, it will mark the lost clusters as unused, thus freeing up the
space.  If you answer Yes, it will convert each chain of clusters into a
FILExxxx.CHK file in the root directory.  In theory, this gives you a
chance to recover the data.  In practice it isn't much help.

--
Leonard Erickson            ...!tektronix!reed!percival!bucket!leonard
CIS: [70465,203]
"I'm all in favor of keeping dangerous weapons out of the hands of fools.
Let's start with typewriters." -- Solomon Short
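The /F-and-answer decision table above reduces to three outcomes.  (A
trivial illustrative sketch in Python -- not the real CHKDSK source, and
the function name is made up.)

```python
def chkdsk_action(fix_flag, answer_yes):
    """Outcome of the 'Convert lost clusters to files?' prompt,
    as described above (illustrative only)."""
    if not fix_flag:
        return "report only"    # without /F the answer is ignored; nothing changes
    if answer_yes:
        return "convert chains to FILExxxx.CHK in root"
    return "mark lost clusters free"

print(chkdsk_action(False, True))   # -> report only
print(chkdsk_action(True, False))   # -> mark lost clusters free
print(chkdsk_action(True, True))    # -> convert chains to FILExxxx.CHK in root
```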
pipkins@qmsseq.imagen.com (Jeff Pipkins) (01/09/90)
In article <21990002@hpvcfs1.HP.COM> rexw@hpvcfs1.HP.COM (Rex Wickenkamp) writes:

>1) I have access to a Compaq 386/20e.  This Compaq seems to have a problem
>   when exiting out of applications, as it will show an empty screen for
>   approximately 15 seconds, before returning to the DOS prompt.  Does
>   anyone have any ideas what I can use to check this puppy out?

The command interpreter, COMMAND.COM, has three parts: initialization,
resident, and transient.  The resident part consumes a block of memory in
low memory, above MS-DOS proper (if you can use that word with DOS ;->) and
just above user-installed device drivers.  The transient part is loaded in
high memory that is NOT allocated at all!  It is loaded in free memory!
Applications are loaded above the resident part, and may subsequently
allocate and overwrite the transient part.  When the application
terminates, control returns to the resident part, which then does a simple
checksum on the transient part to determine whether it is still there.  If
the checksum matches, control jumps into the transient part.  Otherwise, it
is reloaded from disk.

This scheme is supposed to save memory (at the expense of time), but it can
also be a major contributing factor to the mysterious unreliability of
MS-DOS.  Any application program can legitimately allocate memory, swap any
two bytes that happen to be in the transient part of COMMAND.COM, and exit.
Since a checksum is used instead of a CRC check, the checksum will be the
same.  In fact, it is a simple matter for a virus or trojan to insert
whatever code it wants, followed by a couple of bytes to compensate for the
checksum.

The 15-second wait that you are experiencing is most likely the time it is
taking for DOS to reload the transient part of COMMAND.COM.  It should be
obvious that this is entirely too long.  Here is my guess as to why:
COMMAND.COM may be occupying a marginally bad area of the hard disk.
Every time a sector is read, a CRC check is done, and if it fails, a
certain number of retry-reads will be performed.  The retry usually
involves a reseek and always involves another full rotation of the disk.
Use something like Norton's DT (disk test) with the file check option to
check it.  If you don't have that, use the DOS command VERIFY=ON and then
copy COMMAND.COM to NUL and see if you get errors or if it takes a long
time.

Hope this helps.  Anyone else care to hazard a guess?

(Opinions expressed here are mine and do not necessarily reflect those of
my employer, etc.  "I can neither confirm nor deny the presence of nuclear
weapons on this vessel" - U.S. Navy)
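The checksum weakness described above -- an additive sum that cannot detect
two bytes being swapped, where a CRC can -- is easy to demonstrate.  (A
sketch only: the actual COMMAND.COM checksum algorithm is undocumented, so
the sum-of-bytes form here is an assumption, and CRC-32 stands in for "a
real CRC".)

```python
import zlib

def byte_sum(data):
    """Simple additive checksum: the order of the bytes doesn't matter."""
    return sum(data) & 0xFFFF

original = bytearray(b"transient portion of COMMAND.COM")
tampered = bytearray(original)
tampered[0], tampered[5] = tampered[5], tampered[0]   # swap two unequal bytes

# The additive checksum is fooled; the CRC is not.
print(byte_sum(original) == byte_sum(tampered))                    # -> True
print(zlib.crc32(bytes(original)) == zlib.crc32(bytes(tampered)))  # -> False
```

This is exactly why a resident part that checksums its transient part can
be tricked into jumping into modified code.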
conway@hpdtl.HP.COM (Daniel F. Conway) (01/10/90)
/ hpdtl:comp.sys.ibm.pc / brown@vidiot.UUCP (Vidiot) / 11:09 am Jan 5, 1990 /
<In article <21990002@hpvcfs1.HP.COM> rexw@hpvcfs1.HP.COM (Rex Wickenkamp) writes:
<<2) I have always wondered why CHKDSK, when used with the /F parameter,
<< creates FILE0000.CHK files in the root directory. Can anyone give me
<< a good explanation of this?
<
<Before you ask how files can end up not having a directory
<entry, but still have space allocated in the FAT, I will try and answer that.
<The answer is: I don't know. I have seen programs screw up, like compilers,
<that somehow leave FAT entries there but, the directory entries are gone.
The easiest way I know of for this to happen is to have a program open a file
for output, and then end without closing it. This will happen if, for
instance, you must kill a runaway program that has files open.
Comments about the level of robustness of the MS-DOS filesystem have been
deliberately omitted. :-(
<--
< harvard\ att!nicmad\ cs.wisc.edu!astroatc!vidiot!brown
<Vidiot ucbvax!uwvax..........!astroatc!vidiot!brown
< rutgers/ decvax!nicmad/ INTERNET:<@cs.wisc.edu,@astroatc:brown@vidiot>
<----------
Dan Conway
dan_conway@hplabs.hp.com
emmo@moncam.co.uk (Dave Emmerson) (01/11/90)
> $2) I have always wondered why CHKDSK, when used with the /F parameter,
> $   creates FILE0000.CHK files in the root directory.  Can anyone give me
> $   a good explanation of this?

Fine, I guess that anybody who hasn't grasped that those are the 'lost
chains made accessible' by now is never going to.  But nobody has explained
how they could get 'lost' in the first place.

Computers aren't supposed to 'lose' things, or so I was told; this was part
of their raison d'etre!  But then the same people told me that RS232C was
intended to provide a standard!  How many flavours have you tasted?  Ha!

Dave E.
bcw@rti.UUCP (Bruce Wright) (01/11/90)
In article <355@marvin.moncam.co.uk>, emmo@moncam.co.uk (Dave Emmerson) writes:

> Fine, I guess that anybody who hasn't grasped that those are the 'lost
> chains made accessible' by now is never going to.  But nobody has explained
> how they could get 'lost' in the first place.

The usual way is that the computer got rebooted while there were files
open.  Could be caused by any number of things: power failure, hardware
problems, program bugs, user error (rebooting at inopportune times in the
program).  This sort of problem tends to affect programs that use the old
FCB-format file I/O calls more than those that use the more modern
"handle" file I/O calls, though it's possible with either one under the
right (wrong?) circumstances.

More rarely, the chains get lost because something corrupted the disk
directory structure.  For example, a directory might be clobbered by a bug
in a utility program (like the Norton Utilities -- not to say that they
have such a bug, but that they are an example of the type of program that
might cause this kind of problem if they had a bug in them).  Or the disk
might develop a bad spot which renders part of the directory structure
unreadable, and hence the clusters in the corresponding files unavailable
until CHKDSK is run.

There are probably other ways this sort of corruption can occur, but this
gives the general idea ...

Bruce C. Wright
dmurdoch@watstat.waterloo.edu (Duncan Murdoch) (01/11/90)
In article <355@marvin.moncam.co.uk> emmo@moncam.co.uk (Dave Emmerson) writes:

>Fine, I guess that anybody who hasn't grasped that those are the 'lost
>chains made accessible' by now is never going to.  But nobody has explained
>how they could get 'lost' in the first place.

The easiest way to lose chains is to open a file, write to it, and then
crash the system before you close it.  When you open it, you create a
size-0 directory entry with no clusters.  When you write to it, you use up
clusters on the disk and the FAT is kept up to date, so the chain gets
created.  It's not until you close the file that the link is made from the
directory entry to the chain.  (This is my experience with MS-DOS 3.2 and
3.3; I don't know about other versions.)

A more exotic way to create lots and lots of lost chains is to trash a
directory or subdirectory.  The most common way I've heard of to do this
is to switch floppy disks during the "Abort, Retry, Ignore?" message, but
there are probably others, such as hardware errors losing sectors on a
hard disk, etc.  One last way is to believe people who tell you that all
you need to do to remove a file is to edit the directory entry using
Norton or a similar program.

>Computers aren't supposed to 'lose' things, or so I was told, this was
>part of their raison d'etre!

Filing cabinets don't lose things either, but I've got stuff in mine that
I can't find :-).

Duncan Murdoch
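The open/write/crash-before-close sequence described above can be modeled
with a toy in-memory file system.  (Everything here is illustrative -- the
class, the markers, and the method names are made up; only the *ordering*
of the updates mirrors the behaviour described.)

```python
EOC = -1    # stand-in for the FAT end-of-chain mark

class ToyFS:
    """Toy model: open makes a size-0 dir entry, write allocates FAT
    clusters, and only close links the entry to the chain."""
    def __init__(self, nclusters=16):
        self.fat = {c: 0 for c in range(2, nclusters)}   # 0 = free
        self.dirs = {}                  # name -> start cluster (None = no chain)

    def open(self, name):
        self.dirs[name] = None          # entry exists, but points at nothing

    def write(self, name, nclusters):
        chain = [c for c, v in self.fat.items() if v == 0][:nclusters]
        for a, b in zip(chain, chain[1:]):
            self.fat[a] = b             # FAT on disk is kept up to date...
        self.fat[chain[-1]] = EOC
        self._pending = (name, chain[0])  # ...but the link lives only in memory

    def close(self, name):
        pname, start = self._pending
        self.dirs[pname] = start        # ONLY NOW does the dir entry get linked

    def lost_clusters(self):
        reachable = set()
        for start in self.dirs.values():
            c = start
            while c not in (None, EOC):
                reachable.add(c)
                c = self.fat[c]
        return {c for c, v in self.fat.items() if v != 0} - reachable

fs = ToyFS()
fs.open("DATA.TXT")
fs.write("DATA.TXT", 3)
# ... system crashes here, before fs.close("DATA.TXT") ...
print(sorted(fs.lost_clusters()))   # -> [2, 3, 4]: an orphaned chain for CHKDSK
```

Had close() run, the directory entry would point at cluster 2 and nothing
would be lost.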
tcm@srhqla.SR.COM (Tim Meighan) (01/12/90)
In article <355@marvin.moncam.co.uk> emmo@moncam.co.uk (Dave Emmerson) writes:

>Fine, I guess that anybody who hasn't grasped that those are the 'lost
>chains made accessible' by now is never going to.  But nobody has explained
>how they could get 'lost' in the first place.

>Computers aren't supposed to 'lose' things, or so I was told, this was
>part of their raison d'etre!

Good point, Dave.  However, there are lots of ways that chains can get
"lost."  These include:

  * Poorly written application programs.

  * External forces that the file system cannot do anything about (i.e.,
    you reach over and turn off your computer while it is writing to disk).

  * Deficiencies in the file system itself.

Which, of course, means that even though computers aren't supposed to
"lose" things, they are only as perfect as the people who design them.
Which means, of course, that they are not going to be perfect.

The point is, stuff breaks, and it's a good idea to have the tools around
to fix it.

Tim Meighan
Silent Radio
barton@holston.UUCP (Barton A. Fisk) (01/12/90)
In article <3404@rti.UUCP>, bcw@rti.UUCP (Bruce Wright) writes:
> In article <355@marvin.moncam.co.uk>, emmo@moncam.co.uk (Dave Emmerson) writes:
>
> The usual way is that the computer got rebooted while there were files open.
> Could be caused by any number of things: power failure, hardware problems,
> program bugs, user error (rebooting at inopportune times in the program).

I have found that using ^C to abort a program will sometimes do this also.
But how else do you get out of a deadlock other than Ctrl-Alt-Del?  Perhaps
DOS is not as well behaved as some would lead us to believe.

--
Barton A. Fisk          | UUCP: {attctc,texbell}vector!holston!barton
PO Box 1781             | (PSEUDO) DOMAIN: barton@holston.UUCP
Lake Charles, La. 70602 | ----------------------------------------
318-439-5984            | "Let him who is without sin cast the first stone"-JC
Ralf.Brown@B.GP.CS.CMU.EDU (01/13/90)
In article <13500004@hpdtl.HP.COM>, conway@hpdtl.HP.COM (Daniel F. Conway) wrote:
} [lost clusters]
}The easiest way I know of for this to happen is to have a program open a file
}for output, and then end without closing it.  This will happen if, for
}instance, you must kill a runaway program that has files open.

This is much less of a problem under DOS 3.x, since it closes all open
files when one of the non-TSR program exit functions is called.  DESQview
is also nice enough to close all open files when you kill a window.

--
UUCP: {ucbvax,harvard}!cs.cmu.edu!ralf -=- 412-268-3053 (school) -=- FAX: ask
ARPA: ralf@cs.cmu.edu  BIT: ralf%cs.cmu.edu@CMUCCVMA  FIDO: Ralf Brown 1:129/46
Disclaimer? I claimed something?
"How to Prove It" by Dana Angluin
14. proof by importance: A large body of useful consequences all follow
    from the proposition in question.
leonard@bucket.UUCP (Leonard Erickson) (01/15/90)
barton@holston.UUCP (Barton A. Fisk) writes:
>In article <3404@rti.UUCP>, bcw@rti.UUCP (Bruce Wright) writes:
>> In article <355@marvin.moncam.co.uk>, emmo@moncam.co.uk (Dave Emmerson) writes:
>> The usual way is that the computer got rebooted while there were files open.
>> Could be caused by any number of things: power failure, hardware problems,
>> program bugs, user error (rebooting at inopportune times in the program).
>I have found that using ^C to abort a program will sometimes
>do this also.  But how else do you get out of a deadlock other
>than Ctrl-Alt-Del?  Perhaps DOS is not as well behaved as some
>would lead us to believe.

Actually, the problem isn't DOS in most cases.  It is the application
programs.

As an example, dBase will cheerfully allocate new clusters to a file, but
doesn't update the filesize entry in the directory until you close the
file.  This improves performance at the expense of safety.  If anything
goes wrong, the clusters (and the new data) are lost.

Worse, dBase does this by maintaining a copy of the directory in RAM and
updating it.  As many a dBase user has learned to his sorrow, forgetting
to close a file before swapping disks will not just corrupt that one file,
but will result in a large portion of the directory of the new disk being
replaced by the directory of the old disk!  Bleah!

dBase isn't the only program to be mis-optimized this way.  As any of the
users I've had to explain this to (*after* they blew away their data)
would tell you, they'd rather have it be slower, but do it *safely*.

--
Leonard Erickson            ...!tektronix!reed!percival!bucket!leonard
CIS: [70465,203]
"I'm all in favor of keeping dangerous weapons out of the hands of fools.
Let's start with typewriters." -- Solomon Short
Ralf.Brown@B.GP.CS.CMU.EDU (01/17/90)
In article <1919@bucket.UUCP>, leonard@bucket.UUCP (Leonard Erickson) wrote:
}barton@holston.UUCP (Barton A. Fisk) writes:
}>I have found that using ^C to abort a program will sometimes
}>do this [lost clusters] also.  But how else do you get out of a deadlock
}>other than Ctrl-Alt-Del?  Perhaps DOS is not as well behaved as some
}>would lead us to believe.
}
}Actually, the problem isn't DOS in most cases.  It is the application
}programs.
}
}As an example, dBase will cheerfully allocate new clusters to a file, but
}doesn't update the filesize entry in the directory until you close the
}file.  This improves performance at the expense of safety.  If anything
}goes wrong, the clusters (and the new data) are lost.

That is in fact DOS.  DOS will not update the directory entry for a file
until it is closed.  It wasn't until DOS 3.3 that a call was implemented
to force DOS to update the disk (though there is a way to trick DOS 2.0
and up into updating the disk by closing the file without really closing
it).

}Worse, dBase does this by maintaining a copy of the directory in RAM and
}updating it.  As many a dBase user has learned to his sorrow, forgetting
}to close a file before swapping disks will not just corrupt that one file,
}but will result in a large portion of the directory of the new disk
}being replaced by the directory of the old disk!  Bleah!

Again, this is DOS's doing.  Any program which opens a file is subject to
this corruption if the floppy is swapped while the file is open.  I
scrambled a number of floppies because ProComm 2.4.2's overlay manager
keeps the overlay file open for the entire duration of the program's
execution, effectively requiring you to keep the program disk in the
drive at all times.

--
UUCP: {ucbvax,harvard}!cs.cmu.edu!ralf -=- 412-268-3053 (school) -=- FAX: ask
ARPA: ralf@cs.cmu.edu  BIT: ralf%cs.cmu.edu@CMUCCVMA  FIDO: Ralf Brown 1:129/46
Disclaimer? I claimed something?
"How to Prove It" by Dana Angluin
14. proof by importance: A large body of useful consequences all follow
    from the proposition in question.