wfp@dasys1.UUCP (William Phillips) (02/13/88)
In article <567@naucse.UUCP>, wew@naucse.UUCP (Bill Wilson) writes:
> Turbo C is superior to Quick C. On our campus here we have also
> had Quick C blow away hard drives, so be careful.

I know of a case where MSC (4.0 I think) utterly scrambled a hard drive
(not backed up, natch) when a module compiled with one memory model was
linked with modules compiled with a different model. I've forgotten the
exact details, but I saw what was left of the directories after the
program was run -- total garbage. So watch out for that one!
--
William Phillips                      {allegra,philabs,cmcl2}!phri\
Big Electric Cat Public Unix     {bellcore,cmcl2}!cucard!dasys1!wfp
New York, NY, USA                  (-: Just say "NO" to OS/2! :-)
dick@slvblc.UUCP (Dick Flanagan) (02/14/88)
In article <5441@cit-vax.Caltech.Edu> tim@cit-vax.Caltech.Edu (Timothy L. Kay) writes:
>In article <2946@dasys1.UUCP> wfp@dasys1.UUCP (William Phillips) writes:
>>In article <567@naucse.UUCP>, wew@naucse.UUCP (Bill Wilson) writes:
>>> Turbo C is superior to Quick C. On our campus here we have also
>>> had Quick C blow away hard drives, so be careful.
>>
>>I know of a case where MSC (4.0 I think) utterly scrambled a hard drive
>>(not backed up, natch), when a module compiled with one memory model was
>>linked with modules compiled with a different model.
>
>This can happen with *any* C compiler that uses the large model. . . .
>If you break the rules and link large model with small model, who knows
>what is going to happen? . . .
>
>The only solution is to stick to small models or get some memory protection
     ^^^^
An alternative solution is to simply be careful and not break the rules.
DOS has always been very unforgiving in this area (it's as much a victim
of the Intel architecture as we are), and blaming any compiler because
it "lets" us screw up is a bit tacky.

(Flame != Tim)

Dick
--
Dick Flanagan, W6OLD                    GEnie: FLANAGAN
UUCP: ...!ucbvax!ucscc!slvblc!dick      Voice: +1 408 336 3481
INTERNET: slvblc!dick@ucscc.UCSC.EDU    LORAN: N037 05.5 W122 05.2
USPO: PO Box 155, Ben Lomond, CA 95005
tim@cit-vax.Caltech.Edu (Timothy L. Kay) (02/15/88)
In article <2946@dasys1.UUCP> wfp@dasys1.UUCP (William Phillips) writes:
>In article <567@naucse.UUCP>, wew@naucse.UUCP (Bill Wilson) writes:
>> Turbo C is superior to Quick C. On our campus here we have also
>> had Quick C blow away hard drives, so be careful.
>
>I know of a case where MSC (4.0 I think) utterly scrambled a hard drive
>(not backed up, natch), when a module compiled with one memory model was
>linked with modules compiled with a different model. I've forgotten the

This can happen with *any* C compiler that uses the large model. Once
you allow the compiler to generate code that messes with the segment
registers, you open yourself up to much more destructive bugs. If you
break the rules and link large model with small model, who knows what
is going to happen? Suppose the result is that your bug scribbles all
over low memory, exactly where some sectors from the FAT happen to be.
Then the program bombs. Your next disk write will scribble in random
places on the disk because the in-core copy of the FAT sectors has been
clobbered.

The only solution is to stick to small models or get some memory
protection a la Unix/386.

Tim
jrv@siemens.UUCP (James R Vallino) (02/15/88)
In article <2946@dasys1.UUCP> wfp@dasys1.UUCP (William Phillips) writes:
>I know of a case where MSC (4.0 I think) utterly scrambled a hard drive
>(not backed up, natch), when a module compiled with one memory model was
>linked with modules compiled with a different model. I've forgotten the
>exact details, but I saw what was left of the directories after the
>program was run -- total garbage. So watch out for that one!
>
To be somewhat fair to Microsoft, especially since the "exact details"
were forgotten, this type of thing can easily occur with ANY C program
which uses large data, that is, segment:offset pointers. You can get it
to happen even when you link the correct library. The problem is with
the hardware and operating system (or in this case the programmer), not
the compiler.

A program running under MS-DOS can get to any memory location in the
1 MByte memory space in which MS-DOS runs. This includes overwriting
critical sections of the operating system, such as the file allocation
table (FAT). Once this is trashed and a program does another write to
disk, you had better hope that the Gods are on your side.

The situation which seemed to cause this most frequently was
inexperienced C programmers writing code which ended up using pointers
which were NULL. These then read from or wrote through low memory, and
the resulting garbage ended up trashing the FAT. My boss was not
pleased when it was his system that got trashed the first time this
problem surfaced.

So if you want to blame a compiler, then blame ALL the C compilers for
not providing an option to check for use of NULL pointers at runtime.
(The MSC V5.0 compiler is the first C compiler I have ever worked with
which does have this option.)
--
Jim Vallino    Siemens Research and Technology Lab., Princeton, NJ
CSNET: jrv@siemens.siemens-rtl.com
UUCP: {ihnp4,philabs,seismo}!princeton!siemens!jrv
dave@westmark.UUCP (Dave Levenson) (02/21/88)
In article <443@siemens.UUCP>, jrv@siemens.UUCP (James R Vallino) writes:
> ...So if you want to blame a compiler then
> blame ALL the C compilers for not providing an option to check for use of
> NULL pointers at runtime. (The MSC V5.0 compiler is the first C compiler I
> have ever worked with which does have this option.)

MS-C release 3.0 and 4.0 also check for NULL pointer references at
runtime. They do this by loading a 16-byte constant (actually, their
copyright notice) into the first sixteen bytes of the data segment.
This is checked against another copy before and after the execution of
your program. If they don't compare, the runtime package displays the
message "NULL POINTER REFERENCE" as your program exits.

If you also managed to munge the operating system, however, this check
probably won't save you!
--
Dave Levenson
Westmark, Inc.        A node for news.
Warren, NJ, USA
{rutgers | clyde | mtune | ihnp4}!westmark!dave
nelson@sun.soe.clarkson.edu (Russ Nelson) (02/22/88)
In article <112@westmark.UUCP> dave@westmark.UUCP (Dave Levenson) writes:
>MS-C release 3.0 and 4.0 also check for NULL pointer references at
>runtime. They do this by [checking..] the first sixteen bytes of the data
>segment. If they don't compare, the runtime package displays the
>message "NULL POINTER REFERENCE" as your program exits.

Turbo C does this also. Perhaps what the original poster meant was that
MS-C 5.0 checks *every* pointer use for the null pointer.
--
-russ
AT&T: (315)268-6591  BITNET: NELSON@CLUTX
Internet: nelson@clutx.clarkson.edu
GEnie: BH01  Compu$erve: 70441,205
knop@dutesta.UUCP (Peter Knoppers) (02/22/88)
Why, oh why don't the .obj files in MSDOS contain some bits telling the
linker whether a function in the .obj file expects to be called with a
FAR or a NEAR call? This could prevent accidentally linking modules
compiled for different models.
--
Peter Knoppers, Delft Univ. of Technology
...!mcvax!dutrun!dutesta!knop  or  knop@dutesta.UUCP
wes@obie.UUCP (Barnacle Wes) (02/23/88)
In article <2946@dasys1.UUCP>, wfp@dasys1.UUCP (William Phillips) writes:
> In article <567@naucse.UUCP>, wew@naucse.UUCP (Bill Wilson) writes:
> > Turbo C is superior to Quick C.

At filling my garbage can, yes. I took a program that I wrote on Unix
(MicroPort System V/AT, if it makes any difference to you) and ported
it to the Atari ST using Mark Williams C. I kermitted over the data
file, and the program ran fine -- the same as Unix. Then I ported the
program to MS-DOS using Turbo C. I kermitted over the data file, and
the output was meaningless garbage. I #defined the debugging code to
find out what was happening to the data buffers (no Turbo Debugger, of
course). The debugging code did not work the way it did under Unix
either. I got mad and pitched TC into the garbage (from across the
room, accompanied by great noises).

I bought Quick C. I compiled the code without the debugging defines.
It ran just like Unix. Hmmm... If it runs under 'pcc' on the iAPX286,
MWC on the 68000, and Quick C on the 286, what's wrong with Turbo?

> > On our campus here we have also
> > had Quick C blow away hard drives, so be careful.
>
> I know of a case where MSC (4.0 I think) utterly scrambled a hard drive

Yah, Microsoft said something about MSC 5.0 blowing away hard disks on
PCs and XTs using Western Digital WX2 series disk controllers. I guess
Microsoft beta-testers don't stoop to using mere 8088's any more....
(I can't blame them, I wouldn't either.)
--
   /\            - "Against Stupidity,  -        {backbones}!
  /\/\  .  /\    -  The Gods Themselves -     utah-cs!utah-gr!
 /  \/ \/\/  \   -  Contend in Vain."   -      uplherc!sp7040!
/ U i n T e c h \ -     Schiller         -           obie!wes
mccarthy@well.UUCP (Patrick McCarthy) (02/25/88)
In article 7731 of comp.lang.c, knop@dutesta.UUCP (Peter Knoppers) writes:
> Why, oh why don't the .obj files in MSDOS contain some bits telling
> the linker whether a function in the .obj file expects to be called
> with a FAR or a NEAR call. This can prevent accidentally linking
> modules compiled for different models.

Fortunately, most of the time this will result in fixup overflow errors
at link time. Using make is an excellent way to avoid this problem
altogether (though many consider it a major pain for single-module
programs).

Pat McCarthy
Reply: mccarthy@well.UUCP
terry@wsccs.UUCP (terry) (03/12/88)
In article <1082@dutesta.UUCP>, knop@dutesta.UUCP (Peter Knoppers) writes:
>
> Why, oh why don't the .obj files in MSDOS contain some bits telling
> the linker whether a function in the .obj file expects to be called
> with a FAR or a NEAR call. This can prevent accidentally linking
> modules compiled for different models.

Because there is no way to set up a function such that if you call it
NEAR at such-and-such an address, you can still call it FAR at a prior
address. The pushes end up in the wrong order for near and far calls.
There was a very good column on this in 'The Devil's Advocate', a
column carried by several magazines, I think, and written by Stan
Kelly-Bootle (sp?). I think the fact that his Porsche license plate is
'MC68000' has nothing to do with this dim view.

Of course, the linker COULD do this, IF it were to generate code, but
linkers are not supposed to have to do that (if you can call what MSC
uses a linker :-( ). Barring generation of appropriate code by the
linker -- which would have to include translation of some previous code
generated by the compiler, such as _calls_ ;-), external refs, and so
forth, as well as segment-crossing code requiring different
instructions in large model -- you can't do it. Even if you did, your
'small model' code would be using JMPs and so forth with 16-bit
addressing, rather than the 8 bits normally needed in small model.

You should get an 'error: mixed code models' anyway, if you are using
a *real* compiler and linker...

terry@wsccs
jru@etn-rad.UUCP (John Unekis) (03/16/88)
In article <304@wsccs.UUCP> terry@wsccs.UUCP (terry) writes:
>In article <1082@dutesta.UUCP>, knop@dutesta.UUCP (Peter Knoppers) writes:
>>
>> Why, oh why don't the .obj files in MSDOS contain some bits telling
>> the linker whether a function in the .obj file expects to be called
>> with a FAR or a NEAR call. This can prevent accidentally linking
>> modules compiled for different models.
...
The problem is not with the linker, but with the compiler. The code
which performs a subroutine call is generated at compile time, and it
either pushes the segment register on the stack along with the offset
(FAR call), or it simply pushes the offset alone (NEAR call). The
problem with labelling entry points for the linker is that an entry
point may be NEAR when placed in the same segment as the calling code
by the linker, or FAR when it has to be placed in a separate segment
due to code size. Therefore a lot of large-model FAR subroutine calls
are redundant because the code actually fits in one segment (but there
are no guarantees).

If all this gets too frustrating for words, you could always buy a
computer with a real processor chip (like the 68000) that doesn't have
20-year-old START-OF-THE-ART problems like segment boundaries.
=========================================================================
My opinions are so good that my employer would like to own them, but
they remain mine alone.
{ihnp4 or voder}!wlbr!etn-rad!jru
=========================================================================
gof@crash.cts.com (Jerry Fountain) (03/16/88)
In article <304@wsccs.UUCP> terry@wsccs.UUCP (terry) writes:
>In article <1082@dutesta.UUCP>, knop@dutesta.UUCP (Peter Knoppers) writes:
>>
>> Why, oh why don't the .obj files in MSDOS contain some bits telling
>> the linker whether a function in the .obj file expects to be called
>> with a FAR or a NEAR call. This can prevent accidentally linking
>> modules compiled for different models.
>
> [lots more stuff deleted that doesn't really apply to this comment :-]

Digging around in the .obj file, I did run across an interesting item.
MSC *does* include a flag in the .obj to indicate the model for which
the file was generated. The bad part is that it is only placed in the
.obj for Xenix compatibility. (From the MS-DOS Encyclopedia, pg. 659:
COMENT record (0x88), comment class 0x9D.)

Hope this clears up this minor point... The info is there, LINK just
ignores it.
--
-----Jerry Fountain-----
UUCP: {hplabs!hp-sdd,sdcsvax,nosc}!crash!pnet01!gof
ARPA: crash!gof@nosc.mil
MAIL: 523 Glen Oaks Dr., Alpine, Calif. 92001
INET: gof@pnet01.CTS.COM
wfp@dasys1.UUCP (William Phillips) (03/24/88)
In article <304@wsccs.UUCP> terry@wsccs.UUCP (terry) writes:
> You should get a 'error: mixed code models' anyway, if you are using
> a *real* compiler and linker...

Precisely.
--
William Phillips                      {allegra,philabs,cmcl2}!phri\
Big Electric Cat Public Unix     {bellcore,cmcl2}!cucard!dasys1!wfp
New York, NY, USA                     !!! JUST SAY "NO" TO OS/2 !!!