davidi@well.sf.ca.us (David Intersimone) (09/01/90)
In the message with C++ compilers for MSDOS review:

>Article 8958 of 8974, Sat 04:13.
>Subject: C++s for MSDOS - review
>From: SRWMRBD@windy.dsir.govt.nz (ROBERT)
>Organization: DSIR, Wellington, New Zealand
>Date: 11 Aug 90 11:13:55 GMT
>This is a comparison of 3 C++s for MS DOS.
>The compilers are
> Glockenspiel C++ version 2.00a running with Microsoft C version 5.1;
> Zortech C++ version 2.06;
> Turbo C++ version 1.00.

I got a message from Robert, and he had debug information turned on when he determined the size of TC++'s .exe files. That is why TC++'s .exe files are so large. If you have the professional package or a copy of our Turbo Debugger product, you can use the TDSTRIP.EXE utility to get rid of the debug info without having to relink.

Turbo C++ is both an ANSI C and a C++ 2.0 native code compiler. In order to pack the compiler into the 640K of DOS, we use our VROOMM technology to dynamically load code segments on demand. If you have expanded (EMS) or extended (XMS) memory, compile times can be improved. If you use the Programmer's Platform, specify the command line switches /e and/or /x (you can also use the TCINST program to set those options up); the compiler will use both types of memory to cache its code segments. If you use the command line compiler, specify -Qe (for expanded memory) and -Qx (for extended memory) to have the compiler cache its code segments.

Also, for anyone reading this message: if you are going to compare sizes of generated .obj files, you have to look into the object file itself to see how much code and data was generated. There are lots of comment records in the .obj file for debugging, the path and timestamp of all header files used (autodependencies), and other information for the linker. You can use our TDUMP.EXE utility, which comes with the professional package or the debugger and tools product, to dump out the sizes of code segments and data segments for .obj and .exe files.

david intersimone ("davidi")
director, developer relations
borland
davidi@well.sf.ca.us
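For concreteness, a minimal sketch of the invocations described above. The file names are placeholders, and exact option spellings can differ between versions, so check the documentation shipped with your package:

    tcc -c -Qe -Qx myprog.cpp
        (compile only, caching the compiler's code segments in EMS and XMS)
    tdstrip myprog.exe
        (strip debug records from the executable without relinking)
    tdump myprog.obj
        (dump the records in the .obj, including code and data segment sizes)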
John.Passaniti@f201.n260.z1.FIDONET.ORG (John Passaniti) (09/02/90)
> Also for anyone reading this message, if you are going
> to compare sizes of generated .obj files - you have to
> look into the object file itself to see how much code
> and data was generated. There are lots of comment records
> in the .obj file for debugging, path and timestamp of all
> header files (autodependencies) used, and other information
> for the linker. You can use our TDUMP.EXE utility that comes
> with the professional package or the debugger and tools
> product to dump out the sizes of code segments and data
> segments for .obj and .exe files.
>
> david intersimone ("davidi")
> director, developer relations
> borland
> davidi@well.sf.ca.us

[This may have been discussed here in comp.lang.c++ before, but I just joined the newsgroup thanks to a local Fidonet gateway.]

Something I have been curious about is whether Borland pays attention to some of the hobbyist networks, such as Fidonet. Recently in the C_ECHO conference, there has been discussion about the quality of code generated by Turbo C++. I won't repost the entire message here, but the basic idea was that Turbo C++ appears to have been derived from a compiler designed for a processor other than the Intel 80x86 family. The author claimed it looked as if the compiler was generating code with a Motorola mentality: he showed how a compiler for a 680x0 might optimize things, and then showed how the code generated by Turbo C++ reflected the same kinds of decisions.

Another message dealt with a bug found in the code generated for switch statements: unsigned values were not being compared in the right sequence, giving rise to subtle errors (a sketch of the kind of construct involved appears below).

I've kept these messages and would be happy to forward them to you. But my real question is whether Borland pays any attention to hobbyist networks like Fidonet, or even to Usenet.
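I don't have the exact fragment from the C_ECHO message handy, so the following is only a hypothetical reconstruction of the kind of construct described: a sparse switch over an unsigned value, where cases at or above 0x8000 can go wrong if the generated comparison sequence tests them as signed 16-bit quantities or in the wrong order.

#include <stdio.h>

/* Hypothetical reconstruction, not the actual C_ECHO test case.
   On the 16-bit DOS compilers under discussion, 0x8000 has the
   sign bit set, so a comparison tree that treats these case
   values as signed can branch to the wrong label. */
void classify(unsigned value)
{
    switch (value) {
    case 0x0001U:  printf("one\n");               break;
    case 0x7FFFU:  printf("largest signed\n");    break;
    case 0x8000U:  printf("sign bit set\n");      break;
    case 0xFFFFU:  printf("all bits set\n");      break;
    default:       printf("other\n");             break;
    }
}

int main(void)
{
    classify(0x8000U);   /* should print "sign bit set"; a signed or
                            mis-ordered comparison sequence is exactly
                            where this would misdispatch */
    return 0;
}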