[comp.sys.ibm.pc.programmer] Looking for experience with big project upgrades from MSC 5.1 to 6.0

pete@Octopus.COM (Pete Holzmann) (06/13/90)

I'm working on a very large commercial software project using MSC 5.1 and
some assembler. It is complex, huge, hairy, etc. Doesn't use Windows,
but does use graphics, interrupts, EMS, lots and lots of overlays, etc.

We're contemplating an upgrade to MSC 6.0, hoping for (a) smaller code;
(b) faster code; (c) better operation of CodeView under MagicCV; (d) fewer
bugs in the MSC compiler. Most especially (a). We have source code for
everything but our graphics library (GSS*CGI), which is hopefully not a
problem.

Has anybody out there been through (or most of the way through) an upgrade
of a big pile of code from 5.1 to 6.0? If so, I (and probably others in
this newsgroup) would love to benefit from your experience! If somebody
from Microsoft cares to comment on internal product upgrade experience,
or to summarize feedback from customers, that would be great too...

    1. In general, did it go very smoothly? (i.e. no gotcha's vs. so many
	gotcha's we cried for a month... :-))

    2. What compiler switches did you end up using in order to have code
	that always runs ok without tweaking things everywhere? (Or did
	you end up tweaking everywhere?)

    3. What kind of global tweaks were required? (i.e. under MSC 5.1, you
	must make sure that any function definitions in an include file do
	not span more than one line, or you'll break CV...)

    4. What kind of hassles have you had with auxiliary tools (linkers,
	debuggers, etc.)? Does everything still work ok?

    5. Do you have a general idea of code space saved, data space saved,
	speed improvements? (Probably also in this category: rumor has it
	that constants can be/are now compiled into the local code segment,
	which would be a big win in an overlaid architecture; is this true?)

    6. Ignoring the fact that in general you've got to upgrade your tools
	eventually, was the benefit from upgrading more valuable than the
	hassle, or was it more trouble than it was worth? (i.e., if you had
	it to do over, would you wait if you could, maybe for 6.1, or even
	switch to a different compiler :-))?

*Thank* you!

Pete
-- 
Peter Holzmann, Octopus Enterprises   |(if you're a techie Christian & are
19611 La Mar Ct., Cupertino, CA 95014 |interested in helping w/ the Great
UUCP: {hpda,pyramid}!octopus!pete     |Commission, email dsa-contact@octopus)
DSA office ans mach=408/996-7746;Work (SLP) voice=408/985-7400,FAX=408/985-0859

kdq@demott.COM (Kevin D. Quitt) (06/14/90)

In article <1990Jun13.132843.13966@Octopus.COM> pete@Octopus.COM (Pete Holzmann) writes:
>I'm working on a very large commercial software project using MSC 5.1 and
>some assembler. It is complex, huge, hairy, etc. Doesn't use Windows,
>but does use graphics, interrupts, EMS, lots and lots of overlays, etc.
>
...
>    1. In general, did it go very smoothly? (i.e. no gotcha's vs. so many
>	gotcha's we cried for a month... :-))

    It went smoother for us than expected.  We kept both compilers around,
but within a week we discarded 5.1.

>    2. What compiler switches did you end up using in order to have code
>	that always runs ok without tweaking things everywhere? (Or did
>	you end up tweaking everywhere?)

    Generally speaking, we use /Ox /Gs, and where speed is a requirement
we use /Gsr - there are major gains to be had here, and only minor
annoyances in getting them.  The r tells the compiler to use a much
faster (albeit non-standard) calling convention, except for functions
explicitly declared as _cdecl.  In heavily subroutine-oriented code, we
get a gain of 10 to 40% in speed.
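
For illustration, here's a minimal sketch of that opt-out (the function
names are made up):

	/* Compiled /Gr, functions default to the register calling
	 * convention; routines shared with assembler, or called through
	 * old-style pointers, get pinned to the stack convention: */
	extern int _cdecl asm_entry(int a, int b);  /* C stack convention  */
	extern int fast_helper(int a, int b);       /* register convention */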

    For those (unfortunately fairly common) functions that are broken by
loop optimization, we were already using #pragmas in the file, so we have
not been affected.  We do seem to require fewer of them, though.
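
For anyone who hasn't used them, the pragmas in question look roughly
like this (function name made up; 6.0 also documents a newer optimize
pragma for the same job):

	/* turn loop optimization off around a function it miscompiles */
	#pragma loop_opt(off)
	void fragile_function(void)
	{
		/* ... the code that was being miscompiled ... */
	}
	#pragma loop_opt(on)

	/* the MSC 6.0 spelling of the same thing */
	#pragma optimize("l", off)
	/* ...function... */
	#pragma optimize("l", on)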

    Another nice feature is that 6.0 will put debugging info into a file
compiled with optimization.


>    3. What kind of global tweaks were required? (i.e. under MSC 5.1, you
>	must make sure that any function definitions in an include file do
>	not span more than one line, or you'll break CV...)

    Maybe we're just lucky, but we haven't observed any new ones.


>
>    4. What kind of hassles have you had with auxiliary tools (linkers,
>	debuggers, etc.)? Does everything still work ok?

    CodeView's a lot nicer; it automatically remembers most of your
context from the previous session.  I'm using it without a mouse, so
it's not quite as convenient as it might be.  One problem we did have
was a breakpoint failing to trigger when its line was executed.
That was finally traced to a compiler problem: the NOP used to
word-align a RET was flagged as the first byte of the next line, so CV
made the NOP the breakpoint.  Of course, all the linked code jumped to
the next instruction.

    We don't use the PWB though.  We use Epsilon - it's a better editor, and
does a better job of make.  When we have time to decode the help files, 
we'll add the context-sensitive help the PWB has.


>    5. Do you have a general idea of code space saved, data space saved,
>	speed improvements? (Probably also in this category: rumor has it
>	that constants can be/are now compiled into the local code segment,
>	which would be a big win in an overlaid architecture; is this true?)

    We don't compile for size, since we spawn and exec all over the place,
but we do get 10-40% better speed.


>    6. Ignoring the fact that in general you've got to upgrade your tools
>	eventually, was the benefit from upgrading more valuable than the
>	hassle, or was it more trouble than it was worth? (i.e., if you had
>	it to do over, would you wait if you could, maybe for 6.1, or even
>	switch to a different compiler :-))?

    We were having problems meeting real-time requirements until we
switched to 6.0. 

-- 
 _
Kevin D. Quitt         Manager, Software Development    34 12 N  118 27 W
DeMott Electronics Co. 14707 Keswick St.   Van Nuys, CA 91405-1266
VOICE (818) 988-4975   FAX (818) 997-1190  
MODEM (818) 997-4496 Telebit PEP last      demott!kdq   kdq@demott.com

      96.37% of the statistics used in arguments are made up.

west@turing.toronto.edu (Tom West) (06/14/90)

Pete Holzmann writes:
>Has anybody out there been through (or most of the way through) an upgrade
>of a big pile of code from 5.1 to 6.0?

  I have been involved in porting a 300k program from MSC 5.1 to MSC 6.0.  My
experience has been middling (it took longer than I expected).  Unfortunately,
I have had a major problem that has put my upgrade project on hold.
Specifically, MSC 6.0 generates bad code under certain circumstances, even
under what are considered "safe" optimizations.  This means you can never be
certain your program is reliable unless you have an exhaustive test
suite.
 
>    1. In general, did it go very smoothly? (i.e. no gotcha's vs. so many
>	gotcha's we cried for a month... :-))

  In my case, I had a bit of assembler.  I was required either to rewrite
the assembler or to revise the declarations of the assembler routines to be
_cdecl.  (The -Gr fastcall convention requires this.)  Also, if you have
routines that redeclare built-in routines, this too must be changed.  Your
redeclarations of the built-in routines (strcpy, etc.) are almost
certainly different from the ones in string.h and so on.  This,
again, is because all built-in routines are declared _cdecl.  For me,
this was all a bit of a problem, because I was working with C that had
been generated by another program, so fiddling with the C code was not
very pleasant.  Eventually, I solved this by writing a sed script to delete
all built-in routine declarations in the C code, and including my own .h
file at the beginning of the program that pulls in all the regular .h files
I might need.
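
To make that concrete, here is a reconstructed example (not the actual
generated code):

	/* The machine-generated C was full of bare declarations like: */
	char *strcpy();		/* under /Gr this picks up the register  */
				/* convention, but the library strcpy is */
				/* _cdecl, so the declarations clash     */

	/* The fix: delete such lines and include the real headers. */
	#include <string.h>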

>    2. What compiler switches did you end up using in order to have code
>	that always runs ok without tweaking things everywhere? (Or did
>	you end up tweaking everywhere?)

  I am optimizing for size and use /Osleazr /Gsr normally.  However, I had
a number of fatal compiler errors that forced me to use slightly tweaked
options.  Most annoying of all, I have a large library that must be compiled
-Alfu.  Unfortunately, if you have a double routine call a double routine with
that memory option, the compiler crashes.  This is annoying because there is
no option I can turn off to correct it.  I have temporarily solved the
problem by declaring the routines in question -AL.  This means that my
library routines may not work under certain circumstances, but at least I
know about it.
  Unfortunately, I have come across a case where MSC 6.0 generates bad code.
(See my previous post.)  This means that I don't really dare use it!  I have
no way of knowing if the code produced is suitable to be shipped to our sites!
The MS Product Support people have been sympathetic and they are sending my
bug reports to the dev team, but there is not a lot they can do (especially
in Canada).

>    3. What kind of global tweaks were required? (i.e. under MSC 5.1, you
>	must make sure that any function definitions in an include file do
>	not span more than one line, or you'll break CV...)

In order to do the optimizations correctly, or at all well, you must give
MSC as much memory as possible -- preferably around the 600k mark.  This meant
that we had to abandon our regular development setup of keeping all the source
on a SUN, with PC-NFS on the PC.  Instead, we had to move everything down to
the PC and eliminate all TSRs.  Note that if you don't do this, you won't
get much in the way of warnings, but your program may be up to 10% bigger!
(When I first compiled some sections of code, I was disappointed to find no
difference in code size between 5.1 and 6.0.  Then I read on the net that
it needs space!  Glad I read the net carefully.)  You will still get messages
about routines being too large to optimize regardless, but this should only
occur in the larger routines.

>    3. What kind of global tweaks were required? (i.e. under MSC 5.1, you
>	debuggers, etc.)? Does everything still work ok?

Everything seemed to work well.  I have only played with CV, but it seems
acceptable.  We don't use the PWB, so I haven't tested it.

>    5. Do you have a general idea of code space saved, data space saved,
>	speed improvements? (Probably also in this category: rumor has it
>	that constants can be/are now compiled into the local code segment,
>	which would be a big win in an overlaid architecture; is this true?)

  Our compiled program went from 327k to 296k.  We have not redesigned our
code to take advantage of new features in MSC 6.0.  The new _based
keyword might be very useful to others, though.

>    6. Ignoring the fact that in general you've got to upgrade your tools
>	eventually, was the benefit from upgrading more valuable than the
>	hassle, or was it more trouble than it was worth? (i.e., if you had
>	it to do over, would you wait if you could, maybe for 6.1, or even
>       switch to a different compiler :-))?

  In light of the bad code generation, I have no choice but to suggest you wait
for a bug fix.  From talking with the Tech Support people, I think they are
having a fair number of problems with this release (they sound a bit frazzled).
I have encountered two fatal compiler errors and one example
of bad code generation using "safe" optimizations.  I made the mistake of
incorporating a few features of MSC 6.0 that I really like into my code
(_heapmin may be a solution to heap fragmentation!!).  To go any further in
product development, I will have to go back to 5.1, or more likely put
everything on hold until 6.1 or 6.01 or something.
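
For anyone curious, the _heapmin call itself is about as simple as it
gets (a minimal sketch; see malloc.h for the exact declaration):

	#include <malloc.h>

	void release_heap(void)		/* made-up wrapper name */
	{
		/* After freeing a burst of allocations, hand unused heap
		 * space back to the OS; returns 0 on success, -1 on failure. */
		if (_heapmin() == -1)
			;	/* couldn't shrink the heap -- not fatal */
	}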

  The tech support guy claimed they had massively beta tested this thing.  I
wonder what happened?  (He wondered too...)

					Tom West

rwh@me.utoronto.ca (Russell Herman) (06/14/90)

In article <1990Jun13.132843.13966@Octopus.COM> pete@octopus.COM (Pete Holzmann) writes:
>Has anybody out there been through (or most of the way through) an upgrade
>of a big pile of code from 5.1 to 6.0? If so, I (and probably others in
>this newsgroup) would love to benefit from your experience!

I was originally going to reply to Pete, but it occurred to me that I hadn't
seen any kind of general summary of MSC6 experiences starting from MSC5.1
source or other UNIX-based source.  So I'll hunker down by the campfire and
tell my story in the hopes that some other people will as well.

I guess I qualify in the MSC5.1 to 6 large project port dept.  It
was PERL, which is about as large as I want to tackle single-handedly.  I started
from Diomidis Spinellis' port.  After getting it to compile under MSC5.1,
I took the precaution of also porting Larry's regression test.  Problems
showed up that were resistant to various code twiddles and optimization
toggles.

When MSC6 arrived, I pushed the source through it.  A few changes were
necessary where some CYA redeclarations for multiplatforms didn't work
any longer.  Just wait til you see what MSC6's declaration of 'errno' looks
like!  It wasn't a big deal though.  The result passed the regression
test, and you'll be seeing it announced on SIMTEL20 in the near future.
I'm also fooling around with PCCTS and GNUCHESS3.1.

Observations

	1.  The code isn't much smaller.

	2.  Some of the code looks faster, but I haven't measured anything
	    yet.  Probably won't either, too much like work :-) 

	3.  Global optimization isn't worth much.  The compiler spews a
            steady stream of 'function too large for global optimizer'
	    messages for functions that aren't absurdly large, at least
	    by professional standards.

	4.  With /Ox, it's slooow.  Up to 6 temp files get opened.  I wouldn't
	    even dream of using the Workbench.  BTW, I'm on a 20MHz 386
	    system with a 30-odd msec disk, no caching, no RAMDISK.
	    Sucks back a lot of disk space too.

	5.  Under MSC5.1, the first thing I do to a piece of UNIX code is
	    insert a
		#define		register
	    in some central header file.  This hasn't changed!  Unfortunately
	    this may require changes in the body of the code, since
	    some code contains
		register x;
	    in lieu of
		register int x;  or  register unsigned x;
	    Portabilists please keep in mind!  (A short sketch of the
	    pitfall follows this list.)

	6.  I've had 2 compiler blowups, both of which could be worked around.

	7.  The EMM Codeview is nice.  Contrary to early reports, I can
	    run it in a DV partition and TELIX in another.  EXT=512 is
	    on my QEMM call.  No himem.sys or anything like that;  QEMM5
	    does the whole job.

	8.  NMAKE is a long-overdue addition.  It's big enough though that
	    with large modules, there isn't enough room for it and the
	    compiler.  Ought to be smart enough to swap itself out like
	    OPUS MAKE can.

	9.  INTEL architecture highlights malloc and pointer problems.
	    On a few other ports, I've hung or trashed memory, gotten
	    close to isolating the problem, and sent info back to authors.
	    I always get back a fix :-)
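
A short sketch of the pitfall from point 5, for the record:

	#define register	/* the central-header trick */

	register x;	/* expands to "x;" -- no longer a declaration, */
			/* so the compiler chokes on it                */
	register int x;	/* expands to "int x;" -- still fine           */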

The bottom line is, I don't regret having spent the $$.  I do hope, though,
to see a 6.1 upgrade in about six months for around $25 that will fix some
of the bugs/problems.

Russ Herman
INTERNET: rwh@me.utoronto.ca  UUCP: ..uunet!utai!me!rwh

pete@Octopus.COM (Pete Holzmann) (06/14/90)

Thanks, everybody!

It looks like we'll be waiting for another release. Or maybe taking a look
at other compiler vendors in the not-too-distant future.

One hint: are you having trouble with malloc'd memory fragmentation? If you
don't mind the overhead of wasting a paragraph (16 bytes) per malloc and
a 16-byte granularity, just call the DOS Int 21h allocation functions
directly! We gave up on MSC's malloc routines and went straight to DOS. No
more hassles, works great every time, and you can even manage the allocation
strategy (most stuff gets allocated normally, but we switch to the high-RAM
strategy in our database section; this avoids fragmentation of the low-RAM
blocks!)
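
Here's roughly the idea, as a minimal sketch (not our production code;
routine names are made up and error handling is trimmed):

	#include <dos.h>

	/* Allocate 'paras' paragraphs (16 bytes each) straight from DOS.
	 * Returns the segment address, or 0 on failure. */
	unsigned dos_alloc(unsigned paras)
	{
		union REGS r;
		r.h.ah = 0x48;			/* DOS: allocate memory block */
		r.x.bx = paras;
		intdos(&r, &r);
		return r.x.cflag ? 0 : r.x.ax;	/* AX = segment on success */
	}

	/* Free a block allocated above. */
	void dos_free(unsigned seg)
	{
		union REGS r;
		struct SREGS s;
		segread(&s);
		r.h.ah = 0x49;			/* DOS: free memory block */
		s.es = seg;
		int86x(0x21, &r, &r, &s);
	}

	/* Pick the allocation strategy: 0 = first fit, 1 = best fit,
	 * 2 = last fit (the "high-RAM" strategy mentioned above). */
	void dos_set_strategy(unsigned strat)
	{
		union REGS r;
		r.x.ax = 0x5801;		/* DOS: set allocation strategy */
		r.x.bx = strat;
		intdos(&r, &r);
	}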

Pete
-- 
Peter Holzmann, Octopus Enterprises   |(if you're a techie Christian & are
19611 La Mar Ct., Cupertino, CA 95014 |interested in helping w/ the Great
UUCP: {hpda,pyramid}!octopus!pete     |Commission, email dsa-contact@octopus)
DSA office ans mach=408/996-7746;Work (SLP) voice=408/985-7400,FAX=408/985-0859

streich@boulder.Colorado.EDU (Mark Streich) (06/14/90)

In article <90Jun13.221146edt.20004@me.utoronto.ca> rwh@me.utoronto.ca (Russell Herman) writes:
>
>	3.  Global optimization isn't worth much.  The compiler spews a
>            steady stream of 'function too large for global optimizer'
>	    messages for functions that aren't absurdly large, at least
>	    by professional standards.
>
Does anyone have experience using this compiler under OS/2?  I wonder if
these memory problems go away then.

cramer@optilink.UUCP (Clayton Cramer) (06/15/90)

In article <1990Jun13.132843.13966@Octopus.COM>, pete@Octopus.COM (Pete Holzmann) writes:
> I'm working on a very large commercial software project using MSC 5.1 and
> some assembler. It is complex, huge, hairy, etc. Doesn't use Windows,
> but does use graphics, interrupts, EMS, lots and lots of overlays, etc.
> 
> We're contemplating an upgrade to MSC 6.0, hoping for (a) smaller code;
> (b) faster code; (c) better operation of CodeView under MagicCV; (d) fewer
> bugs in the MSC compiler. Most especially (a). We have source code for
> everything but our graphics library (GSS*CGI), which is hopefully not a
> problem.
> 
> Has anybody out there been through (or most of the way through) an upgrade
> of a big pile of code from 5.1 to 6.0? If so, I (and probably others in
> this newsgroup) would love to benefit from your experience! If somebody
> from Microsoft cares to comment on internal product upgrade experience,
> or to summarize feedback from customers, that would be great too...
> 
>     1. In general, did it go very smoothly? (i.e. no gotcha's vs. so many
> 	gotcha's we cried for a month... :-))

No.  There were continuing problems with the linker, but those seem
to have gone away now.  It *appears* that we needed to recompile code
used to make libraries.  There are also some new symbols associated
with _chkstk, and if you have rewritten _chkstk, as we have, you 
will need to make sure that you have the following symbols present,
or you will still get the standard chkstk from the library:

__chkstk
STKHQQ
__aFchkstk
__aaltstkovr

>     2. What compiler switches did you end up using in order to have code
> 	that always runs ok without tweaking things everywhere? (Or did
> 	you end up tweaking everywhere?)

The only problem with switches was that some code didn't work unless
we disabled optimization with -Od.  (I've already posted about this
elsewhere).

>     3. What kind of global tweaks were required? (i.e. under MSC 5.1, you
> 	must make sure that any function definitions in an include file do
> 	not span more than one line, or you'll break CV...)

No problems of this sort.

>     4. What kind of hassles have you had with auxiliary tools (linkers,
> 	debuggers, etc.)? Does everything still work ok?

We were using the Everex disk caching software, and it appears that
it won't work in conjunction with the HIMEM.SYS driver and CodeView --
you start getting weird complaints about unrecoverable disk errors.
Use the SMARTDRV.SYS driver instead of EVCACHE and it works fine.
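
For reference, the relevant CONFIG.SYS lines look something like this
(paths and cache size are placeholders for whatever suits your system):

	device=c:\dos\himem.sys
	device=c:\dos\smartdrv.sys 1024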

>     5. Do you have a general idea of code space saved, data space saved,
> 	speed improvements? (Probably also in this category: rumor has it
> 	that constants can be/are now compiled into the local code segment,
> 	which would be a big win in an overlaid architecture; is this true?)

If there is a size or performance improvement, it's not obvious.

>     6. Ignoring the fact that in general you've got to upgrade your tools
> 	eventually, was the benefit from upgrading more valuable than the
> 	hassle, or was it more trouble than it was worth? (i.e., if you had
> 	it to do over, would you wait if you could, maybe for 6.1, or even
> 	switch to a different compiler :-))?
> 
> Peter Holzmann, Octopus Enterprises   |(if you're a techie Christian & are

The new CodeView with 6.0 is a big improvement.  Because it can take
advantage of extended memory for symbol space, and apparently loads
part of CodeView up there, we no longer need to use MagicCV to debug
our humongous application; this is a big advantage, because it means
we can now debug on 286 systems with extended memory; MagicCV requires
a 386 system.  Also, the new CodeView using extended memory is MUCH
faster than using MagicCV and expanded memory.  We also don't run
into problems with too many symbols anymore, which means we no longer
have to decide what modules we want to debug at any given time.

Overall, it was worth the hassle to upgrade.
-- 
Clayton E. Cramer {pyramid,pixar,tekbspa}!optilink!cramer
Pipe bomb: appropriate technology for living lightly on Mother Earth. :-)
----------------------------------------------------------------------------
Disclaimer?  You must be kidding!  No company would hold opinions like mine!

mshiels@tmsoft.uucp (Michael A. Shiels) (06/15/90)

Overall we have not had much luck with MSC 6.0's floating point code
generation.  I posted a piece of code a couple of days ago which showed how
simple code could get the compiler all confused.  All in all, we are not too
impressed at the moment, since they also changed some of their internal
helper functions and now we have to figure them out and add them to our
custom standard library.

david@csource.OZ.AU (david nugent) (06/17/90)

In <1990Jun14.134206.24858@Octopus.COM> pete@Octopus.COM (Pete Holzmann) writes:

>It looks like we'll be waiting for another release. Or maybe taking a look
>at other compiler vendors in the not-too-distant future.

JPI TopSpeed C deserves a good look.  It performs excellent optimisation,
and its ability to pass parameters in registers (completely customisable)
causes almost no problems.  And all that with very little loss in compile
speed - the compiler rates about the same as TCC 2.0.

The libraries (for which you can obtain source at a reasonable cost) all
use register calling conventions - unlike MSC's library, in which most
routines are _cdecl.  The libraries are highly compatible with both TC
and MSC.


david

-- 
         * Unique Computing Pty Ltd, Melbourne, Australia. *
       david@csource.oz.au  3:632/348@fidonet  28:4100/1@signet