[comp.windows.ms] Pleasures and Penalties of the Large Model

slewis@aerospace.aero.org (09/05/90)

	The documentation on MS Windows advises against the use of the large
model. I understand the dangers of locking down large blocks, but there are
other reasons for using the large model. In porting code between other
systems and Windows, there is a common assumption in most C environments
that there is only one type of pointer. Going to the large model forces all
pointers to be far pointers (I think for this discussion we can ignore the
difference in speed between accessing near and far pointers). Forcing all
pointers to be far pointers greatly simplifies the management of code.
	If I have a well-behaved application which requires, say, about 10K of
local storage and 200K of global storage which it manages with the GlobalAlloc
mechanism, is there any real penalty in specifying the large model and buying
a great simplification in my pointer management?
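
To make it concrete, here is roughly the pattern I have in mind. This is
only a sketch, untested, with invented names; I keep each block under 64K
so that plain far pointers (no huge arithmetic) can address it:

    #include <windows.h>

    #define NBLOCKS   4
    #define BLOCKSIZE (50L * 1024L)   /* 4 x 50K, roughly the 200K above */

    static HANDLE hBlock[NBLOCKS];

    /* Allocate the global storage as moveable blocks. */
    BOOL AllocStorage(void)
    {
        int i;
        for (i = 0; i < NBLOCKS; i++) {
            hBlock[i] = GlobalAlloc(GMEM_MOVEABLE, BLOCKSIZE);
            if (hBlock[i] == NULL)
                return FALSE;
        }
        return TRUE;
    }

    /* Lock a block only while working on it, so Windows is free
       to move it the rest of the time. */
    void UseBlock(int i)
    {
        LPSTR lp = GlobalLock(hBlock[i]);
        /* ... work through the far pointer lp ... */
        GlobalUnlock(hBlock[i]);
    }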

spolsky-joel@cs.yale.edu (Joel Spolsky) (09/05/90)

In article <82934@aerospace.AERO.ORG> slewis@aerospace.aero.org () writes:
>	If I have a well-behaved application which requires, say, about 10K of
>local storage and 200K of global storage which it manages with the GlobalAlloc
>mechanism, is there any real penalty in specifying the large model and buying
>a great simplification in my pointer management?

First, the penalty is that you will take a serious hit in performance
when you go to the large memory model, because pointers become 32
bits, which cannot be shuffled about all at once (except in a 32-bit
memory model). Of course, I would bet that most of your pointers are
32-bit LPSTRs from GlobalLock anyway...

In real-mode Windows, using the large memory model locks down all your
global memory; this is not a nice thing to do, especially in real mode,
where there is so little memory to start with. However, in 386 enhanced
mode, large model does not lock things down, because the 386 hardware
can cope with the fact that far pointers do not have to be absolute
addresses.

In short, the nicest way to eliminate the problems of near and far
pointers forever is just to require your program to run in enhanced mode
only. This might or might not be appropriate for your application. It
buys you the additional advantage that you can lock down all your
memory at allocation time and keep it locked, without hogging any
physical memory.
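
For example, the startup check could look something like this (a sketch
only; GetWinFlags and its WF_ENHANCED bit are the test I have in mind):

    #include <windows.h>

    /* Refuse to run outside 386 enhanced mode, so that locking
       memory down at allocation time stays harmless. */
    BOOL CheckMode(void)
    {
        if (!(GetWinFlags() & WF_ENHANCED)) {
            MessageBox(NULL, "This program requires 386 enhanced mode.",
                       "Sorry", MB_OK | MB_ICONHAND);
            return FALSE;
        }
        return TRUE;
    }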

Joel Spolsky
spolsky@cs.yale.edu                                        Talk Hard.
Speaking for no one.

goodearl@world.std.com (Robert Goodearl) (09/05/90)

With regard to using large-model Windows programs, there is another penalty
which is not to be ignored: you cannot easily (and perhaps not at all) run
multiple instances of a large-model program, because of the multiple data
segments that are created in large-model programming.

When writing programs for medium model, prototyping all of your functions
will help greatly in dealing with pointer types. Also, in Microsoft C 6.0
there are model-independent library functions to help out with this.
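
For example (a sketch from memory, untested; check the names and the header
against the C 6.0 run-time documentation):

    #include <string.h>     /* the _f routines in MS C 6.0 */
    #include <windows.h>

    /* _fstrcpy takes far pointers in every memory model, so this
       compiles the same way in medium model as in large. */
    void CopyName(HANDLE hDst, const char *szSrc)
    {
        char FAR *lpDst = GlobalLock(hDst);
        if (lpDst != NULL) {
            _fstrcpy(lpDst, (const char FAR *)szSrc);
            GlobalUnlock(hDst);
        }
    }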
-- 
Bob Goodearl -- goodearl@world.std.com

dzoey@terminus.umd.edu (Joe I. Herman) (09/06/90)

In article <82934@aerospace.AERO.ORG> slewis@aerospace.aero.org () writes:
>
>.................... there is a common assumption in most C environments
>that there is only one type of pointer. Going to the large model forces all
>pointers to be far pointers (I think for this discussion we can ignore the
>difference in speed between accessing near and far pointers). Forcing all
>pointers to be far pointers greatly simplifies the management of code.

There is also a common assumption in most C environments that
sizeof(int) == sizeof(char *). This is true for small model,
with the exception of those pointers explicitly declared far.
Luckily, a good compiler will usually catch this type of assumption,
just as it catches near/far pointer conversions. Function
prototyping is an important tool here.
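
A contrived sketch of both points (invented names, untested):

    #include <windows.h>

    static char buf[10];

    /* In small model an int and a (near) pointer are both 16 bits,
       so the first assignment "works".  A far pointer is 32 bits
       and does not fit. */
    void Demo(void)
    {
        int n;
        char FAR *lp;

        n  = (int)(char *)buf;      /* near pointer fits in an int */
        lp = (char FAR *)buf;       /* far pointer is 32 bits      */
        /* n = (int)lp; would quietly discard the segment half;
           without the cast, a good compiler warns here. */
    }

    /* With a prototype in scope, the compiler widens near
       arguments to far automatically and flags real mismatches. */
    void TakesFar(char FAR *lp);

    void Caller(void)
    {
        TakesFar(buf);              /* buf widens to a far pointer */
    }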

Even though you want to ignore it, there is a noticeable performance
hit in using far pointers over near pointers, especially on an
i486 (if what they say in comp.arch is true). If possible, I try
to stay away from far data pointers.


			Joe Herman
			U. of Md.

dzoey@terminus.umd.edu
--
"Everything is wonderful until you know something about it."

kensy@microsoft.UUCP (Ken SYKES) (09/07/90)

In article <82934@aerospace.AERO.ORG> slewis@aerospace.aero.org () writes:
>	If I have a well-behaved application which requires, say, about 10K of
>local storage and 200K of global storage which it manages with the GlobalAlloc
>mechanism, is there any real penalty in specifying the large model and buying
>a great simplification in my pointer management?

You won't be able to run multiple instances of your program, and if you
leave malloc() and the other C runtime memory functions as is, the memory is
not only FIXED but PAGE LOCKED as well. This is a very bad thing: page locked
means it can't be swapped to disk, which means you will be confined to
physical memory. If you use GlobalAlloc to allocate moveable memory and then
immediately lock it down, you can get memory that is FIXED but not page
locked. My suggestion is to write it in small or medium model and use far
pointers where you need them. This will make it a friendlier Windows app.
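
In other words, something like this (a sketch; error handling kept minimal
and the function name is made up):

    #include <windows.h>

    /* Moveable memory that is locked immediately stays put (FIXED)
       but is NOT page locked, so Windows can still swap it to disk. */
    LPSTR AllocFriendly(DWORD cb)
    {
        HANDLE h = GlobalAlloc(GMEM_MOVEABLE, cb);
        if (h == NULL)
            return NULL;
        return GlobalLock(h);
    }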

Ken Sykes
Disclaimer: The above opinions are solely my own.

rpk@rice-chex.ai.mit.edu (Robert Krajewski) (09/08/90)

Does Microsoft consider it a bug that specifying GMEM_FIXED in
GlobalAlloc wires down memory?

Now, in some ways, it's not a bug, because the semantics correspond to
those of the function GlobalFix; it's less confusing when one word
(Fix) means one thing. On the other hand, when malloc calls
GlobalAlloc in such a manner, fixing all the pages is clearly too
much, and renders one's program ill-behaved.

So will GlobalAlloc be changed, or will the Windows large-model
version of malloc (_fmalloc) be changed?

Robert P. Krajewski
Internet: rpk@ai.mit.edu ; Lotus: robert_krajewski.lotus@crd.dnet.lotus.com

griffith@hpavla.AVO.HP.COM (Jeff Griffith) (09/13/90)

I'm really surprised and dismayed that Microsoft has forced the near/far
pointer issue into the design of code.

Between '84 and '86, and for the last two years, I've had to put up with
worrying about whether I'm using far pointers or near pointers, the size
differences, whether or not a given module can access a piece of data, and
changes in user interface specifications that force more and more data
to be accessed with far pointers or that increase the size of modules
because everyone wants access to all data from everyplace, all of the time.

The most pleasant experience I've had as a software engineer was using
only large model (between '86 and '89). I found that the time I had
spent worrying about pointer problems could be used instead to refine the
data accessing algorithms, to the point where all performance penalties
were hidden behind the ~2 second repaint time of the application. I should
tell you that these were data analysis programs, using up to 6 processes
to handle between 4 Meg and 10 Meg of data, running under Xenix 3.3 to 3.5
on Intel 286-based multiuser systems, and writing to VT-220 terminals.

Most of the performance was gained by using multiple processes to handle
the various data flows. In all cases, I was able to divide the program into
various event handlers, and each handler had a "block" point so that, in
most cases, only one process was running at any given time. The few cases
of overlap occurred when the user was walking through the menu while the
program was reading data, and the data stream could have been held off.
Otherwise, the use of multiple processes simplified the coding by isolating
the data. Such a system could be built even more easily using a single
process and C++.

Additionally, I was able to respond to changes in customer requirements more
rapidly, program debugging time was reduced (which gave me even more time to
spend with customers), and program reliability improved.

Whatever speed you gain (less than 25%) by changing memory model, you can
also gain by improved program design. If you really need speed after you've
got a clean design, you can take the ".s" file produced by the compiler
and use it as the basis for writing assembly code; you can probably get
another 50% that way. But don't use the near/far pointer argument
to hide a poor design; if you want speed, you have to design for it.

cb@sequoia.execu.com (Christopher D. Brown) (09/13/90)

In article <16980007@hpavla.AVO.HP.COM> griffith@hpavla.AVO.HP.COM (Jeff Griffith) writes:
Amen to Jeff Griffith's observation (if I may paraphrase) that optimal
optimization occurs at the conceptual/design level. If a program is outside
its performance/size requirements, Intel processor programming-model
selection is not a promising strategy for solving the problem.

My humble opinion ... Chris "Large Model" Brown
-- 
Christopher D. Brown

Digital: {uunet|cs.utexas.edu}!execu!cb
Analog: (512) 327-7070
Physical: Execucom, 108 Wild Basin Road, Two Wild Basin, Austin, TX 78764

kensy@microsoft.UUCP (Ken SYKES) (09/16/90)

In article <16980007@hpavla.AVO.HP.COM> griffith@hpavla.AVO.HP.COM (Jeff Griffith) writes:
>I'm really surprised and dismayed that Microsoft has forced the near/far
>Whatever speed you gain (less than 25%) by changing memory model, you can
>also gain by improved program design. If you really need speed after you've
>got a clean design, you can take the ".s" file produced by the compiler
>and use it as the basis for writing assembly code; you can probably get
>another 50% that way. But don't use the near/far pointer argument
>to hide a poor design; if you want speed, you have to design for it.

PLEASE don't edit assembler code generated by the compiler. This violates
some of the very same rules the rest of your message points out: writing
code that is algorithmically efficient and maintainable. Editing compiled
code blows maintainability right out the door. Are you going to throw
away the original C code and stick with the assembler, or are you going to
"optimize" again every time the C code needs to change? This is bad news.
Instead, find some other way to optimize the code at the C level. Maybe
use a few NEAR pointers where necessary, as in the sketch below. Yes, I
know you hate them, but they accomplish some of the same things your asm
tweaks would, and they're much easier to maintain.
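
For example (a sketch; _near is the Microsoft C keyword, and the data has
to live in the default data segment for this to work):

    /* An explicitly near pointer in the hot loop keeps the
       compiler from reloading a segment register on every
       access, even in a large-model build. */
    static char _near scratch[1024];

    void ClearScratch(void)
    {
        char _near *p = scratch;
        int n = sizeof(scratch);

        while (n--)
            *p++ = 0;
    }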

Ken Sykes
Disclaimer: The above opinions are solely my own.