[comp.sys.ibm.pc] Microsoft C 4.0 large models

nwc@cucca.columbia.edu (Nicholas W. Christopher) (07/31/87)

I wrote a language and interpreter using Lex, YACC and curses on a VAX
and am now attempting to get it to run under MSC 4.0.  The language
is window-oriented and the PC-curses windows require 4K+.  My code
bottoms out at 5 or 6 windows (I malloc space for other things as well) in
the small model.

I tried compiling under a large model and the code ran for a while and then
froze.  My question is: what are the things to worry about in large models?
Are there special considerations when passing arguments by address?  Will
short integers behave?  Where should I be looking (CodeView just hangs, so it's
no help)?

Thanks,
/nwc

P.S. I am really sick of seeing "Unknown Compiler Error, Contact Microsoft
Technical Support"; I hope 5.0 does not lose as much as 4.0.
	

darrylo@hpsrlc.HP.COM (Darryl Okahata) (07/31/87)

In comp.sys.ibm.pc, nwc@cucca.columbia.edu (Nicholas W. Christopher) writes:

> I wrote a language and interpreter using Lex, YACC and curses on a VAX
> and am now attempting to get it to run under MSC 4.0.  The language
> is window-oriented and the PC-curses windows require 4K+.  My code
> bottoms out at 5 or 6 windows (I malloc space for other things as well) in
> the small model.
> 
> I tried compiling under a large model and the code ran for a while and then
> froze.  My question is: what are the things to worry about in large models?
> Are there special considerations when passing arguments by address?  Will
> short integers behave?  Where should I be looking (CodeView just hangs, so it's
> no help)?
> 
> Thanks,
> /nwc
> 
> P.S. I am really sick of seeing "Unknown Compiler Error, Contact Microsoft
> Technical Support"; I hope 5.0 does not lose as much as 4.0.
> 	
> ----------

      The BIGGEST problem (in my opinion) in transporting code from UN*X to
MSDOS is the use of 0 (in UN*X) for NULL.  In the small memory model,
this is no problem, as a pointer is the same size as an integer.
However, in the large memory model, the size of a pointer (4
bytes) is TWICE the size of an integer (2 bytes).  This becomes a problem
when you try to pass a NULL pointer to a function.  Let's say that you have
a program fragment like:

	#include <stdio.h>	/* IMPORTANT!!!!! */

	bomb(ptr)
	char	*ptr;
	{
		if (ptr != NULL)
			*ptr = 'X';
	}

	foo()
	{
		bomb(0);	/* this blows up in the large memory model */
		bomb(NULL);	/* this works fine */
	}

Assuming the large memory model, when foo() calls bomb(0), two bytes (an
integer) of zeros are pushed onto the stack.  When bomb() checks the value
of ptr, it is looking for a four-byte object (a pointer to char); the
pointer offset will be zero (this is what was pushed onto the stack), but
the pointer segment will be whatever is there on the stack and will, in all
probability, be nonzero.  The conditional expression in the if statement in
bomb() will, as a result, be true (because the pointer segment is probably
nonzero), and some random location in memory will be trampled.  The result?
Crash, boom, bang!
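
An explicit cast works too, since it forces a full four-byte null pointer
onto the stack; a quick sketch:

	bomb((char *) 0);	/* safe in any memory model */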

     Why does NULL work where a plain "0" doesn't?  Well, when <stdio.h> is
included (it is included, isn't it?), NULL is defined as either "0" or "0L"
("0" for small memory models, and "0L" for large ones), which takes care of
the problem quite nicely (and transportably, I might add).
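
Roughly speaking, the header tests the compiler's predefined memory-model
symbols (the M_I86xx names) and picks the right constant.  A sketch, not
the literal MSC header text:

	#if defined(M_I86SM) || defined(M_I86MM)	/* near data pointers */
	#define NULL	0
	#else					/* compact/large/huge: far data */
	#define NULL	0L
	#endif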

     -- Darryl Okahata
	{hplabs!hpcea!, hpfcla!} hpsrla!darrylo
	CompuServe: 75206,3074

Disclaimer: the above is the author's personal opinion and is not the
opinion or policy of his employer or of the little green men that
have been following him all day.

davis@bdmrrr.bdm.com (Arthur Davis x4675) (08/03/87)

If you have moved your code to a large model, I hope you have changed
your malloc calls to _fmalloc (and free to _ffree).  You can get some
strange results using malloc in a far environment.  One result you won't
get is the compiler message "Oh gosh, you really shouldn't use malloc in
a large model".  Not to start an argument with anyone, but it is for
reasons such as these that I love 68000-family architectures.  Good luck.

platt@emory.uucp (Dan Platt) (08/04/87)

In article <3320039@hpsrlc.HP.COM> darrylo@hpsrlc.HP.COM (Darryl Okahata) writes:
>In comp.sys.ibm.pc, nwc@cucca.columbia.edu (Nicholas W. Christopher) writes:
>
>> I wrote a language and interpreter using Lex, YACC and curses on a VAX ...
>> I tried compiling under a large model and the code ran for a while and then
>> froze. My question is, what are the things to worry about in large models?


There may also be problems with the amount of space required by a
malloc'ed data object (if you malloc something larger than 64K,
the routine may lock up).  For this reason, huge structures and pointers
are available, and halloc() works like malloc() to allocate memory but
returns huge-model-compatible pointers.
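
A short sketch of how that looks (the halloc(count, size) calling sequence
is from memory, so check the manual; hfree() releases the block):

	#include <malloc.h>

	long fill()
	{
		char huge *big;
		long i, n = 100000L;		/* deliberately more than 64K */

		big = (char huge *) halloc(n, sizeof(char));
		if (big == NULL)
			return (-1L);
		for (i = 0L; i < n; i++)	/* huge pointer arithmetic crosses */
			big[i] = 0;		/* segment boundaries for you */
		hfree(big);
		return (n);
	}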

Also, you might want to check whether you're running out of memory
(if your job invokes other processes via system(), you may be
running into problems; I've had trouble with the 'make' provided
with MSC v4.00 when my segments are too large).

Dan

darrylo@hpsrlc.HP.COM (Darryl Okahata) (08/04/87)

In comp.sys.ibm.pc, davis@bdmrrr.bdm.com (Arthur Davis x4675) writes:

> If you have moved your code to a large model, I hope you have changed
> your malloc calls to _fmalloc (and free to _ffree).  You can get some
> strange results using malloc in a far environment.  One result you won't
> get is the compiler message "Oh gosh, you really shouldn't use malloc in
> a large model".  Not to start an argument with anyone, but it is for
> reasons such as these that I love 68000-family architectures.  Good luck.
> ----------

     As long as you stick with using only one memory model (and include the
proper <.h> files), you can use malloc() without any problems.  It's only when
you start mixing models (like mixing the small and large models) that large
monsters reach up and bite various sensitive parts of one's body :-).

     -- Darryl Okahata
	{hplabs!hpcea!, hpfcla!} hpsrla!darrylo
	CompuServe: 75206,3074

Disclaimer: the above is the author's personal opinion and is not the
opinion or policy of his employer or of the little green men that
have been following him all day.

greg@gryphon.CTS.COM (Greg Laskin) (08/05/87)

In article <880@bdmrrr.bdm.com> davis@bdmrrr.bdm.com (Arthur Davis x4675) writes:
>If you have moved your code to a large model, I hope you have changed
>your malloc calls to _fmalloc (and free to _ffree).  You can get some
>strange results using malloc in a far environment.  

malloc and free work just fine in large model.  _fmalloc and _ffree are
for hybrid models.  For example, to allocate data referenced by a far
pointer in a SMALL model program, you have to use _fmalloc, but in a
LARGE model program, malloc works fine because you link the program to
the large model library.
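
For instance, a small-model sketch of that mixed-model case (far is
Microsoft's keyword; <malloc.h> declares _fmalloc and _ffree):

	#include <malloc.h>

	void grab()
	{
		char far *buf;

		buf = (char far *) _fmalloc(20000);	/* far heap, outside the */
		if (buf != NULL) {			/* 64K default data segment */
			buf[0] = 'x';
			_ffree(buf);
		}
	}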


-- 
Greg Laskin   
"When everybody's talking and nobody's listening, how can we decide?"
INTERNET:     greg@gryphon.CTS.COM
UUCP:         {hplabs!hp-sdd, sdcsvax, ihnp4}!crash!gryphon!greg
UUCP:         {philabs, scgvaxd}!cadovax!gryphon!greg

ehughes@violet.berkeley.edu (08/05/87)

On many DOS compilers, the allocated memory pool is taken from a single
segment, i.e. 64K max.  This is true EVEN IN LARGE MODEL.  It is simply
an outright laziness on the part of the library writers not to take
this into consideration.  There may be performance problems using 4-byte
pointers, but that could be remedied by having optional libraries for those
cases; the linker would search the specialized libraries first and link
in the correct routines and ignore the default ones.  Another solution is
to use 2-byte pointers everywhere in a segment except for the first and last
ones.

Yet it is not done.  Chalk one up to the general immaturity of MSDOS software.

Eric Hughes
ucbvax!violet!ehughes

kneller@cgl.ucsf.edu (Don Kneller) (08/05/87)

In article <880@bdmrrr.bdm.com> davis@bdmrrr.bdm.com (Arthur Davis x4675) writes:
>If you have moved your code to a large model, I hope you have changed
>your malloc calls to _fmalloc (and free to _ffree).  You can get some
>strange results using malloc in a far environment.  One result you won't
>get is the compiler message "Oh gosh, you really shouldn't use malloc in
>a large model".  Not to start an argument with anyone, but it is for
>reasons such as these that I love 68000-family architectures.  Good luck.

Is this true?  I've never had a problem with malloc in large model
programs.  My understanding is _fmalloc is to be called from small
model programs if you want to use far pointers to access more memory
than you would normally be allowed (i.e., more than 64K).  In large
model programs, all pointers are far by default.  My experience is
you can use malloc in either model with impunity.

The use of _fmalloc is only necessary in mixed model programming.
You don't have to venture into mixed model programming if you don't
want to.  With proper coding practices (as in, not assuming pointers
and ints are equivalent), the details of large versus small model
programming are handled by the compiler and libraries.
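
One concrete habit along those lines: declare any function that returns a
pointer before you call it (for malloc(), include <malloc.h>); otherwise
the compiler assumes an int return and quietly throws away half of the
4-byte pointer.  A sketch:

	#include <malloc.h>	/* declares malloc() -- without a declaration
				   in scope its return is taken to be a 2-byte
				   int, and the far pointer gets truncated */

	char *getbuf()
	{
		return (malloc(100));	/* full 4-byte pointer in large model */
	}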

Please be more specific than "you can get some strange results".  This
is not very useful.

Don

-----
	Don Kneller
UUCP:	...ucbvax!ucsfcgl!kneller
ARPA:	kneller@cgl.ucsf.edu
BITNET:	kneller@ucsfcgl.BITNET

platt@emory.uucp (Dan Platt) (08/06/87)

In article <4589@jade.BERKELEY.EDU> ehughes@violet.berkeley.edu () writes:
>On many DOS compilers, the allocated memory pool is taken from a single
>segment, i.e. 64K max.  This is true EVEN IN LARGE MODEL.  It is simply
>an outright laziness on the part of the library writers not to take
>this into consideration....

I would like to point out that the large pointers don't handle the
problem of an element size exceeding 64K.  However, huge structure
types do allow a pointer to a structure whose size exceeds 64K.
There are huge pointer types, and it's possible to allocate a
simple linear array that exceeds 64K in size.  In this sense the
complaint isn't fair (also, there's a memory allocation routine for
huge pointers called halloc(), essentially a huge malloc, which handles
the corresponding job).  The command line parameter allowing huge
compilation is /AH.  This works fine if the large memory library is there.
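
For instance, the compile line would look something like this (just a
sketch; the file name is made up):

	cl /AH bigprog.c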

Also, it's possible to allocate square arrays that exceed 64K even though
you're using only the large model, as follows:

...
	double **a;		/* i and n declared in the code elided above */

	a = (double **) malloc(n * sizeof(double *));
	for (i = 0; i < n; i++)
		a[i] = (double *) malloc(n * sizeof(double));
...

This will allocate an n x n array using only large-model pointers,
as long as n*sizeof(double) < 64K (which is usually true, since most
DOS machines around now wouldn't have room for a 64K x 64K array
anyway).

Hope this is a useful clarification.

Dan

davidsen@steinmetz.steinmetz.UUCP (William E. Davidsen Jr) (08/06/87)

		H O G W A S H ! !

In article <4589@jade.BERKELEY.EDU> ehughes@violet.berkeley.edu () writes:
|On many DOS compilers, the allocated memory pool is taken from a single
|segment, i.e. 64K max.  This is true EVEN IN LARGE MODEL.  It is simply

This may be true in "many DOS compilers," but NOT true in MS C 4.0. The
only limitation is that no *one* allocation may be more than 64k. You
have made a statement which at best is incorrect, and done a vast
disservice to anyone who wants to run C on DOS.

|an outright laziness on the part of the library writers not to take

An outright laziness to post bullshit like this without ever reading the
manual or even checking with a tiny test program.

|this into consideration.  There may be performance problems using 4-byte
|pointers,
There sure are.
|          but that could be remedied by having optional libraries for those
|cases; the linker would search the specialized libraries first and link
|in the correct routines and ignore the default ones.
That's just what it does; it's called "-Ml" (model: large).
|                                                      Another solution is
|to use 2-byte pointers everywhere in a segment except for the first and last
|ones.
|
|Yet it is not done.  Chalk one up to the general immaturity of MSDOS software.

This entire posting is totally without valid technical comment. I have a
number of programs which allocate large address spaces, including
MicroEMACS and several proprietary editors. You should post a public
apology for this crap.
|
|Eric Hughes
|ucbvax!violet!ehughes

Disclaimer: I was a beta tester for MSC 3 and 4, and we didn't miss
anything as blatant as a missing large model. I don't have any financial
interest in MS; I'm just offended by a posting which starts from an
invalid technical point and then comments on the laziness of the
compiler writers and the immaturity of MSDOS software.

-- 
	bill davidsen		(wedu@ge-crd.arpa)
  {chinet | philabs | sesimo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me

jpn@teddy.UUCP (John P. Nelson) (08/07/87)

In article <4589@jade.BERKELEY.EDU> ehughes@violet.berkeley.edu () writes:
>On many DOS compilers, the allocated memory pool is taken from a single
>segment, i.e. 64K max.  This is true EVEN IN LARGE MODEL.  ...
>There may be performance problems using 4-byte pointers ...

Either he is confused, or I am confused.

Large model implies 4-byte pointers.  I don't know about "many DOS compilers",
but with MSC 4.0, a large model program will allocate memory beyond the 64K
"default data" segment (segment group) by calling DOS memory allocation
functions.
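
A quick way to see this for yourself is a sketch like the following,
compiled for the large model (/AL); it keeps grabbing 16000-byte blocks
until DOS has no more, which on a machine with a few hundred K free goes
well past 64K total:

	#include <stdio.h>
	#include <malloc.h>

	main()
	{
		long total = 0L;

		while (malloc(16000) != NULL)	/* far heap grows via DOS calls, */
			total += 16000L;	/* not limited to one 64K segment */
		printf("got roughly %ldK from the heap\n", total / 1024L);
		return (0);
	}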

brianc@cognos.uucp (Brian Campbell) (08/12/87)

In article <880@bdmrrr.bdm.com> davis@bdmrrr.bdm.com (Arthur Davis x4675) writes:
! 
! If you have moved your code to a large model, I hope you have changed
! your malloc calls to _fmalloc (and free to _ffree).  You can get some
! strange results using malloc in a far environment.  One result you won't
! get is the compiler message "Oh gosh, you really shouldn't use malloc in
! a large model".  Not to start an argument with anyone, but it is for
! reasons such as these that I love 68000-family architectures.  Good luck.

     I beg to differ.  The malloc/free routines in the large model are the
*SAME* as the _fmalloc/_ffree routines.  The only problems I can
understand complaints about are not being able to allocate more than 64K
cleanly, and problems in using mixed models.
     P.S.  I prefer the 68000 family too.
-- 
Brian Campbell          uucp: decvax!utzoo!dciem!nrcaer!cognos!brianc
Cognos Incorporated     mail: 3755 Riverside Drive, Ottawa, Ontario, K1G 3N3
(613) 738-1440          fido: sysop@163/8