[comp.sys.ibm.pc] Looking for a decent C compiler

conan@vax1.acs.udel.EDU (Robert B Carroll) (05/31/89)

I'm looking for a C compiler/linker for a DOS-based machine.
I want one that doesn't require overlays to get over the 640K
memory limit. It should have standard libraries and shouldn't
have any limits on structure or array sizes (i.e. no 64K page BS).
Completely K&R compatible. Oh yeah, quick and fast.
Does one exist, or am I going to have to keep using some
version of Unix? Send email to:


-- 
conan@vax1.acs.udel.edu OR conan@192.5.57.1
CONAN THE BARBARIAN of Cimmeria

leonard@bucket.UUCP (Leonard Erickson) (06/04/89)

In article <3741@udccvax1.acs.udel.EDU> conan@vax1.acs.udel.EDU (Robert B Carroll) writes:
<I'm looking for a C compiler/linker for a DOS-based machine.
<I want one that doesn't require overlays to get over the 640K
<memory limit. It should have standard libraries and shouldn't
<have any limits on structure or array sizes (i.e. no 64K page BS).
<Completely K&R compatible. Oh yeah, quick and fast.
<Does one exist, or am I going to have to keep using some
<version of Unix? Send email to:

If you are running MS-DOS you *can't* get around the 64K structure limit
or the 640K limit without some *very* messy programming. MS-DOS runs in
"real" mode. In real mode you are running an 8086 for all practical
purposes. To get access to more than 640K of memory you have to switch
to protected mode. And then DOS won't work.

	The 64K structure limit involves the 64K segment size of the
8088/86 and 80286. If you have a 386 *and* are willing to not use DOS, then
you have no problems...but it looks like you want to use DOS... :-(
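
For the record, the arithmetic behind that 64K window is simple.  A
minimal sketch (the function name is mine):

     /* Real-mode address formation: the 20-bit linear address is
      * (segment << 4) + offset, so one segment:offset pair can only
      * reach a 64K window at a time.
      */
     unsigned long linear_address(unsigned seg, unsigned off)
     {
         return ((unsigned long)seg << 4) + (unsigned long)off;
     }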

-- 
Leonard Erickson		...!tektronix!reed!percival!bucket!leonard
CIS: [70465,203]
"I'm all in favor of keeping dangerous weapons out of the hands of fools.
Let's start with typewriters." -- Solomon Short

mcdonald@uxe.cso.uiuc.edu (06/06/89)

(Person wants to break the 640K barrier and not worry about 64K segments)

>If you have a 386 *and* are willing to not use DOS then
>you have no problems...but it looks like you want to use DOS... :-(

If you want to use DOS and break the 640K/64K segment barrier, and
have a 386, there is a solution: run in 386 mode with the Phar Lap
runtime system and the MicroWay or MetaWare C compilers or the
MicroWay Fortran compiler. It is exactly like a 32-bit DOS. Works
great. AND it will multitask under DESQview.

Doug McDonald

phil@ux1.cso.uiuc.edu (06/07/89)

What's wrong with a C compiler that knows, if an array is larger than a 64K
segment, that it simply has to generate code to calculate the
correct segment address and displacement each time it needs to access an
array location?

It's not THAT hard to do, but sure, it will slow down the program.  What would
be nice is a compiler that CAN do this if the array is declared obviously
larger than 64K.  Also, pointer arithmetic may need this code, and this
should be specifiable as a compile option as well.

So, can any 8086 C compiler DO THAT?

--Phil howard--  <phil@ux1.cso.uiuc.edu>

brown@m.cs.uiuc.edu (06/08/89)

/* Written  5:09 pm  Jun  6, 1989 by phil@ux1.cso.uiuc.edu in m.cs.uiuc.edu:comp.sys.ibm.pc */
What's wrong with a C compiler that knows, if an array is larger than a 64K
segment, that it simply has to generate code to calculate the
correct segment address and displacement each time it needs to access an
array location?

It's not THAT hard to do, but sure, it will slow down the program.  What would
be nice is a compiler that CAN do this if the array is declared obviously
larger than 64K.  Also, pointer arithmetic may need this code, and this
should be specifiable as a compile option as well.

So, can any 8086 C compiler DO THAT?

--Phil howard--  <phil@ux1.cso.uiuc.edu>
/* End of text from m.cs.uiuc.edu:comp.sys.ibm.pc */


    Both Lattice C and Microsoft C (and doubtless many others)
can do most of this.

    In Lattice C, all pointers in large model (L) or large data model (D)
are managed by function calls which compute segment and offset locations
whenever a pointer is referenced.  Since this happens for all pointer
references without regard to the size of the object that they reference,
performance is low for all pointer operations.  

    Microsoft C takes a different approach:  by default, pointers are
manipulated by operations on the offset only (the object is presumed to be
less than 64K in size whether it is "near" (referenced by a 16-bit pointer
for an object in the default data segment) or "far" (referenced by a 32-bit
pointer for an object in an additional data segment)).  This may be
overridden by including the "huge" keyword in a declaration.  When a "huge"
pointer is referenced, both the segment and offset are calculated.  
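
    To make the three flavors concrete, here is a hedged sketch (the
variable names are mine, not from either manual):

     /* The three MSC pointer flavors described above.  Arithmetic on
      * near and far pointers touches only the 16-bit offset; only
      * huge pointers get full segment:offset arithmetic.
      */
     char near *np;   /* 16-bit offset into the default data segment  */
     char far  *fp;   /* 32-bit segment:offset, offset-only arithmetic */
     char huge *hp;   /* 32-bit segment:offset, normalized arithmetic  */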

    As far as having arrays larger than 64K automatically detected and
supported, there are some problems.  Since auto variables typically live on
the stack, and the stack is in a segment which can be no larger than 64K,
auto arrays larger than 64K present some real obstacles.  I'm not
aware of how either of the above compilers handles this case (I believe
both generate a compile-time error).  Typically such arrays are
allocated from the heap (via malloc).
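
    For example, a hedged sketch of that workaround in Microsoft C terms
(the size is arbitrary and the name is mine; halloc(), declared in
<malloc.h>, takes an element count and an element size):

     /* An auto array this large cannot fit in the 64K stack
      * segment, so it is allocated from the heap instead:
      */
     char huge *big = (char huge *)halloc(100000L, 1);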


	William Brown
	brown@cs.uiuc.edu
	University of Illinois at Urbana-Champaign

jca@pnet01.cts.com (John C. Archambeau) (06/09/89)

Yes, Turbo C can juggle arrays that are greater than 64K through the far heap
and huge pointers.  A huge pointer does the necessary overhead work of making
sure that the pointer points to the correct area in memory when it goes beyond
a segment boundary.
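
A minimal Turbo C sketch of that (assuming <alloc.h> for farmalloc() and
farfree(); the function name and size are mine):

     #include <alloc.h>

     void zero_big(void)
     {
         /* allocate >64K from the far heap; the huge pointer
          * renormalizes itself across segment boundaries */
         char huge *p = (char huge *)farmalloc(100000L);
         long i;

         if (p == 0)
             return;
         for (i = 0; i < 100000L; i++)
             p[i] = 0;
         farfree((void far *)p);
     }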
 
 /*--------------------------------------------------------------------------*
  * That's not an operating system, this is an operating system!
  *--------------------------------------------------------------------------*
  * UUCP: {nosc ucsd hplabs!hp-sdd}!crash!pnet01!jca
  * ARPA: crash!pnet01!jca@nosc.mil
  * INET: jca@pnet01.cts.com
  *--------------------------------------------------------------------------*/
  
#include <disclaimer.h>

cbema!las@att.ATT.COM (Larry A. Shurr) (06/10/89)

In article <8000050@m.cs.uiuc.edu> brown@m.cs.uiuc.edu writes:
>    In Lattice C, all pointers in large model (L) or large data model (D)
>are managed by function calls which compute segment and offset locations
>whenever a pointer is referenced.  Since this happens for all pointer
>references without regard to the size of the object that they reference,
>performance is low for all pointer operations.  

Yes, this is the default for Lattice's large model - option "-ml" on the lc
command line.  Essentially this is the "Huge" model in Turbo C and Microsoft C.
I think Lattice calls these pointers "normalized."  If you use the "-mls"
option, you essentially get the Turbo/Microsoft "Large" model, in which
arithmetic is performed on only the offset and no normalizing is done.

Similarly, "-md" gives you the Lattice large data/small code model with
normalizing, and "-mds" gives you offset arithmetic with no normalizing.
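
To summarize the switches described above (foo.c is just a placeholder
for your source file):

     lc -ml  foo.c     large code/large data, normalized pointers ("Huge")
     lc -mls foo.c     large model, offset-only arithmetic ("Large")
     lc -md  foo.c     small code/large data, normalized pointers
     lc -mds foo.c     small code/large data, offset-only arithmetic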

regards, Larry
-- 
Signed: Larry A. Shurr (cbema!las@att.ATT.COM or att!cbema!las)
Clever signature, Wonderful wit, Outdo the others, Be a big hit! - Burma Shave
(With apologies to the real thing.  The above represents my views only.)
(Please note my mailing address.  Mail sent directly to cbnews doesn't make it.)

fredex@cg-atla.UUCP (Fred Smith) (06/22/89)

In article <1989Jun17.081153.19335@ziebmef.uucp> mdfreed@ziebmef.UUCP (Mark Freedman) writes:
>
>(if you're running MS-DOS, you *can't* get around the 64k structure limit)
>  
>  I thought that one could dynamically allocate objects larger than 64K in the
>HUGE memory model, or by using HUGE pointers. According to the reference for
>Turbo C 2.0, farmalloc() can allocate blocks larger than 64K (from the far
>heap). I don't have Microsoft C, but I suspect that it is similar (except for
>the bugs introduced by the optimizer :-)).



In Microsoft C, HUGE space can be allocated with _halloc(). I believe
the MS documentation says that in the HUGE model, malloc() is
mapped to _halloc(). You can also explicitly declare a variable as a
pointer to a huge object, then initialize it thusly:

     char huge *p_huge;

     p_huge = (char huge *)_halloc(size, 1);  /* 'size' one-byte elements */


(Please note that I haven't actually done this today, but I HAVE
been through that section of the MS manuals multiple times, so I feel
safe in saying this.)

Good luck!

Fred

stuart@bms-at.UUCP (Stuart Gathman) (06/30/89)

In article <7272@cg-atla.UUCP>, fredex@cg-atla.UUCP (Fred Smith) writes:
> In article <1989Jun17.081153.19335@ziebmef.uucp> mdfreed@ziebmef.UUCP (Mark Freedman) writes:

> >(if you're running MS-DOS, you *can't* get around the 64k structure limit)

> >  I thought that one could dynamically allocate objects larger than 64K in the
> >HUGE memory model, or by using HUGE pointers. According to the reference for

This is almost true.  You can allocate arrays > 64K with the huge model/keyword.
HOWEVER, each array element must still be < 64K.  In fact, each element must
be of a size that evenly divides 65536!  (Some compilers may pad for this
automatically.  MSC 3.0 does not.)  The reason is that references to a
fundamental type cannot cross a segment boundary.  Yes, the compiler could
insert extra padding in a structure just at the segment boundary, but getting
all the structures in an arbitrary array to start at a segment boundary is
the real problem.
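
A hedged illustration (the struct and field names are mine): since the
only divisors of 65536 are powers of two, each element has to be padded
to a power-of-two size, e.g.

     /* Pad a 6-byte record to 8 bytes so that sizeof(struct rec)
      * evenly divides 65536 and no element straddles a segment
      * boundary (per the MSC 3.0 behavior above).
      */
     struct rec {
         long key;      /* 4 bytes on a 16-bit compiler */
         int  value;    /* 2 bytes */
         char pad[2];   /* manual padding: 6 -> 8 bytes */
     };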

PS.  No flames about stupid segments.  The 286 is a 16-bit processor.  The
fact that it can run 32-bit programs with a few kludges in no way
detracts from its 16-bit performance.  (Which is faster than a 386 at
the same clock, BTW.  Yes, 25MHz 286s are available.)
-- 
Stuart D. Gathman	<stuart@bms-at.uucp>
			<..!{vrdxhq|daitc}!bms-at!stuart>

Ralf.Brown@B.GP.CS.CMU.EDU (06/30/89)

In article <166@bms-at.UUCP>, stuart@bms-at.UUCP (Stuart Gathman) writes:
}In article <7272@cg-atla.UUCP>, fredex@cg-atla.UUCP (Fred Smith) writes:
}This is almost true.  You can allocate arrays > 64K with huge model/keyword.
}HOWEVER, each array element must still be < 64K.  In fact, each element must
}be of a size evenly divisible into 65536!  (Some compilers may pad for this
}automatically.  MSC 3.0 does not.)  The reason is that references to a fund-
}amental type cannot cross a segment boundary.  Yes, the compiler could

This is not necessary, since all huge pointers are normalized after every
pointer operation, resulting in an offset that is never greater than 15.
Thus, the array elements can be any size up to 65520 bytes, as it is
impossible for an element to cross a HUGE pointer's segment limit unless the
item is at least 65521 bytes.
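
A minimal sketch of that normalization step (my arithmetic, not any
particular compiler's library code):

     /* Fold all but the low 4 bits of the offset into the segment,
      * leaving the offset in 0..15 (linear address = seg * 16 + off).
      */
     void normalize(unsigned *seg, unsigned *off)
     {
         *seg += *off >> 4;   /* carry 16-byte paragraphs into segment */
         *off &= 0x0F;        /* keep only the low 4 bits              */
     }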

--
UUCP: {ucbvax,harvard}!cs.cmu.edu!ralf -=-=-=- Voice: (412) 268-3053 (school)
ARPA: ralf@cs.cmu.edu  BIT: ralf%cs.cmu.edu@CMUCCVMA  FIDO: Ralf Brown 1:129/46
			Disclaimer? I claimed something?
"When things start going your way, it's usually because you stopped going the
 wrong way down a one-way street."

ee-sno@wasatch.utah.edu (Niel Orcutt) (07/01/89)

I have both the Microsoft C compiler v5.1 and the Borland C compiler
version 2.0.  The Microsoft C compiler stores huge pointers with
four hex digits of value in the offset and adjusts the segment only
at 64K boundaries.  This causes the problem mentioned by an earlier
poster: elements in a structure stored in a huge area of memory must
be arranged such that an element near a 64K boundary ends exactly at
the boundary; otherwise, the last element will be fetched partly from
the end of the 64K area and partly from the beginning of that same
area (the offset simply wraps).  This method, however, has the
advantage that pointer renormalization only needs to occur once every
64K.  The Borland compiler keeps the value in the segment and only one
hex digit (0-15) in the offset; this eliminates the above problem but
requires the pointer to be renormalized every 16 bytes.  I don't know
how much slower this method is . . .
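
In other words (a hedged illustration; the example address is mine):

     /* Two huge-pointer encodings of the same linear address 0x12345:
      *   MSC-style:     seg = 0x1000, off = 0x2345  (offset carries the
      *                  value, segment adjusted only every 64K)
      *   Borland-style: seg = 0x1234, off = 0x0005  (offset kept in
      *                  0..15, renormalized after every operation)
      */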

mdfreed@ziebmef.uucp (Mark Freedman) (07/22/89)

(if you're running MS-DOS, you *can't* get around the 64k structure limit)
  
  I thought that one could dynamically allocate objects larger than 64K in the
HUGE memory model, or by using HUGE pointers. According to the reference for
Turbo C 2.0, farmalloc() can allocate blocks larger than 64K (from the far
heap). I don't have Microsoft C, but I suspect that it is similar (except for
the bugs introduced by the optimizer :-)).

jca@pnet01.cts.com (John C. Archambeau) (07/23/89)

Just out of curiosity, what's the problem with > 64K structs?  Not that I've
ever had a reason to use them.  Is it the addressing mode that structs use?
I know > 64K arrays work in the compact, large, and huge models under TC 1.5.

 /*--------------------------------------------------------------------------*
  * Flames: /dev/null (on my Minix partition)
  *--------------------------------------------------------------------------*
  * ARPA  : crash!pnet01!jca@nosc.mil
  * INET  : jca@pnet01.cts.com
  * UUCP  : {nosc ucsd hplabs!hp-sdd}!crash!pnet01!jca
  *--------------------------------------------------------------------------*/
  
#include <disclaimer.h>

#include <stdio.h>

int main (void)
{
#if defined (MSDOS) || defined (OS2) || defined (VMS)
 printf ("You call that an operating system???\n");
#else
 printf ("Unix might not be perfect...\n");
 printf ("  ...but it's the best I've seen thus far...\n");
#endif
 return 0;
}