[net.micro] Small, Medium and Large Models

LMTRA@SRI-KL.ARPA (07/16/85)

Could anybody out there take the time to give me a definition of the sizes
of memory models referred to as small, large and whatever.  Are there specific
sizes intended or is it a matter of rough scale?

Thanks,
LMTRA@SRI-KL.arpa
(Leon Traister)

-------

how@SU-ISL.ARPA (07/16/85)

Specifically:
	small model means that all addresses can be specified with only
		a 16-bit offset;
	large model means that an address requires both a 16-bit offset and
		a 16-bit segment number (i.e., more than 64K addressability).
Since this can be true of the data space and the code space independently,
some compilers give you four choices.
Note, however, that with some C compilers this does NOT necessarily mean
that arrays can be larger than 64K, since pointer arithmetic is usually
restricted to the 16-bit offset part.  This also means that subtracting
pointers that fall in different segments, in a program compiled with a
large data model, may not work.
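As a rough sketch (the function name is invented), this is the sort of C
that goes wrong under a large data model when the two blocks land in
different segments and only the offsets get subtracted:

	char *malloc();

	long
	bad_diff()
	{
		char *a = malloc(30000);	/* may land in one segment  */
		char *b = malloc(30000);	/* ... and this in another  */
		return (b - a);			/* large model: only the
						   16-bit offsets are
						   subtracted, so the
						   result is meaningless   */
	}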
Large and small model behave the same for code space, except that large
model calls will be slower.
Hope this is specific enough.

Dana How

jshaver@APG-3.ARPA (John Shaver STEEP-TM-AC 879-7602) (07/16/85)

Are you really talking about micros, minis, or mainframes?  Are you talking
about a MICRO-VAX II or a Motorola 6800?

rlk@wlcrjs.UUCP (Richard L. Klappal) (07/21/85)

>
>Could anybody out there take the time to give me a definition of the sizes
>of memory models referred to as small, large and whatever.  Are there specific
>sizes intended or is it a matter of rough scale?
>
>Thanks,
>LMTRA@SRI-KL.arpa
>(Leon Traister)
>
>-------

I don't know how standard it is, if at all, but Lattice C on the
IBM PC specifies 4 memory models

	64 K prog space  +  64 K data space
	64 K prog space  +   1 M data space
	 1 M prog space  +  64 K data space
	 1 M prog space  +   1 M data space

The models differ in how data pointers are represented (16-bit vs. 32-bit)
and in whether calls and returns are "near" or "far".  I think that on the
808x, any call to code outside the current 64 K segment MUST be a far call
(the return address pushed on the stack is then more than 16 bits).
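A minimal sketch, assuming a compiler with Microsoft/Lattice-style "near"
and "far" keywords (the names below are invented):

	char near *np;		/* 16-bit offset into the default data segment */
	char far  *fp;		/* 32-bit segment:offset, can point anywhere
				   in the megabyte                              */

	extern void far redraw();	/* reached with a far call: the stacked
					   return address is CS:IP (32 bits),
					   not just IP (16 bits)                */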



Richard Klappal

UUCP:		..!ihnp4!wlcrjs!uklpl!rlk  | "Money is truthful.  If a man
MCIMail:	rklappal		   | speaks of his honor, make him
Compuserve:	74106,1021		   | pay cash."
USPS:		1 S 299 Danby Street	   | 
		Villa Park IL 60181	   |	Lazarus Long 
TEL:		(312) 620-4988		   |	    (aka R. Heinlein)
-------------------------------------------------------------------------

cramer@kontron.UUCP (Clayton Cramer) (07/25/85)

> 
> Could anybody out there take the time to give me a definition of the sizes
> of memory models referred to as small, large and whatever.  Are there specific
> sizes intended or is it a matter of rough scale?
> 
> Thanks,
> LMTRA@SRI-KL.arpa
> (Leon Traister)
> 
> -------

On the Intel 8086 family of processors (yes, I know many of you consider this
family to consist entirely of the illegitimate and addlepated), there are
several different memory models because of the segmented architecture of
the 8086 family.  For Microsoft and Intel software, the following models 
have the following names:

Small           all code in one 64K segment
                all data in one 64K segment
                
Medium          code can occupy several segments, none more than 64K long
                all data in one 64K segment

Large           code can occupy several segments, none more than 64K long
                data can occupy several segments, none more than 64K long
                
There are reputed to be other models supported by other compiler writers,
with names like Tiny and Huge, but in what way they differ from the
"official" models, I'm not sure.  (Maybe there's room for more models, with
names like Federal.)
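As a rough sketch of how the models above show up in practice: a tiny test
program like the one below prints different pointer sizes under each model.
The figures in the comment are what you would expect, though the exact
behavior depends on the compiler.

	#include <stdio.h>

	int  (*codep)();	/* points into code space */
	char  *datap;		/* points into data space */

	main()
	{
		/* Small:  data 2, code 2  (offset only for both)
		   Medium: data 2, code 4  (near data, far code)
		   Large:  data 4, code 4  (far data, far code)   */
		printf("data %d, code %d\n",
			(int)sizeof(datap), (int)sizeof(codep));
		return 0;
	}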

bright@dataio.UUCP (Walter Bright) (07/29/85)

In article <411@kontron.UUCP> cramer@kontron.UUCP (Clayton Cramer) writes:
>Small           all code in one 64K segment
>                all data in one 64K segment (including stack)
>                
>Medium          code can occupy several segments, none more than 64K long
>                all data in one 64K segment (including stack)
>
>Large           code can occupy several segments, none more than 64K long
>                data can occupy several segments, none more than 64K long
		 Also, address calculations are done on the offset portion
		 of the address only, thus limiting arrays and dynamically
		 allocated memory to chunks smaller than 64k. Pointers
		 cannot be subtracted if their segment values are different.
>                
>There are reputed to be other models supported by other compiler writers,
>with names like Tiny and Huge, but in what way they differ from the
>"official" models, I'm not sure.  (Maybe there's room for more models, with
>names like Federal.)

Tiny		Code plus statically allocated data fits in 64k. This is
		so a .COM file can be created.

Huge		Address calculations are done on both the segment and
		offset. Statically allocated arrays still must be less
		than 64k.

Large Data	Data can be > 64k, but code must be < 64k.

Under any model, the stack size is limited to 64k.
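A sketch of the extra work huge model implies (variable names invented):
after every pointer adjustment the address gets re-folded so the offset
never wraps at 64k, which is part of why huge model code is slow.

	/* add n bytes to a huge pointer held as seg:off, keeping it
	   normalized so the offset stays small instead of wrapping */
	void
	huge_add(segp, offp, n)
	unsigned *segp, *offp;
	long n;
	{
		unsigned long linear;

		linear = ((unsigned long)*segp << 4) + *offp + n;
		*segp  = (unsigned)(linear >> 4);	/* new segment          */
		*offp  = (unsigned)(linear & 0xF);	/* offset kept in 0..15 */
	}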

peter@kitty.UUCP (Peter DaSilva) (08/06/85)

> In article <411@kontron.UUCP> cramer@kontron.UUCP (Clayton Cramer) writes:
> >Small           all code in one 64K segment
> >                all data in one 64K segment (including stack)
		(in Lattice & Microsoft C: all code & data in 1 64K seg)
> >                
> >Medium          code can occupy several segments, none more than 64K long
> >                all data in one 64K segment (including stack)
		(in Lattice & Microsoft: code in 64K, data in large
		 address space)
> >
> >Large           code can occupy several segments, none more than 64K long
> >                data can occupy several segments, none more than 64K long
> 		 Also, address calculations are done on the offset portion
> 		 of the address only, thus limiting arrays and dynamically
> 		 allocated memory to chunks smaller than 64k. Pointers
> 		 cannot be subtracted if their segment values are different.
		(in Lattice & Microsoft C: address calculations are done on
		 the entire pointer. You can disable this with the -s flag for
		 speed)
> >                
> >There are reputed to be other models supported by other compiler writers,
> >with names like Tiny and Huge, but in what way they differ from the
> >"official" models, I'm not sure.  (Maybe there's room for more models, with
> >names like Federal.)
> 
> Tiny		Code plus statically allocated data fits in 64k. This is
> 		so a .COM file can be created.
		(Lattice small model)
> 
> Huge		Address calculations are done on both the segment and
> 		offset. Statically allocated arrays still must be less
> 		than 64k.
		(Lattice large model, but static arrays can be larger than 64K)
> 
> Large Data	Data can be > 64k, but code must be < 64k.

( Large Prog	Prog can be >64K, but data must be <64K)
> 
> Under any model, the stack size is limited to 64k.

Translation: nobody has a coherent description for memory models yet.

johnl@ima.UUCP (08/07/85)

There shouldn't be much mystery about the 8086 and 8088 addressing models,
so here they are (at least as my friends and I understand them.)

Tiny:  Code and data share one 64K segment.  Can be made into .COM file.

Small:  Code and data each in a 64K segment.  Can sometimes be made into
	a .COM file.

Medium:  Multiple code segments, one 64K data segment.

Compact:  One code segment, multiple data segments, each no greater than 64K.

Large:  Multiple code and data segments, each no greater than 64K.

Huge:  Multiple code segments, simulate linear data addressing so it looks
	like one 1MB data segment.
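(An aside.)  All of these limits fall out of how the 8086 forms a physical
address: the 16-bit segment is shifted left four bits and the 16-bit offset
is added, so one segment spans 64K and the whole machine spans about 1MB.
In C terms:

	unsigned long
	physaddr(segment, offset)
	unsigned segment, offset;
	{
		/* offset alone reaches 64K; segment << 4 reaches 0xFFFF0,
		   so the total space is roughly 1 MB, which is the best
		   huge model's "one 1MB data segment" can ever do        */
		return ((unsigned long)segment << 4) + (unsigned long)offset;
	}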

With most compilers, generated code will be fastest for tiny model, and
slower for each other model in order.  The penalty for multiple code segments
is much less than that for multiple data segments, and the penalty for huge
model code is, well, huge.

Also, CP/M-86 only loads .COM files, so medium and above models do not work.
MS-DOS can load any of them, as can Xenix.  PC/IX is medium model only.
I don't know about the various Intel operating systems.

John Levine, ima!johnl

wjafyfe@watmath.UUCP (Andy Fyfe) (08/08/85)

> >Small           all code in one 64K segment
> >                all data in one 64K segment (including stack)
>		(in Lattice & Microsoft C: all code & data in 1 64K seg)

As far as the compiler is concerned, these need not be different.  It
simply keeps the data and code separate and uses 64k addressing for
everything.  It is then up to the loader to decide whether or not to
use a single 64k segment.  In the case of Microsoft Xenix, by default
the loader splits code and data only in the middle and large models,
but it will also split them in the small model if you give it the
right magic option (either to cc, or to ld directly).

--Andy Fyfe		...!{decvax, allegra, ihnp4, et. al}!watmath!wjafyfe
			wjafyfe@waterloo.csnet

henry@utzoo.UUCP (Henry Spencer) (08/11/85)

> Translation: nobody has a coherent description for memory models yet.

Further translation:  if they tell you that the machine you're about to
buy has several different memory models, buy something else.	
-- 
				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,linus,decvax}!utzoo!henry

peter@baylor.UUCP (Peter da Silva) (08/16/85)

> > Translation: nobody has a coherent description for memory models yet.
> 
> Further translation:  if they tell you that the machine you're about to
> buy has several different memory models, buy something else.	

I wish I could, but it ain't my machine, it's the company's. What's the
best way to deal with an IBM-PC-clone? (rhetorical question)
-- 
	Peter da Silva (the mad Australian werewolf)
		UUCP: ...!shell!neuro1!{hyd-ptd,baylor,datafac}!peter
		MCI: PDASILVA; CIS: 70216,1076