[comp.lang.c] Turbo C large character array

gordon@osiris.cso.uiuc.edu (John Gordon) (07/28/90)

	Hello.  I am having a problem, and I hope someone can help me.

	I am trying to declare a char array that is rather large, and TCC
won't accept it, it says it is too large to fit within available memory.
Bull!  I know for a fact that there is at least 300 or 400K left.  Anyway,
here is the declaration:

		char menu[1200][80];

	By my calculations, this should be equal to 96K worth.  I am using
the Huge model, which specifically says that arrays of more than 64K may be
used.

	Any help will be most appreciated.


---
John Gordon
Internet: gordon@osiris.cso.uiuc.edu        #include <disclaimer.h>
          gordon@cerl.cecer.army.mil       #include <clever_saying.h>
GEnie:    j.gordon14                  

price@glacier.unl.edu (Chad Price) (07/30/90)

In <1990Jul27.193520.4689@ux1.cso.uiuc.edu> gordon@osiris.cso.uiuc.edu (John Gordon) writes:


>	Hello.  I am having a problem, and I hope someone can help me.

>	I am trying to declare a char array that is rather large, and TCC
>won't accept it, it says it is too large to fit within available memory.
>Bull!  I know for a fact that there is at least 300 or 400K left.  Anyway,
>here is the declaration:

>		char menu[1200][80];

>	By my calculations, this should be equal to 96K worth.  I am using
>the Huge model, which specifically says that arrays of more than 64K may be
>used.

>	Any help will be most appreciated.


>---
>John Gordon
>Internet: gordon@osiris.cso.uiuc.edu        #include <disclaimer.h>
>          gordon@cerl.cecer.army.mil       #include <clever_saying.h>
>GEnie:    j.gordon14                  

I'm afraid you have to use malloc (calloc), rather than simply declare
the large array. Then it works fine.

Chad Price
price@fergvax.unl.edu

doug@ozdaltx.UUCP (Doug Matlock) (07/30/90)

In article <1990Jul27.193520.4689@ux1.cso.uiuc.edu>, gordon@osiris.cso.uiuc.edu (John Gordon) writes:
> 
> 	Hello.  I am having a problem, and I hope someone can help me.
> 
> 	I am trying to declare a char array that is rather large, and TCC
> won't accept it, it says it is too large to fit within available memory.
> 	By my calculations, this should be equal to 96K worth.  I am using
> the Huge model, which specifically says that arrays of more than 64K may be
> used.

Not quite.  The huge model "permits static data to total more than 64K. it must
still be less than 64K in *each* module" (p. 199 in the "new" TC++ manuals).

If all you need is a very large array, and you always plan to access its
elements through a pair of indices, I suggest you use the tack taken in
"Numerical Recipes in C".  A character "array" is defined as
char **x;

and allocated on the heap as

x = (char **)malloc(num_rows*sizeof(char));
for (i=0; i<num_rows; i++) x[i] = (char *)malloc(num_cols_in_row[i]*sizeof(char));

I have used this to great effect in many situations where I needed large
arrays.  It is also very effective when each row has a different number
of elements (i.e. I don't really have a true array, but double-key access
to some collection of elements).


-- 
Doug.

"If you want Peace, work for Justice."

bright@Data-IO.COM (Walter Bright) (07/31/90)

In article <1990Jul27.193520.4689@ux1.cso.uiuc.edu> gordon@osiris.cso.uiuc.edu (John Gordon) writes:
<	I am trying to declare a char array that is rather large, and TCC
<won't accept it, it says it is too large to fit within available memory.
<		char menu[1200][80];
<	By my calculations, this should be equal to 96K worth.  I am using
<the Huge model, which specifically says that arrays of more than 64K may be
<used.

The huge model is not necessary. Try rewriting it as:
	char *menu[1200];
Initialize as:
	for (i = 0; i < 1200; i++)
		menu[i] = (char *) malloc(80);
Use as:
	menu[i][j]

Look, ma, no huge pointers! I also think you'll find that this executes
faster than huge model, because no pointer normalization is necessary.

For space efficiency, you might also consider reversing the array indices.

gordon@osiris.cso.uiuc.edu (John Gordon) (07/31/90)

	Well, I managed to solve my problem, with thanks to all who posted and 
e-mailed me suggestions.  The final solution:

	char huge menu[1200];

	for(i = 0; i < 1200; i++)
		menu[i] = farmalloc(80);

Again, thanks to all who responded.


---
John Gordon
Internet: gordon@osiris.cso.uiuc.edu        #include <disclaimer.h>
          gordon@cerl.cecer.army.mil       #include <clever_saying.h>
GEnie:    j.gordon14                  

mayne@VSSERV.SCRI.FSU.EDU (William (Bill) Mayne) (07/31/90)

In article <1990Jul30.204053.28769@ux1.cso.uiuc.edu> gordon@osiris.cso.uiuc.edu (John Gordon) writes:
>
>	Well, I managed to solve my problem, with thanks to all who posted and 
>e-mailed me suggestions.  The final solution:
>
>	char huge menu[1200];
>
>	for(i = 0; i < 1200; i++)
>		menu[i] = farmalloc(80);
>

Don't be so hasty about the "final solution." 
There must  be a better way than using 1200 separate calls to
malloc or farmalloc! In addition to the time (which admittedly
may not be too much of a concern since you only do this once)
you should be aware that in most implementations each malloc
incurs memory overhead in addition to the storage requested.
The system must keep track of all those separate allocations
so that when you free them later it knows the associated
length. The minimum overhead on a PC (which I assume you 
are using from your description) is usually 16 bytes. You'd do 
better to allocate a big block and set your own pointers to the
individual elements. Something like this:

char *hugeblock, *block[1200];
hugeblock=malloc(1200*80);
for (i=0; i<1200; ++i)
  block[i]=hugeblock+80*i;
/* rest of your code goes here */
free(hugeblock);

This assumes the huge memory model, but with some slight variation
you could get by with large, allocating the amount you need in
pieces <64K but setting the pointers in block as one array.
Also, this is not quite as efficient as it could be because
I wanted it to be as clear as possible without long explanations.
But I think you can get the idea from this.

manning@gap.caltech.edu (Evan Marshall Manning) (07/31/90)

doug@ozdaltx.UUCP (Doug Matlock) writes:


>If all you need is a very large array, and you always plan to access its
>elements through a pair of indicies, I suggest you use the tack taken in
>"Numerical Recipes in C".  A character "array" is defined as
>char **x;

>and allocated on the heap as

>x = (char **)malloc(num_rows*sizeof(char));
			      ^^^^^^^^^^^^ try sizeof(char *)!!!!
>for (i=0; i<num_rows; i++)
>	x[i] = (char *)malloc(num_cols_in_row[i]*sizeof(char));

A char is rarely big enough to hold a char *.

-- Evan

***************************************************************************
Your eyes are weary from staring at the CRT for so | Evan M. Manning
long.  You feel sleepy.  Notice how restful it is  |      is
to watch the cursor blink.  Close your eyes.  The  |manning@gap.cco.caltech.edu
opinions stated above are yours.  You cannot       | manning@mars.jpl.nasa.gov
imagine why you ever felt otherwise.               | gleeper@tybalt.caltech.edu

jindak@surfside.sgi.com (Chris Schoeneman) (07/31/90)

In article <1990Jul30.204053.28769@ux1.cso.uiuc.edu>
 gordon@osiris.cso.uiuc.edu (John Gordon) writes:
>
>	Well, I managed to solve my problem, with thanks to all who posted and 
>e-mailed me suggestions.  The final solution:
>
>	char huge menu[1200];
>
>	for(i = 0; i < 1200; i++)
>		menu[i] = farmalloc(80);

Almost.  But

        char huge *menu[1200];

might work a little better.  Why not just malloc the whole thing?
'Huge' arrays can be larger than 64K; they just can't be static.

        char huge *menu;

        menu = (char huge *) farmalloc(1200L * 80);

should do what you want.

	       Chris Schoeneman | I was neat, clean, shaved and sober,
    jindak@surfside.esd.sgi.com | and I didn't care who knew it.
	 Silicon Graphics, Inc. |		-Raymond Chandler
	      Mountain View, CA |		 (The Big Sleep)

gordon@osiris.cso.uiuc.edu (John Gordon) (07/31/90)

jindak@surfside.sgi.com (Chris Schoeneman) writes:

>In article <1990Jul30.204053.28769@ux1.cso.uiuc.edu>
> gordon@osiris.cso.uiuc.edu (John Gordon) writes:
>>
>>	Well, I managed to solve my problem, with thanks to all who posted and 
>>e-mailed me suggestions.  The final solution:
>>
>>	char huge menu[1200];
>>
>>	for(i = 0; i < 1200; i++)
>>		menu[i] = farmalloc(80);

>Almost.  But

>        char huge *menu[1200];

	Oops.  Typo.  I did use *menu[1200].


---
John Gordon
Internet: gordon@osiris.cso.uiuc.edu        #include <disclaimer.h>
          gordon@cerl.cecer.army.mil       #include <clever_saying.h>
GEnie:    j.gordon14                  

donp@na.excelan.com (don provan) (08/01/90)

In article <manning.649438172@gap> manning@gap.caltech.edu (Evan Marshall Manning) writes:
>doug@ozdaltx.UUCP (Doug Matlock) writes:
>>x = (char **)malloc(num_rows*sizeof(char));
>			      ^^^^^^^^^^^^ try sizeof(char *)!!!!

Actually, i prefer

	char **x;
	x = (char **)malloc( num_rows * sizeof *x );

Since the type of x has already been defined, why duplicate that
information in the malloc?  In general, i avoid sizeof(type), since
there's almost always a specific example of type at hand, and normally
it is in fact that specific example i want to know the size of, not
the generic type.  (And, of course, i'd *never* use a name like "x"...)

						don provan
						donp@novell.com

gdtltr@freezer.it.udel.edu (Gary Duzan) (08/01/90)

In article <1627@excelan.COM> donp@novell.com (don provan) writes:
=>In article <manning.649438172@gap> manning@gap.caltech.edu (Evan Marshall Manning) writes:
=>>doug@ozdaltx.UUCP (Doug Matlock) writes:
=>>>x = (char **)malloc(num_rows*sizeof(char));
=>>			      ^^^^^^^^^^^^ try sizeof(char *)!!!!
=>
=>Actually, i prefer
=>
=>	char **x;
=>	x = (char **)malloc( num_rows * sizeof *x );
=>
=>Since the type of x has already been defined, why duplicate that
=>information in the malloc?  In general, i avoid sizeof(type), since
=>there's almost always a specific example of type at hand, and normally
=>it is in fact that specific example i want to know the size of, not
=>the generic type.  (And, of course, i'd *never* use a name like "x"...)
=>
   I think sizeof(type) is probably more popular (with me, at least) because
it keeps the function call/macro "look and feel". It seems strange to have an
identifier-looking operator when you are used to having funky symbol
combination operators. For similar reasons I will tend to use "return(foo);"
rather than "return foo;".
   Followups to alt.religion.computers.

                                        Gary Duzan
                                        Time  Lord
                                    Third Regeneration


p.s. But then again, I indent with tabs, so what could I know? :-)

p.p.s. I won't use "x" either, but I'm not beyond using "tmp" (though usually
as "tmpint", "tmppint", "tmpfoobar", etc.)

                                         GD,TL,TR



--
                          gdtltr@freezer.it.udel.edu
   _o_                    --------------------------                      _o_
 [|o o|] If you can square, round, or cube a number, why not sphere it? [|o o|]
  |_O_|         "Don't listen to me; I never do." -- Doctor Who          |_O_|

bright@Data-IO.COM (Walter Bright) (08/02/90)

In article <332@sun13.scri.fsu.edu> mayne@VSSERV.SCRI.FSU.EDU (William (Bill) Mayne) writes:
<In article <1990Jul30.204053.28769@ux1.cso.uiuc.edu> gordon@osiris.cso.uiuc.edu (John Gordon) writes:
<<The final solution:
<<	char huge menu[1200];
<<	for(i = 0; i < 1200; i++)
<<		menu[i] = farmalloc(80);

Neither the huge nor the farmalloc are necessary. Try this:
	char *menu[1200];
	for(i = 0; i < 1200; i++)
		menu[i] = (char *) malloc(80);

<There must  be a better way than using 1200 separate calls to
<malloc or farmalloc! In addition to the time
<you should be aware that in most implementations each malloc
<incurs memory overhead in addition to the storage requested.
<The minimum overhead on a PC is usually 16 bytes.

I don't know about other compilers, but for Zortech the overhead for malloc
is 2 bytes. For farmalloc, the overhead is whatever the DOS overhead is
for allocating segments.

<You'd do better to allocate a big block and set your own pointers to the
<individual elements. Something like this:
<
<char *hugeblock, *block[1200];
<hugeblock=malloc(1200*80);
<for (i=0; i<1200; ++i)
<  block[i]=hugeblock+80*i;
</* rest of your code goes here */
<free(hugeblock);

Note that on a PC with 16 bit ints, 1200*80 == 30464 (!). This program
is going to crash horribly. You need to malloc a series of chunks, the
size of each is some multiple of 80 and less than 64k.

[Inadvertent overflowing of 16 bit ints in intermediate values is a
very common source of bugs in PC programs. I see a lot of it, and make
the same mistakes myself.]

steve@taumet.com (Stephen Clamage) (08/02/90)

donp@na.excelan.com (don provan) writes:

>In general, i avoid sizeof(type), since
>there's almost always a specific example of type at hand, and normally
>it is in fact that specific example i want to know the size of, not
>the generic type.

In general, I agree, but there is one pitfall with this:

typedef int datatype[20];

int foo(datatype d)
{
    ... sizeof(d) ...
}

The type of d is silently coerced to int*, and sizeof(d) becomes
sizeof(int*), which is not what you want.
-- 

Steve Clamage, TauMetric Corp, steve@taumet.com

alf@xenon.stgt.sub.org (Ingo Feulner) (08/03/90)

In article <price.649352434@glacier>, Chad Price writes:
>In <1990Jul27.193520.4689@ux1.cso.uiuc.edu> gordon@osiris.cso.uiuc.edu (John Gordon) writes:
>>		char menu[1200][80];
>I'm afraid you have to use malloc (calloc), rather than simply declare
>the large array. Then it works fine.
>
>Chad Price
>price@fergvax.unl.edu

But doesn't the ANSI standard say that malloc() mustn't allocate more than
64K at a time?  (So says my Lattice C manual.)

-Ingo.

--
                     Ingo Feulner - alf@xenon.stgt.sub.org
     Wolfacher Weg 22 - 7030 Boeblingen - (+49) 7031 272691 - West Germany
                    Love your enemies. It'll make 'em crazy.
                          AMIGA - the only way to go!

farrell@onedge.enet.dec.com (Bernard Farrell) (08/06/90)

In article <17ac63d1.ARN02634@xenon.stgt.sub.org>, alf@xenon.stgt.sub.org (Ingo Feulner) writes...
> 
>But doesn't the ANSI standard say that malloc() mustn't allocate more than
>64K at a time?  (So says my Lattice C manual.)

'Fraid not.  While 64K might be a reasonable restriction on older PC systems,
it would kill me on VAXen, etc.  Section 4.10.3 of The Standard has no mention
of Implementation Limits.


Bernard Farrell                | 
  farrell@onedge.enet.dec.com  | Strange but true:
 often on the move so try      |   These thoughts are my own invention,
 home: (617) 332-6203          |   I wish I could blame someone else for
                               |   them !!

manning@coil.caltech.edu (Evan Marshall Manning) (08/06/90)

farrell@onedge.enet.dec.com (Bernard Farrell) writes:

>In article <17ac63d1.ARN02634@xenon.stgt.sub.org>, alf@xenon.stgt.sub.org (Ingo Feulner) writes...
>> 
>>But doesn't the ANSI standard say that malloc() mustn't allocate more than
>>64K at a time?  (So says my Lattice C manual.)

>'Fraid not.  While 64K might be a reasonable restriction on older PC systems,
>it would kill me on VAXen, etc.  Section 4.10.3 of The Standard has no mention
>of Implementation Limits.

I believe malloc() takes size_t = unsigned int.  On systems where int is
16 bits, 64K is indeed the limit.  On some DOS systems I believe you can
set int size to 32 bits with a compiler switch, but I'm not sure if malloc()
is smart enough to take advantage.

-- Evan

***************************************************************************
Your eyes are weary from staring at the CRT for so | Evan M. Manning
long.  You feel sleepy.  Notice how restful it is  |      is
to watch the cursor blink.  Close your eyes.  The  |manning@gap.cco.caltech.edu
opinions stated above are yours.  You cannot       | manning@mars.jpl.nasa.gov
imagine why you ever felt otherwise.               | gleeper@tybalt.caltech.edu

funkstr@ucscb.UCSC.EDU (Larry Hastings) (08/07/90)

+-In article <17ac63d1.ARN02634@xenon.stgt.sub.org>, alf@xenon.stgt.sub.org (Ingo Feulner) wrote:-
+----------
|
| But doesn't the ANSI standard say that malloc() mustn't allocate more than
| 64K at a time?  (So says my Lattice C manual.)
|
+----------

No, not really.  It's a limitation of the Intel processor's segmented
architecture that you (and I) spend our days cursing -- the 64k segment size
makes it stupefyingly difficult to access more than 64k at one time.  (Some
DOS compilers support "halloc", or "huge alloc" calls, which break this 64k
barrier with a huge code overhead.)  More powerful systems with flat address
spaces allow allocations of megabytes at a time, and I would theorize that
there are mainframe programs which allocate _gigabytes_ at times.

If you have any more questions about the ANSI standard, you should buy a
reference on ANSI C.  The draft itself is available, as are many good (and
cheaper) reference books.  I don't mean to be offensive, but the _worldwide_
Usenet is really not the place to ask questions so easily answered by a good
reference book.

--
larry hastings, the galactic funkster, funkstr@ucscb.ucsc.edu

I don't speak for Knowledge Dynamics or UC Santa Cruz, nor do they speak for me
"Cocaine is God's way of telling you you're making too damn much money"
		--Robin Williams

diamond@tkou02.enet.dec.com (diamond@tkovoa) (08/07/90)

In article <manning.649959245@coil> manning@coil.caltech.edu (Evan Marshall Manning) writes:

>I believe malloc() takes size_t = unsigned int.  On systems where int is
>16 bits 64K is indeed the limit.

size_t = unsigned some_integral_type.  On systems where int is 16 bits, size_t
might be 128 bits although the limit might be 2**64 instead of 2**128.  On
systems where int is 64 bits, size_t might be 32 bits.  If you want to know
the limit on your machine, you have to RTFM.  Calculations will not tell you.
-- 
Norman Diamond, Nihon DEC     diamond@tkou02.enet.dec.com
This is me speaking.  If you want to hear the company speak, you need DECtalk.

will.summers@p6.f18.n114.z1.fidonet.org (will summers) (08/07/90)

In article <1990Jul27.193520.4689@ux1.cso.uiuc.edu> gordon@osiris.cso.uiuc.edu
(John Gordon) writes:

 > I am trying to declare a char array that is rather large, and TCC
 > won't accept it, it says it is too large to fit within
 > available memory.  Bull!  I know for a fact that there is at
 > least 300 or 400K left.  Anyway, here is the declaration:
 >
 >                 char menu[1200][80];

   WARNING: Kludges C programmers and compiler writers use
   to cope with the most prevalent hardware architecture on the
   face of the earth are about to be discussed.  Not a few
   consider it the most twisted architecture on the face of the
   earth, but they probably exaggerate a little.  Nevertheless,
   this topic has been known to make grown men cry.  Do not
   read on a full stomach.

The definition of the "huge" memory model varies.  For example, in
Microsoft C, the huge model allows objects over 64K in size.
It's been a long time since I MS C'ed, but if memory serves no
array element could be over 64K (I don't know if this outlaws
a[70000][4], but it does seem to preclude a[4][7000]) and
large arrays had to have elements whose size was a power of
2 -- i.e. no

    struct x_ { char array[13]; } a[10000];     /* WRONG in MSC */

Under Turbo C, the huge model means something quite different.  First,
under Turbo's large model you could have up to 1MB of data, but
only a _total_ of 64K of static data (the rest would have to be
malloc'ed or on the stack).  Under either large or huge models,
you have to be careful dealing with objects over 64K, either by
declaring pointers to such objects as "huge" or by being _very_
careful to ensure that segments do not "wrap" and that
normalization occurs before pointer comparisons.  (yucky in
the extreme).

The change to the huge model relaxes the restriction on a 64K
_total_ of static data, but still restricts the amount of static
data declared in any module (compile file) to 64K.  This
implies that any object of static data must be less than 64K.

Other compilers no doubt have other subtle differences in what is
and what ain't acceptable under their large and huge
models, and how they handle/not handle 'far' and 'huge' pointer
normalization and wrap.  (Does any '86 compiler handle
p +=70000;  in an "unsurprising" way when p is a 'far' pointer?)

There are a couple of ways to get around this restriction.
The first **which I've never used** is to use farmalloc() and
'huge' pointers.  'huge' pointers are kept "always normalized" and
let you manipulate objects over 64K without having to worry about
"segment wrap".  (At least that is what is advertised:
remember, I've never used the damn things -- the performance
hit is reported to be devastating.  For example ++p generates
a _subroutine_ call!) I don't recommend using them in code you
write, but they may save the day when you are under time pressure to
get code imported from a non-'86 source to run.

A second way is to use farmalloc and far pointers.  Like in the
large model _you_ are responsible for seeing that the pointers
don't "wrap" and that they are normalized before comparisons
are made ( under some compilers, 2 far pointers can point to
the same object but compare _unequal_; under others, a pointer to
a[5] could compare as _less_ _than_ a pointer to a[3]. Neat.).  Not
recommended for any but the most careful, experienced coders, and
then only to hand-optimize code developed and checked out
using 'huge' pointers.  I've never done this either, and hope I'm
never tempted to, because there is a third, much cleaner alternative:



I recommend taking a page from the past.  On early
machines there was no multiply instruction.  So multiplication
was done in subroutines.  This was very slow.  So compilers would
pre-calculate the offsets of the rows of an array at compile time
and store the results into an array.  The run-time would access
the array to compute multi-dimensional array offsets without
having to multiply.  These were called "dope vectors", possibly
because they made up for "dope" machines too dumb to be able to
multiply.  I suggest you "roll your own dope vectors" when
dealing with large arrays under the Intel architecture.

Something along the following lines:

#define NUMLINES 1200
#define LINESIZE  80

  /* declare menu as an array of pointers to char:  */

 char *menu[NUMLINES];    /* You don't need the 'far' keyword if you are
                             compiling in the medium, large or huge models.
                             For portability I suggest leaving it out */

/*  malloc the individual rows of the array   */

   int line;

    for (line=0; line<NUMLINES; line++)  {
        menu[line] = (char *) malloc (LINESIZE * sizeof(char));
        if (menu[line] == (char *) NULL)    {
            punt("Not enough memory for menu\n");
            crapout();
        }
    }

    You can refer to individual bytes as menu[line][column] just as if
    menu were a two-dimensional array, even though menu is a
    single-dimensional array of pointers to 1200 80-character arrays of
    char.

    The performance hit is that you do 2 memory de-references per
    access, one near and one far.  I'd venture to say that on
    some compilers this is _faster_ than "real" two dimensional
    array access in the large model as it avoids a multiply at
    the expense of a near memory access.  (Anyone care
    to post some benchmark results?)

Note: Under this method code is portable to environments
that never heard of memory segmentation without conditional
compilation and without changes to code or header files.  It is
also portable with far less chance of surprises to PC compilers
that have subtle, sometimes quite unexpected, differences in
the way they treat 'far' and 'huge' pointers, "large" arrays, and
large and huge memory models ( Admit it: would you think to
design your code to anticipate a "the size of array elements must
be a power of 2" restriction if no one told you of a compiler
that had such a restriction?)

One difference between this and a "real" two dimensional array is
you can't play pointer punning "games" with the object like
assigning a pointer to char to &menu[0][0] then trying to
increment it through all 96000 elements...  but then you'd not do
anything like that, now, would you?  (If you would, never admit
it!)

A final note: at the expense of portability, this is a
reasonable way to handle large amounts of data in a small or
compact model program.  You declare menu as an array of _far_
pointers to char.  You use farmalloc (and farfree) instead of
malloc, and you cast the result of malloc to _far_ pointer to
char.  The parts of the program that do not refer to the far data
will run a good deal faster.

                                  \/\/ill

PS.  Ain't the '86 architecture wunnerful.


 

--  
Uucp: ...{gatech,ames,rutgers}!ncar!asuvax!stjhmc!18.6!will.summers
Internet: will.summers@p6.f18.n114.z1.fidonet.org

colin@array.UUCP (Colin Plumb) (08/09/90)

In article <17ac63d1.ARN02634@xenon.stgt.sub.org> alf@xenon.stgt.sub.org (Ingo Feulner) writes:
>But doesn't the ANSI standard say that malloc() mustn't allocate more than
>64K at a time?  (So says my Lattice C manual.)

If the ANSI committee had done something a tenth as brain-damaged, they'd
be getting a large bundle of Semtex from me in the mail.  The ANSI standard
gives only MINIMUMS; it does not forbid an implementation that pads
all objects to 4 gigabyte boundaries.

malloc (4.10.3.3) takes an argument of type size_t.
size_t (4.1.5) is the unsigned integral type of the result of the sizeof
operator.
The sizeof() operator (3.3.3.4) may be applied to any type or expression,
and so size_t must be able to hold the size of the largest object declarable
in the implementation.
A hosted implementation (i.e. one that has malloc) must be able to
translate and execute a program that contains a 32767-byte object,
according to 2.2.4.1, so that number, at least, must be passable to
malloc.
An integral type (3.1.2.5) is one of:
char, signed char, unsigned char
short int, unsigned short int
int, unsigned int
long int, unsigned long int
an enumeration type.
All enumeration types have the same internal representation as some other
integral type, so we can basically ignore that.

Thus, size_t must be one of unsigned char, unsigned short, unsigned int,
or unsigned long.  These must be pure binary (have maxima of the form 2^n-1),
and n must be at least 8, 16, 16, and 32, respectively.  So it is possible
to have an implementation that uses 15-bit unsigned chars and the same
type for size_t, thus making it impossible for malloc to allocate more
memory than that.  In practice, size_t had better be at least as big as uint,
so malloc must take an argument up to 65535 (although it may return NULL).

So, you've got a minimum, but there's no maximum.  I can have a 64-bit
size_t if I like.
-- 
	-Colin