[comp.os.msdos.programmer] HELP!! Turbo-C++ FAR POINTERS

johnson@EA.USL.EDU (08/30/90)

I was trying to use a very large array (char  lines[1000][150];) using
Turbo-C++ on a 80-x86 based pc.  I realized I needed a particular type
of pointer to address this thing, so I used a FAR pointer.  However,
I did something wrong and real quickly compromised the integrity of the
FAT file on the hard-drive.  (not very fun, I couldn't even boot up)

Needless to say I don't want to do this again.  How can I declare and
address this array?

Any help would be GREATLY appreciated.

Lee Johnson       hlj@usl.edu

hp@vmars.tuwien.ac.at (Peter Holzer) (08/31/90)

johnson@EA.USL.EDU writes:

>I was trying to use a very large array (char  lines[1000][150];) using
>Turbo-C++ on a 80-x86 based pc.  I realized I needed a particular type
>of pointer to address this thing, so I used a FAR pointer.  However,
>I did something wrong and real quickly compromised the integrity of the
>FAT file on the hard-drive.  (not very fun, I couldn't even boot up)

>Needless to say I don't want to do this again.  How can I declare and
>address this array?

I don't know about TC++, but in TC 2.0 you cannot declare such an array;
you have to farmalloc it at run time. To address it you need a huge *,
not a far * (far pointer arithmetic wraps around after 64K).

If you want your program to be
a)	fast
	and
b)	portable to other compilers/systems,
leave the near/far/huge stuff out and use the following code fragment:

--------------------------------------------------------------------
#include <stdlib.h>	/* malloc */
#include <assert.h>

char	**lines;
int	i;

lines = malloc (1000 * sizeof (char *));
assert (lines != NULL);

for (i = 0; i < 1000; i++) {
	lines [i] = malloc (150);
	assert (lines [i] != NULL);
}
--------------------------------------------------------------------
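[For reference, the fragment above wrapped into a self-contained, standard-C
function that compiles anywhere; the helper name alloc_lines is illustrative,
not from the original post:]

```c
#include <stdlib.h>

/* Allocate an n-by-len character table as an array of row pointers.
 * No single allocation approaches 64K, so this works in any memory
 * model and on any standard C compiler.  Returns NULL on failure. */
char **alloc_lines(int n, int len)
{
    int i;
    char **lines = malloc(n * sizeof(char *));
    if (lines == NULL)
        return NULL;
    for (i = 0; i < n; i++) {
        lines[i] = malloc(len);
        if (lines[i] == NULL)
            return NULL;   /* leaks earlier rows; acceptable in a sketch */
    }
    return lines;
}
```

Rows then index exactly like the original two-dimensional array:
lines[i][j].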

(This should really go into a FAQ list, the question is asked every few weeks)
--
|    _	| Peter J. Holzer			| Think of it	|
| |_|_)	| Technische Universitaet Wien		| as evolution	|
| | |	| hp@vmars.tuwien.ac.at			| in action!	|
| __/  	| ...!uunet!mcsun!tuvie!vmars!hp	|     Tony Rand	|

bright@Data-IO.COM (Walter Bright) (09/01/90)

In article <0093BF75.AA852DC0@EA.USL.EDU> johnson@EA.USL.EDU writes:
>I was trying to use a very large array (char  lines[1000][150];) using
>Turbo-C++ on a 80-x86 based pc.
>Any help would be GREATLY appreciated.

Here's a reply I sent to someone with a similar problem:

In article <1990Aug24.181159.19680@cs.columbia.edu> kearns@cs.columbia.edu (Steve Kearns) writes:
<* the compiler (Zortech) does not support huge pointers, meaning that it is
<extremely cumbersome (and non-portable) to access data structures
<larger than 64K bytes.  For example, an array of structures, each
<structure 16 bytes long, would be limited to 4000 entries.  I guess I
<will try making a C++ class to simulate a huge pointer, since the
<application I am porting requires huge pointers.

Let's say you want an array of 16000 16 byte structs. Here's how to do
it portably without the overhead of huge pointers:

	struct S far *arrayp[4];
	#define array(i) arrayp[(unsigned)(i)/4000][(unsigned)(i)%4000]

	/* Initialize: */
	for (i = 0; i < sizeof(arrayp) / sizeof(arrayp[0]); i++)
		arrayp[i] = (struct S far *) malloc(sizeof(struct S) * 4000);

Adjusting the value 4000 so it is a power of 2 turns the *, / and %
into shifts and masks.
To use it, write array(i) instead of array[i]; it works as both an lvalue
and an rvalue. I would argue that it is both more space efficient and
much faster than using huge arithmetic. It also does not suffer from the
problems that you get with huge model if sizeof(struct S) does not divide
evenly into 64K.
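[For modern readers, here is the same trick in plain standard C with no
far keyword, using a power-of-two chunk size as suggested above; the names
struct S, CHUNK, NCHUNKS and init_array are illustrative, not from the
original post:]

```c
#include <stdlib.h>

struct S { int val; char pad[12]; };  /* a 16-byte-ish stand-in struct */

#define CHUNK   4096u  /* power of two: / and % compile to shift and mask */
#define NCHUNKS 4      /* 4 * 4096 = 16384 >= 16000 entries */

static struct S *arrayp[NCHUNKS];

/* Index entry i by picking a chunk, then an entry within the chunk. */
#define array(i) arrayp[(unsigned)(i) / CHUNK][(unsigned)(i) % CHUNK]

/* Allocate the chunk table; returns 1 on success, 0 on failure. */
static int init_array(void)
{
    unsigned i;
    for (i = 0; i < NCHUNKS; i++) {
        arrayp[i] = malloc(sizeof(struct S) * CHUNK);
        if (arrayp[i] == NULL)
            return 0;
    }
    return 1;
}
```

array(i) works as both an lvalue and an rvalue, just as described above,
because the macro expands to an ordinary array element.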

I've helped several people convert their code to Zortech from using huge
pointers, and this method, or minor variations of it, usually works fine.
I hope this helps...

dfoster@jarthur.Claremont.EDU (Derek R. Foster) (09/12/90)

In article <0093BF75.AA852DC0@EA.USL.EDU> johnson@EA.USL.EDU writes:
>I was trying to use a very large array (char  lines[1000][150];) using
>Turbo-C++ on a 80-x86 based pc.  I realized I needed a particular type
>of pointer to address this thing, so I used a FAR pointer.  However,
>I did something wrong and real quickly compromised the integrity of the
>FAT file on the hard-drive.  (not very fun, I couldn't even boot up)
>
>Needless to say I don't want to do this again.  How can I declare and
>address this array?

The fact that you declared your array as
'far' was probably the cause of your downfall. When something is added
to a far pointer, only the 16-bit offset changes; the result wraps around
and can never reach more than 64K past the segment base. This is due to
the segmented architecture of the CPU, and to the fact that a far pointer
is stored as two related fields (segment and offset), only one of which
(the offset) takes part in pointer arithmetic. The solution is to declare
your array as consisting of 'huge' characters, in particular,

char huge lines[1000][150];

Technically, this declares lines as an array of 'huge char', which
wasn't allowable syntax until the latest version of Turbo. It allows
declaration of arrays > 64K.

(Remember: It's not
the pointer that is being declared as huge. It's the characters themselves.
The pointer is automatically huge by virtue of the fact that it is a pointer
to huge characters. Old versions of Turbo didn't allow 'far/huge objects',
only 'far/huge pointers')

(Other note: Declaring a non-pointer as far or huge also
means that it is assigned its own data segment, which can be useful sometimes
too. for instance:

  struct xyz
  {
    char array[50000];
  };
  struct xyz far abc, far def, far ghi;

  lets you declare several structures which, when combined, exceed 64K,
  which is more than Turbo would normally allow you to put in one source
  file's data segment.
).

 Anyway, this is by far easier than dynamically
allocating your array as some other posters have suggested (although there
may be other benefits to dynamic allocation).

'huge' pointers are like far pointers except that they invoke special
subroutines for pointer math which ensure that the entire pointer (both
the segment and the offset fields) is updated in pointer arithmetic. This
is slower than ordinary pointer arithmetic, so use huge pointers sparingly.
The advantage is that huge pointers can access data objects larger than
64K.
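[To see roughly what those subroutines do, here is a portable simulation
of normalized segment:offset arithmetic; the HugePtr type and huge_add
helper are illustrative, not Turbo's actual runtime routines:]

```c
/* Real mode maps seg:off to the linear address seg*16 + off.  Far
 * arithmetic adds only to off, which wraps at 64K; huge arithmetic
 * recomputes both fields after every addition, as simulated here. */
typedef struct { unsigned long seg; unsigned long off; } HugePtr;

HugePtr huge_add(HugePtr p, long delta)
{
    unsigned long linear = p.seg * 16UL + p.off + (unsigned long)delta;
    HugePtr r;
    r.seg = linear >> 4;    /* segment counts 16-byte paragraphs */
    r.off = linear & 0xFUL; /* normalized offset is always < 16 */
    return r;
}
```

Because the segment field is updated too, adding 64K or more moves the
pointer forward instead of wrapping back onto itself.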

This feature is not documented well in the C++ manual, but it does exist
somewhere. If you poke around for a while, you can probably find it.

>
>Any help would be GREATLY appreciated.
>
>Lee Johnson       hlj@usl.edu

I hope this helps!

Derek R. Foster