[comp.sys.ibm.pc] Large linear memory space for 80x86 machines ?

dan@rna.UUCP (Dan Ts'o) (10/19/87)

	Sorry if this has been covered (it surely has) but...

	What is the current state of large data array usage on 80x86 machines ?
I want to write a (semi-real-time) C program that uses single data arrays that
are 2-8Mb each. What about...

	- Huge model on MSC 4.0. Will this do the trick ? Sounds like it
will, but the speed penalty may be huge (heehee); one program I moved from
small to large model was slowed by 50%. And it may still be limited by the
640K ceiling of MSDOS. (See the huge-model sketch below.)

	- Running Xenix in protected mode, possibly on a '386. Do any
current or planned implementations of Xenix support such large linear address
spaces ? How about virtual memory support ?

	- 68000 co-processor or other on the PC bus. I suppose this is an
option to gain large linear data spaces on the PC chassis.

	Any other ideas ? Thanks.
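
Roughly, huge-model access would look like the sketch below (a hypothetical
example; the huge keyword and >64K arrays are Microsoft C features whose
exact limits vary by release). Huge pointers have to be renormalized on
arithmetic, which is where much of the large- and huge-model slowdown comes
from, and the 640K DOS ceiling still rules out single 2-8Mb arrays:

/* Hypothetical sketch: a 128K array under Microsoft C's huge memory model
 * (arrays may then exceed 64K; check your release for the exact limits).
 */
char big[0x20000L];		/* 128K, i.e. more than one 64K segment */

main()
{
	long i;

	for (i = 0; i < 0x20000L; i++)
		big[i] = (char) (i & 0xff);	/* each access renormalizes seg:offset */
}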

				Cheers,
				Dan Ts'o
				Dept. Neurobiology	212-570-7671
				Rockefeller Univ.	...cmcl2!rna!dan
				1230 York Ave.		rna!dan@nyu.arpa
				NY, NY 10021

dyer@spdcc.COM (Steve Dyer) (10/20/87)

In article <668@rna.UUCP>, dan@rna.UUCP (Dan Ts'o) writes:
> 	What is the current state of large data array usage on 80x86 machines ?
> I want to write a (semi-real-time) C program that uses single data arrays that
> are 2-8Mb each. What about...
> 	- Running Xenix in protected mode, possibly on a '386. Do any
> current or planned implementations of Xenix support such large linear address
> spaces ? How about virtual memory support ?

XENIX 386 is a true virtual memory OS and it supports the 32-bit linear
addressing of the 386 chip; the C compiler generates "small model" 32-bit code
without any overt manipulation of segment registers.  I don't remember what
the absolute maximum user process size is right now (stemming from their
virtual memory design), but the more immediate limits are a function of
the amount of physical memory you have and the size of the swap space you
have allocated.  I believe that the heuristic is that a process can occupy
all of physical memory (less the kernel) plus some percentage of the total
swap space, where both of these are parameters that can be tweaked to
customize an installation to its workload characteristics.
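
As a rough illustration (a sketch only, assuming nothing beyond the standard
malloc()), an 8Mb array under the flat 32-bit small model needs no far, huge,
or segment tricks at all:

#include <stdio.h>

extern char *malloc();

main()
{
	long	nbytes = 8L * 1024L * 1024L;	/* 8Mb in one flat allocation */
	char	*p;
	long	i;

	if ((p = malloc((unsigned) nbytes)) == 0) {
		printf("malloc failed\n");
		exit(1);
	}
	for (i = 0; i < nbytes; i++)		/* touch every byte */
		p[i] = (char) (i & 0xff);
	printf("touched %ld bytes\n", nbytes);
	exit(0);
}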

On my machine, an AT with an Intel Inboard 386, the system boots with the
message "maximum user process size = 7771K"; that's with 5888K of physical
memory (of which 4844K is available for user processes) and a 10Mb swap
area.  I haven't made any attempt to optimize process size or reduce my
use of swap, although it certainly could be done with the instructions from
SCO.  I suspect you'll have no problem with your application under XENIX 386.
-- 
Steve Dyer
dyer@harvard.harvard.edu
dyer@spdcc.COM aka {ihnp4,harvard,linus,ima,bbn,m2c}!spdcc!dyer

davidsen@steinmetz.steinmetz.UUCP (William E. Davidsen Jr) (10/20/87)

In article <668@rna.UUCP> dan@rna.UUCP (Dan Ts'o) writes:
|	What is the current state of large data array usage on 80x86 machines ?
|I want to write a (semi-real-time) C program that uses single data arrays that
|are 2-8Mb each. What about...
	<< MSC huge model >>
|
|	- Running Xenix in protected mode, possibly on a '386. Do any
|current or planned implementations of Xenix support such large linear address
|spaces ? How about virtual memory support ?

Xenix/386 currently has this. The current version only supports the
small model, but the 386 small model uses 2GB segments, which is close to
a linear address space. Virtual memory is standard. Bell Technologies
also sells the AT&T-certified port (for which they wrote many device
drivers) for $400 including a C compiler. I haven't gotten mine yet, so I
have no idea how good it is. I would use Xenix if you can; the compiler
is *much* better than PCC.
-- 
	bill davidsen		(wedu@ge-crd.arpa)
  {uunet | philabs | seismo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me

clif@intelca.UUCP (Clif Purkiser) (10/21/87)

In article <668@rna.UUCP>, dan@rna.UUCP (Dan Ts'o) writes:
> 
> 	Sorry if this has been covered (it surely has) but...
> 
> 	What is the current state of large data array usage on 80x86 machines ?
> I want to write a (semi-real-time) C program that uses single data arrays that
> are 2-8Mb each. What about...
> 
> 	- Huge model on the MSC 4.0. Will this do the trick ? Sounds like it
> will but the speed penalty may be huge (heehee). One program I moved from
> small to large was slowed by 50%. But it may still be limited by the 640K
> of MSDOS.
> 
> 	- Running Xenix in protected mode, possibly on a '386. Do any
> current or planned implementations of Xenix support such large linear address
> spaces ? How about virtual memory support ?
> 
> 	Any other ideas ? Thanks.

I think the best solution is to buy an 80386 PC, or an Intel 386 Inboard
and a PC AT.

Then purchase Pharlap's DOS|Extender product, which allows programs to take
advantage of the 80386's large address space.

For a C compiler, buy MetaWare's High C-386, which supports the 80386's
"native mode" (i.e. 4-gigabyte segments).

Phone numbers
Pharlap Software (617) 661 1510
MetaWare 	(408) 429-6382


Another possibility is to buy Unix or Xenix 386; both support large
linear address spaces.


-- 
Clif Purkiser, Intel, Santa Clara, Ca.
{pur-ee,hplabs,amd,scgvaxd,dual,idi,omsvax}!intelca!clif

These views are my own property.  However anyone who wants them can have 
them for a nominal fee.
	

domo@riddle.UUCP (Dominic Dunlop) (10/26/87)

In article <668@rna.UUCP> dan@rna.UUCP (Dan Ts'o) writes:
>I want to write a (semi-real-time) C program that uses single data arrays that
>are 2-8Mb each. What about...
>
>	- Running Xenix in protected mode, possibly on a '386. Do any
>current or planned implementations of Xenix support such large linear address
>spaces ? How about virtual memory support ?

Having Compaq 386 machines running both Xenix/386 (release 2.2.1) and
386/ix (release 1.0.3) handy, I threw the following trivial program at
them:

char	blunge[0x400000];	/* 4 megabytes */

main()
{
	register char *bp;

	for (bp = blunge;
	     bp < blunge + sizeof blunge / sizeof *blunge;
	     bp++)
		*bp = (unsigned) bp & 0xff;
}

Both ran it.  When I upped the array size to 8 megabytes, 386/ix still ran
it, but XENIX complained (``cannot run...''); I suspect the process was too
large.  As the XENIX system had two megs of RAM, against six for 386/ix, I
don't think this is unreasonable.  In any event, maximum user process size
is sysgenable on both systems, and both will refuse to run programs if
there's a chance they may run out of swap.  The only difference is that
XENIX is very liable to panic if it runs out of swap, whereas 386/ix (and
V.3 in general) just whistles nonchalantly, in the hope that the problem
will resolve itself (which it generally does).

Getting bolder, I made blunge into a local variable inside main().  This
makes 386/ix dump core, and crashes XENIX (which, not knowing the root
password, I can't reboot successfully -- boy, am I going to be popular
tomorrow...).  For good measure, I tried it on a 3B2/400 (UNIX V,
release 3.0.2, four megs RAM).  It would run with either a 4 meg external
array or a 4 meg stack array, refused to run with an 8 meg external array,
and dumped core with an 8 meg stack array.  To summarise this lot:

				4 meg	4 meg	8 meg	8 meg
				extern	auto	extern	auto
				------  -----   ------  -----
3B2/400, 4 meg, UNIX V.3	runs	runs	no run	core

Compaq 386, 6 meg, 386/ix	runs	core	runs	core

Compaq 386, 2 meg, XENIX/386	runs	panic	no run	?
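
For the record, the auto variant was just the same loop with blunge moved
inside main(), i.e. roughly:

main()
{
	char	blunge[0x400000];	/* 4 megabytes, now on the stack */
	register char *bp;

	for (bp = blunge;
	     bp < blunge + sizeof blunge / sizeof *blunge;
	     bp++)
		*bp = (unsigned) bp & 0xff;
}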

In short, the 386 UNIX implementations will do what you want, but you'd
better make your arrays external, not local (auto).  I strongly suspect
that the core dumps with auto variables happen because the stack frame for
such a large array reaches so far beyond the last valid page allocated to
the stack that the kernel decides something screwy is going on when the
frame is first referenced.  I'm probably going to be accused of being mean
and nasty for trying anything so unreasonable in the first place.

Dominic Dunlop
domo@riddle.uucp   domo@sphinx.co.uk