[comp.windows.ms] No free MS support for SDK; How to use lots of mem in a Win program?

burgoyne@eng.umd.edu (John R. Burgoyne) (08/19/90)

PART 1
^^^^^^
Microsoft no longer provides free technical support for questions relating to
the SDK, other than questions about its installation. It used to provide such
support for previous versions of the SDK (earlier than 3.0). (I know; I have
gotten answers from them in the past.)

Technical support over toll-free 800 numbers is not what we're talking about.

Callers are instead referred to a Microsoft online subscription, at $495 a
year for "12 hours of connect time".

Quarterly reports from Microsoft indicate that it is profitable. (!!!!)
Again, Windows program development becomes more complex and available only to
the deep-pocketed. I realize that an infinite number of different questions
about the SDK could be asked, but no free support at all? Does anyone have
any suggestions about what the user group can do? Does anyone have any
experience with trying to get answers through Compu$erve?

PART 2
^^^^^^

The question I am trying to get answered is the following: how do we go about
using large amounts of memory in the programs we develop with SDK version
3.0? I am using MS C compiler 6.0, SDK version 3.0, and Windows version 3.0.
The MS Windows SDK book "Tools" says on page 1-3 that the compact and large
models are not recommended for Windows programs, because the data segments of
programs created with the compact or large model are fixed, and because only
one instance of such a program can be run. If anyone can answer any of the
following questions, I would very much appreciate it.

Can one use the medium model and have data larger than 64K if it is global?

Can one use the medium model and have global arrays larger than 64K?

Is it ever possible to have arrays larger than 64K?

What is the general strategy for using lots of memory in a Windows program?

Has anyone written a program which uses more than 640K? What are you doing
and how did you do it?

*-----------------------------------------------------------------------------*
| Robert Burgoyne                     CALCE Center for Electronics Packaging  |
| Industrial Liaison                  University of Maryland                  |
| burgoyne@eng.umd.edu                College Park, MD  20742                 |
| (301)-454-0348                      USA                                     |
| Compu$erve: 76234,2425                                                      |
|                                                                             |
|      "Improving the quality of electronics hardware through software        | 
|         development and research into physics of failure models."           |
*-----------------------------------------------------------------------------*
 

kensy@microsoft.UUCP (Ken SYKES) (08/20/90)

In article <1990Aug19.132046.24146@eng.umd.edu> burgoyne@eng.umd.edu (John R. Burgoyne) writes:
>PART 2
>^^^^^^
>
>The question I am trying to get answered is the following. How do we go about
>using large amounts of memory in our programs we develop with SDK version
>3.0? I am using MS C compiler 6.0, SDK version 3.0, Windows version 3.0. The
>MS Windows SDK book "Tools" says on page 1-3 that the compact and large
>models are not recommended for Windows programs because data segments of
>programs created with compact or large models are fixed, and because only one
>instance of such programs can be run. If anyone can answer any of the
>following questions, I would be very happy if you do so.
>
>Can one use the medium model and have data larger than 64K if it is global?
>
>Can one use the medium model and have global arrays larger than 64K?

The answer to the first two is no.  A medium model program has a single 
data segment.

>
>Is it ever possible to have arrays larger than 64K?
>

Well, yes, sort of: you can use a huge pointer to a chunk of allocated
memory (or use compact/large, which is bad - it will limit your program
to one instance!!!).  See the next question.

>What is the general strategy for using lots of memory in a Windows program?
>

The general strategy is to use GlobalAlloc to allocate the memory you need.
The memory should be allocated moveable to avoid heap fragmentation.  I
think the SDK suggests that it is okay to allocate fixed memory if you are
running in protected mode (since the kernel can change the selector base
without changing the selector).  This gives you the advantage of not having
to lock/unlock memory all the time, BUT if you want your program to run in
real mode this approach should not be used.  I will give two examples of
allocating a large array - one with moveable memory, one with fixed.

Moveable:

int huge *hpArray;
HANDLE hArray;

/* Allocate 128,000 bytes of moveable, zero-initialized memory (GHND)
   and lock it to get a huge pointer that can be indexed past 64K. */
hArray = GlobalAlloc(GHND, 128000L);
hpArray = (int huge *) GlobalLock(hArray);

hpArray[10] = 20;

/* Unlock so Windows can move the block again, then free it. */
GlobalUnlock(hArray);
GlobalFree(hArray);
hpArray = NULL;

Fixed:

int huge *hpArray;

/* Allocate moveable (GHND) and lock it immediately; only the pointer is
   kept.  In protected mode the selector lives in the high word of it. */
hpArray = (int huge *) GlobalLock(GlobalAlloc(GHND, 128000L));

hpArray[10] = 20;

/* Recover the selector from the pointer to unlock and free the block. */
GlobalUnlock(HIWORD((DWORD) hpArray));
GlobalFree(HIWORD((DWORD) hpArray));

There are a few differences to note in this second piece of code:

- The handle was not retained from GlobalAlloc.  The Global functions can
also take selectors so we pass the selector to the GlobalUnlock / GlobalFree
functions.

- The memory was initially allocated moveable (GHND) instead of fixed (GPTR).
This is done because fixed memory is PAGE LOCKED in Windows 3.0.  This means
that the chunk of memory cannot be demand-paged to disk - it always sits in
physical memory.  Needless to say, allocating large chunks of page-locked
memory will bring a system to its knees.  So what this code does instead is
allocate the block as moveable and immediately lock it.  This avoids the
page-lock problem.

- There doesn't seem to be any less work in the second piece of code.  Well,
the difference is that in the fixed case you allocate the block the first
time it is needed and free it when you're done - no global locks/unlocks in
between.  People who write Windows programs know how tedious locking and
unlocking memory all the time can be.

- Notice that I didn't check whether the GlobalAlloc call succeeded.  THIS
IS BAD BAD BAD!!!  In a real program, always check memory allocations for
success.  The check was left out here only to keep the example simple.

Another thing to consider is whether some of the memory you allocate can
be discardable.  This means that the contents of the buffer can be thrown
out if nobody has locked the memory down.  This is often the case for work
buffers.  Making them discardable allows Windows to free up more memory if
it needs to.  This will add a little overhead to your program: whenever a
function wants to access the memory, it first checks whether the block is
still present, reallocates it if it has been discarded, then locks it down.
The memory is then unlocked when it is no longer needed.  More effort, yes,
but it will make you a better neighbor.  Just because Windows has protected
mode doesn't mean apps should go allocating huge chunks of memory...
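
Here is a rough sketch of that check/reallocate/lock pattern.  The names
GetWorkBuffer and WORKBUF_SIZE are made up for the example, and the exact
GlobalReAlloc flags are worth verifying against the SDK reference:

#include <windows.h>

#define WORKBUF_SIZE 32768L

static HANDLE hWorkBuf = NULL;   /* one discardable scratch buffer */

/* Return a locked pointer to the work buffer, reallocating it if Windows
   discarded it since the last use.  The caller must GlobalUnlock(hWorkBuf)
   when finished, and must be prepared to regenerate the contents. */
LPSTR GetWorkBuffer(void)
{
    LPSTR lp;

    if (hWorkBuf == NULL)
        hWorkBuf = GlobalAlloc(GMEM_MOVEABLE | GMEM_DISCARDABLE | GMEM_ZEROINIT,
                               WORKBUF_SIZE);
    if (hWorkBuf == NULL)
        return NULL;                      /* out of memory */

    lp = GlobalLock(hWorkBuf);
    if (lp == NULL)                       /* discarded: its size is now zero */
    {
        if (GlobalReAlloc(hWorkBuf, WORKBUF_SIZE, GMEM_MOVEABLE) == NULL)
            return NULL;
        lp = GlobalLock(hWorkBuf);        /* contents must be rebuilt */
    }
    return lp;
}

Since the buffer can vanish between uses, this only makes sense for data
you can cheaply regenerate.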
                    
>Has anyone written a program which uses more than 640K? What are you doing
>and how did you do it?
>

I've written some apps that munge DIBs.  Each loads an entire DIB into
memory, chews on it, and spits it out to another DIB in memory, possibly
with intermediate DIBs generated in between.  This can be quite a large
amount of memory, especially with 24-bit DIBs.  The technique I used is the
fixed-memory case mentioned above.  The programs are quick and dirty and
don't need to run in real mode.



Ken Sykes
No fancy signature, standard disclaimer, etc.

prk@planet.bt.co.uk (Peter Knight) (08/21/90)

burgoyne@eng.umd.edu (John R. Burgoyne) writes:


>The question I am trying to get answered is the following. How do we go about
>using large amounts of memory in our programs we develop with SDK version
>3.0? I am using MS C compiler 6.0, SDK version 3.0, Windows version 3.0. The
>MS Windows SDK book "Tools" says on page 1-3 that the compact and large
>models are not recommended for Windows programs because data segments of
>programs created with compact or large models are fixed, and because only one
>instance of such programs can be run. If anyone can answer any of the
>following questions, I would be very happy if you do so.

>Can one use the medium model and have data larger than 64K if it is global?

No, the compiler will not generate code to change the DS register.  However,
your program can have more than 64K of data by using both global and stack
data and by using calls to the far malloc, if available.

>Can one use the medium model and have global arrays larger than 64K?

Yes and no.  You can only manipulate such arrays if they are obtained
through the far (huge) allocation call, and they will have to be accessed
through pointers declared huge.  I cannot give a definitive answer on whether
these services will work under Windows; check all your documentation for
memory allocation (likely to be different in Windows) and also for huge data
types and pointers in the C manuals.
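
For what it's worth, Microsoft C 6.0 provides halloc()/hfree() in <malloc.h>
for huge (>64K) allocations.  A minimal sketch, assuming the run-time huge
allocator behaves itself under Windows (the SDK steers you towards
GlobalAlloc, so treat this as illustrative only):

#include <malloc.h>

/* Sketch: a 200,000-byte array of longs from the C run-time's huge
   allocator.  For arrays over 128K the element size must be a power of
   two, so that no element straddles a 64K segment boundary. */
void Demo(void)
{
    long huge *hp = (long huge *) halloc(50000L, sizeof(long));

    if (hp == NULL)
        return;                  /* always check large allocations */

    hp[49999L] = 42L;            /* huge arithmetic carries past 64K */

    hfree(hp);                   /* matching free for halloc() */
}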

>Is it ever possible to have arrays larger than 64K?

Yes, see rest of reply.

>What is the general strategy for using lots of memory in a Windows program?

You could use the expanded memory manager.

>Has anyone written a program which uses more than 640K? What are you doing
>and how did you do it?

Use OS/2!

Peter Knight 

BT Research
Tel +44 473 644108

#include <std.disclaimer>

mojo@netcom.UUCP (Morris Jones) (08/23/90)

Yes you can use large chunks of memory in Windows 3.0.  I've been doing
it successfully for a while now.

To allocate chunks larger than 64K, use GlobalAlloc(), and lock the handle
into a pointer declared "huge".  The C compiler (at least Microsoft's) will
properly increment the pointer, using an external variable named __ahincr to
bump the selector to the next 64K chunk (this arrangement is called selector
tiling).  Just don't try doing pointer comparisons or converting a
Selector:Offset address into a physical address.
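
Untested sketch of what that looks like in practice - FillBuffer and
BUF_BYTES are just names for the example:

#include <windows.h>

#define BUF_BYTES 262144L            /* 256K spans four tiled selectors */

/* Fill a >64K buffer byte by byte through a huge pointer; the compiler's
   __ahincr-based arithmetic advances the selector each time the offset
   wraps past 64K. */
BOOL FillBuffer(void)
{
    HANDLE     hMem;
    BYTE huge *hp;
    long       i;

    hMem = GlobalAlloc(GMEM_MOVEABLE | GMEM_ZEROINIT, BUF_BYTES);
    if (hMem == NULL)
        return FALSE;

    hp = (BYTE huge *) GlobalLock(hMem);
    for (i = 0; i < BUF_BYTES; i++)
        hp[i] = (BYTE) (i & 0xFF);   /* indexing crosses segment boundaries */

    GlobalUnlock(hMem);
    GlobalFree(hMem);
    return TRUE;
}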

You can also write 32-bit code under Windows 3.0 using the WINMEM32 DLL.
Of course, this requires a true 32-bit compiler such as Watcom or MetaWare.

If you want to write some assembler to do memory references, you can allocate
some 32-bit memory using WINMEM32 and use 32-bit offsets to reference it.
Or use Global16PointerAlias() to get a 16-bit selector into 32-bit memory.

The possibilities are endless.  But if all you've done in the past is DOS
programs, I highly recommend Intel's 80386 Programmer's Guide.  Get familiar
with how this stuff works.

Mojo
Caere Corp.

-- 
mojo@netcom.UUCP          Site Coordinating Instructor, San Jose South
Morris "Mojo" Jones       Skilled Motorcycling And Rider Training (S.M.A.R.T.)
Campbell, CA              800-675-5559 ... 800-CC-RIDER