[comp.unix.wizards] shmat

jeff@astph.UUCP (8592x2) (08/16/90)

Question concerning the shared memory attach call:

I am writing a shared memory allocation manager for a multi-user
database. This manager will allow several processes to be attached
to the same memory segment at the same time. The first process to
attach to the shared memory segment will be returned a memory address
that points to the shared memory block. 

I need to know if additional attaches by other processes will be
guaranteed to return the same address as that the first process
was returned. I am aware that you can request a particular address,
but why bother communicating that information between the processes
if the same address is returned anyway? I would appreciate any
answers or direction to documentation.

Thanks		jeff martin
		astph!jeff@psuvax1
		philadelphia phillies

wht@n4hgf.Mt-Park.GA.US (Warren Tucker) (08/16/90)

In article <27@astph.UUCP> jeff@astph.UUCP (8592x2) writes:
>
>Question concerning the shared memory attach call:
>
>I am writing a shared memory allocation manager for a multi-user
>database.
>I need to know if additional attaches by other processes will be
>guaranteed to return the same address as that the first process
>was returned.

To be sure, specify the attach address, regardless of what the FM says.
Make a small program that passes 0 for the address and see what it
returns.  Then use that value hardcoded, possibly #defined for each
architecture you plan to run the program on.

E.g.,
/*---------------------- header file --------------------*/
#if defined(M_I286)
#define SHMPTR  char far *
#define SYSPTR  struct system_control *
#else
#define SHMPTR  char *
#define SYSPTR  struct system_control *
#endif

#if defined(M_SYS5)
#if defined(M_I386)
#define SHM_ATTACH_ADDR     ((SHMPTR)0x67000000L)   /* 386 */
#else
#define SHM_ATTACH_ADDR     ((SHMPTR)0x00670000L)   /* 286 */
#endif
#else /* not xenix */
#if defined(pyr)
#define SHM_ATTACH_ADDR     ((SHMPTR)0xC0000000L)   /* PYRAMID */
#else
#define SHM_ATTACH_ADDR     ErrorInHeaderFile
#endif
#endif

/*---------------------- code file --------------------*/

    if((sys = shmat(*pshmid,SHM_ATTACH_ADDR,0)) !=
        (SHMPTR)SHM_ATTACH_ADDR)
    {
        /* attach error: either returned (SHMPTR)-1 or wrong address */
    }
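The "small program that passes 0" mentioned above might look like the
following sketch (not from the original post; segment size and permissions
are arbitrary):

```c
/* Sketch of the probe Warren suggests: attach with address 0 and
 * print the address the kernel chooses for this architecture. */
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>

/* Create a throwaway private segment, attach at address 0, report
 * the result.  Returns the kernel-chosen address, or (void *)-1. */
void *probe_attach_addr(void)
{
    int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    void *addr;

    if (id < 0)
        return (void *)-1;
    addr = shmat(id, (void *)0, 0);    /* 0 => let the kernel pick */
    if (addr != (void *)-1) {
        printf("kernel attached at %p\n", addr);
        shmdt(addr);
    }
    shmctl(id, IPC_RMID, (struct shmid_ds *)0);    /* clean up */
    return addr;
}
```

The printed value is what would then be hardcoded per architecture, as in
the header file above.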
 
-----------------------------------------------------------------------
Warren Tucker, TuckerWare   gatech!n4hgf!wht or wht@n4hgf.Mt-Park.GA.US
"Tell the moon; don't tell the March Hare: He is here. Do look around."

gt0178a@prism.gatech.EDU (BURNS,JIM) (08/16/90)

in article <27@astph.UUCP>, jeff@astph.UUCP (8592x2) says:
> 
> 
> Question concerning the shared memory attach call:
> 
> I am writing a shared memory allocation manager for a multi-user
> database. This manager will allow several processes to be attached
> to the same memory segment at the same time. The first process to
> attach to the shared memory segment will be returned a memory address
> that points to the shared memory block. 
> 
> I need to know if additional attaches by other processes will be
> guaranteed to return the same address as that the first process
> was returned. I am aware that you can request a particular address,
> but why bother communicating that information between the processes
> if the same address is returned anyway? I would appreciate any
> answers or direction to documentation.

I don't see why not. The shmget(2) routine specifies the memory block
size. All the shmat(2) routine does is return a pointer to the beginning
of that block (by default). The same block is returned to different
processes if they use the same shmid returned by shmget(2). Adapted from
the HP 9000/800 HP-UX Real Time Programmers Manual:

On shmget(2):

"If your communication application consists of related processes, you
should call shmget(2) with the key parameter set to IPC_PRIVATE in the
following way:

   myshmid = shmget (IPC_PRIVATE, 4096, 0600);

"This call to shmget(2) returns a unique shared memory identifier (shmid),
which is saved in the variable myshmid, for the newly created shared
memory segment. The size of the segment is 4096 bytes and its access
permissions are read and write permission for the owner. This call to
shmget(2) should appear in your program sometime before the fork()
statement so that the child processes in your communication application
will inherit myshmid.

"If your communication application consists of unrelated processes, you
should call shmget(2) with the key parameter set to the return value of
the ftok() subroutine [or just use the ASCII representation of a
4-character string that you know will be unique. - JB ] [...] As an example,
all unrelated processes in a communication application can call shmget(2)
in the following [altered - JB ] way:"

   myshmid = shmget (0x50485331, 4096, IPC_CREAT|0600);

to use a key of "PHS1".

On shmat(2):

"Once a process has a valid shmid, it usually wants to attach, perhaps to
lock, to read and/or to write to, and then to detach the shared memory
segment. [...]

"A process must invoke the shmat() system call to attach a shared memory
segment to the data space of the process. The man page for shmop(2) lists
three parameters for shmat(): shmid, shmaddr, and shmflg.

"The first parameter, shmid, must be a valid shared memory identifier as
explained in the previous section.

"The second parameter, shmaddr, is the attach address of the shmid
parameter. Parameter shmaddr should be 0 in almost all cases. Only at
certain times and only in certain implementations of HP-UX can shmaddr be
other than 0. If a previous shmat() has not been called on the shmid; that
is, if the shared memory segment has not already been attached, then the
only correct value for shmaddr is 0. If, however, some process has already
called shmat() on the specified shmid, then the shmaddr can be 0 or some
other implementation-dependent value. [...] [As it turns out, non-zero
parameters aren't supported at all on the model 800 architecture - only
the 300. -JB ]

"The third parameter of shmat(), shmflg, is used only to further restrict
the owner's access permissions to the shared memory segment. [...]"
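Putting the two manual excerpts together: the shmget-before-fork() pattern
the manual describes might look like this sketch (error handling
abbreviated; not taken from the HP manual):

```c
/* Related processes sharing a segment: the parent calls shmget()
 * before fork() so the child inherits the shmid; each process then
 * does its own shmat() with shmaddr == 0. */
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>
#include <unistd.h>
#include <string.h>

/* Returns 0 if the parent saw the child's write, -1 otherwise. */
int shm_parent_child(void)
{
    int shmid = shmget(IPC_PRIVATE, 4096, 0600);
    int status, ok = -1;
    char *p;
    pid_t pid;

    if (shmid < 0)
        return -1;
    pid = fork();
    if (pid == 0) {                        /* child: attach and write */
        p = (char *)shmat(shmid, (void *)0, 0);
        if (p == (char *)-1)
            _exit(1);
        strcpy(p, "from child");
        shmdt(p);
        _exit(0);
    }
    if (pid > 0) {
        waitpid(pid, &status, 0);          /* wait for the child's write */
        p = (char *)shmat(shmid, (void *)0, 0);
        if (p != (char *)-1) {
            if (strcmp(p, "from child") == 0)
                ok = 0;
            shmdt(p);
        }
    }
    shmctl(shmid, IPC_RMID, (struct shmid_ds *)0);
    return ok;
}
```

Note that nothing here depends on the two shmat() calls returning the same
address, which is the point under discussion.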

Hope this answers your question.
-- 
BURNS,JIM
Georgia Institute of Technology, Box 30178, Atlanta Georgia, 30332
uucp:	  ...!{decvax,hplabs,ncar,purdue,rutgers}!gatech!prism!gt0178a
Internet: gt0178a@prism.gatech.edu

gt0178a@prism.gatech.EDU (BURNS,JIM) (08/16/90)

in article <187@n4hgf.Mt-Park.GA.US>, wht@n4hgf.Mt-Park.GA.US (Warren Tucker) says:
< In article <27@astph.UUCP> jeff@astph.UUCP (8592x2) writes:
< To be sure, specify the attach address, regardless of what the FM says.
< Make a small program that passes 0 for the address and see what it
< returns.  Then, use that value hardcoded, possibly #defined for each
< arcitecture you plan to run the program on.

What if yours is not the only application creating and deleting shared
memory segments? Are you saying you always get the same address?
-- 
BURNS,JIM
Georgia Institute of Technology, Box 30178, Atlanta Georgia, 30332
uucp:	  ...!{decvax,hplabs,ncar,purdue,rutgers}!gatech!prism!gt0178a
Internet: gt0178a@prism.gatech.edu

thomas@uplog.se (Thomas Tornblom) (08/16/90)

In article <187@n4hgf.Mt-Park.GA.US> wht@n4hgf.Mt-Park.GA.US (Warren Tucker) writes:

   In article <27@astph.UUCP> jeff@astph.UUCP (8592x2) writes:
   >
   >Question concerning the shared memory attach call:
   >
   >I am writing a shared memory allocation manager for a multi-user
   >database.
   >I need to know if additional attaches by other processes will be
   >guaranteed to return the same address as that the first process
   >was returned.

   To be sure, specify the attach address, regardless of what the FM says.
   Make a small program that passes 0 for the address and see what it
   returns.  Then, use that value hardcoded, possibly #defined for each
   arcitecture you plan to run the program on.

[example deleted]

This is not guaranteed to work. Typically the kernel allocates the addresses
depending on the memory layout of the running process.

Our sysV.2 68k kernel uses the current end of bss rounded up with some
constant as the lowest base for shm. It also checks that the segment doesn't
overlap into the stack or other shared memory segments.

If you must have the same addresses between the processes (which is nice for
pointers and stuff) I'd pick some high constant address, say 0x[48c]0000000
or so that isn't likely to map onto anything on the architectures you're using.
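A fixed-address attach in that spirit, with a fallback to a kernel-chosen
address when the requested one is refused, might be sketched like this (the
fallback is my addition, not part of Thomas's suggestion; callers must
check which case occurred before sharing raw pointers):

```c
/* Try to attach at a preferred fixed address; if the kernel refuses
 * (address in use, misaligned, or fixed addresses unsupported), fall
 * back to letting the kernel choose. */
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>

void *attach_preferred(int shmid, void *preferred)
{
    void *p = shmat(shmid, preferred, 0);

    if (p == (void *)-1)                   /* fixed address refused */
        p = shmat(shmid, (void *)0, 0);    /* kernel-chosen fallback */
    return p;
}
```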

Thomas
-- 
Real life:	Thomas Tornblom		Email:	thomas@uplog.se
Snail mail:	TeleLOGIC Uppsala AB		Phone:	+46 18 189406
		Box 1218			Fax:	+46 18 132039
		S - 751 42 Uppsala, Sweden

bogatko@lzga.ATT.COM (George Bogatko) (08/16/90)

In article <27@astph.UUCP>, jeff@astph.UUCP (8592x2) writes:

> The first process to attach the shared memory segment will be returned
> a memory address that points to the shared memory block. 

Not quite.  What you get is an address that is MAPPED into your data-space,
not the physical location of the segment.  A minor point, but one that
should be understood.

> I need to know if additional attaches by other processes will be
> guaranteed to return the same address as that the first process
> was returned.

Not guaranteed, but if the attach is the *VERY FIRST THING* you do
in your client process, then the chances are high that the segment will
be mapped to the same number.  If you do other attaches to other
shared memory segments before the one in question, then the number
will be different.

It doesn't seem to matter (on 3B's at least) if you malloc first, or
have buffers either in data, bss, or stack.  The number that is
returned apparently is a base reserved for shared memory
attachments.  On 3B's, that number starts at 0xc1000000.
I tried an experiment, and saw that the first attach occurred at 0xc1000000,
and the second occurred at 0xc1020000 even though the segments were
1000 bytes each.  The box has a mind of its own.

(On a 386, the attaches occurred at 0x80400000, and 0x80800000).

> I am aware that you can request a particular address,
> but why bother communicating that information between the processes
> if the same address is returned anyway?

All the foregoing does not *guarantee* anything.  It is what *seems* to
happen on two particular boxes, given certain circumstances.

If you feel lucky, go ahead and assume that if you do it right and
consistently, you will get the same number.  If you don't feel lucky,
then you have three options.

1.  Double-indirected pointers, or array offsets.
    Double-indirected pointers are real nasty stuff and make for
    real hard-to-read code, especially when casting is involved.  Use
    macros.  If you can afford array offsets, they will be easier
    to maintain.  With the modern optimizers available nowadays, the
    savings obtained by pointer manipulation as opposed to array offsets
    may not be sufficient to justify the added maintenance difficulty.

2.  Put the location of the shared memory attachment returned from
    the initial shmat call somewhere generally available, and then have
    the clients use that number for the attach.  One suggestion seen on
    the net was to reserve the first [sizeof(char *)] at the front
    of the shared memory segment, and then do two attaches.  One to
    get that number, and the second (after a detach) to re-attach
    to the new number.  The difficulty here is that you still have
    to be careful when you do the second attach, i.e. before or after other
    attaches.  Experiment a lot with this one.

3.  Use the same hard-coded number for all attaches.  This is the simplest,
    least portable, and most offensive way of doing it.  Unfortunately, it
    is the one I've seen most often used.  Yuk.

Hope this helps

GB

volpe@underdog.crd.ge.com (Christopher R Volpe) (08/17/90)

In article <THOMAS.90Aug16123252@uplog.uplog.se>, thomas@uplog.se
(Thomas Tornblom) writes:
|>
|>If you must have the same addresses between the processes (which is nice for
|>pointers and stuff) I'd pick some high constant address, say 0x[48c]0000000
|>or so that isn't likely to map onto anything on the architectures
|>you're using.
|>

I was working on a project with a couple of people last summer where we
had to use shared memory segments and processes had to exchange pointers.
We decided it just wasn't worth it to take chances on a hardcoded address
that might fail on any particular run because the kernel couldn't attach
the segment at the address specified. So we just did all the pointer
exchanging in terms of offsets from the base address of the segment
and let each individual process convert between offsets and virtual 
addresses.  It's a little tedious at most (like when you're sharing
linked lists), but the added flexibility and reliability is worth
the effort, IMHO.
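Offset-based links of the kind Chris describes might be sketched like this
(names and layout are illustrative, not from the project he mentions):

```c
/* A linked list kept inside a shared segment: links are byte offsets
 * from the segment base, so every process can follow them no matter
 * where its own shmat() mapped the segment.  Offset 0 means "null"
 * (slot 0 of the segment is reserved so no node lives there). */
#include <stddef.h>

struct node {
    size_t next;        /* offset of next node; 0 == end of list */
    int    value;
};

/* Convert between offsets and per-process virtual addresses. */
#define OFF_TO_PTR(base, off)  ((struct node *)((char *)(base) + (off)))
#define PTR_TO_OFF(base, p)    ((size_t)((char *)(p) - (char *)(base)))

/* Walk a list given this process's base address for the segment. */
int list_sum(void *base, size_t head_off)
{
    int sum = 0;
    size_t off;

    for (off = head_off; off != 0; off = OFF_TO_PTR(base, off)->next)
        sum += OFF_TO_PTR(base, off)->value;
    return sum;
}
```

Each process passes in whatever base its own shmat() returned; the offsets
stored in the segment are identical for everyone.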
                                    
==================
Chris Volpe
G.E. Corporate R&D
volpecr@crd.ge.com

wht@n4hgf.Mt-Park.GA.US (Warren Tucker) (08/17/90)

In article <THOMAS.90Aug16123252@uplog.uplog.se> thomas@uplog.se (Thomas Tornblom) writes:
>In article <187@n4hgf.Mt-Park.GA.US> wht@n4hgf.Mt-Park.GA.US (Warren Tucker) writes:
>
>   In article <27@astph.UUCP> jeff@astph.UUCP (8592x2) writes:
>   To be sure, specify the attach address, regardless of what the FM says.
>This is not guaranteed to work. Typically the kernel allocates the addresses
>depending of the memory layout of the running process.
In all of the implementations I have used, the kernel performs this
optimization.  And of course, it has been working for me for 4 years.
 
-----------------------------------------------------------------------
Warren Tucker, TuckerWare   gatech!n4hgf!wht or wht@n4hgf.Mt-Park.GA.US
"Tell the moon; don't tell the March Hare: He is here. Do look around."

wht@n4hgf.Mt-Park.GA.US (Warren Tucker) (08/17/90)

In article <12638@hydra.gatech.EDU> gt0178a@prism.gatech.EDU (BURNS,JIM) writes:
>in article <187@n4hgf.Mt-Park.GA.US>, wht@n4hgf.Mt-Park.GA.US (Warren Tucker) says:
>< In article <27@astph.UUCP> jeff@astph.UUCP (8592x2) writes:
>< To be sure, specify the attach address, regardless of what the FM says.
>< Make a small program that passes 0 for the address and see what it
>< returns.  Then, use that value hardcoded, possibly #defined for each
>< arcitecture you plan to run the program on.
>
>What if yours is not the only application creating and deleting shared
>memory segments? Are you saying you always get the same address?

Yes, in a virtual system where the shmat'd address is virtual.
 
-----------------------------------------------------------------------
Warren Tucker, TuckerWare   gatech!n4hgf!wht or wht@n4hgf.Mt-Park.GA.US
"Tell the moon; don't tell the March Hare: He is here. Do look around."

quan@hpcc01.HP.COM (Suu Quan) (08/17/90)

/ hpcc01:comp.unix.wizards / jeff@astph.UUCP (8592x2) /  1:11 pm  Aug 15, 1990 /
>
>Question concerning the shared memory attach call:
>
>I need to know if additional attaches by other processes will be
>guaranteed to return the same address as that the first process
>was returned. I am aware that you can request a particular address,
>but why bother communicating that information between the processes
>if the same address is returned anyway? I would appreciate any
>answers or direction to documentation.

In spite of other positive answers, the real answer is NO.

The exact same program, run on different kernels, will most probably
result in different attached addresses. The attached address depends on
too many different kernel parameters to discuss here in a few lines.

On the other hand, if you want to request a particular address, the down
side is that you don't know whether any other application has used
that range of addresses or not. This practice is definitely not recommended.

devil@techunix.BITNET (Gil Tene) (08/17/90)

In article <12636@hydra.gatech.EDU> gt0178a@prism.gatech.EDU (BURNS,JIM) writes:
>in article <27@astph.UUCP>, jeff@astph.UUCP (8592x2) says:
>>
>>
>> Question concerning the shared memory attach call:
>>
>> I am writing a shared memory allocation manager for a multi-user
>> database. This manager will allow several processes to be attached
>> to the same memory segment at the same time. The first process to
>> attach to the shared memory segment will be returned a memory address
>> that points to the shared memory block.
>>
>> I need to know if additional attaches by other processes will be
>> guaranteed to return the same address as that the first process
>> was returned. I am aware that you can request a particular address,
>> but why bother communicating that information between the processes
>> if the same address is returned anyway? I would appreciate any
>> answers or direction to documentation.
>
>I don't see why not. The shmget(2) routine specifies the memory block
>size. All the shmat(2) routine does is return a pointer to the beginning
>of that block (by default). The same block is returned to different
>processes if they use the same shmid returned by shmget(2). Adapted from
>the HP 9000/800 HP-UX Real Time Programmers Manual:
>

To start with : Don't risk it.

shmat() attaches to a shared memory block. Each process on a VIRTUAL
MEMORY system may have this block attached at a different VIRTUAL address.
It IS possible to specify a specific address to be used for the mapping,
but this is risky for two reasons : a) some other application may
be mapping the same address. b) some systems DO NOT ALLOW THIS OPTION.

I have had experience with this kind of problem. My experience on Sun-3
systems running SunOS 4.0.3 is that the shmat() DOES return the same
mapping address in all processes. When moving some software to an
HP 9000/375 running HP-UX 7.0, some of the design flaws surfaced:
HP-UX DOES NOT map to the same address, and this software had
shared POINTERS. This was easy to fix, as most pointers can be
replaced with offsets, and workarounds can be done for anything else.

In short : If you want portable sysV shared memory code, don't assume
shmat() returns the same address in all processes. You'll find out
FAST on any system that does not.

Gil.

--
--------------------------------------------------------------------
| Gil Tene                      "Some days it just doesn't pay     |
| devil@techunix.technion.ac.il   to go to sleep in the morning."  |
--------------------------------------------------------------------

gerry@jts.com (Gerry Roderick Singleton ) (08/18/90)

In article <THOMAS.90Aug16123252@uplog.uplog.se> thomas@uplog.se (Thomas Tornblom) writes:
>In article <187@n4hgf.Mt-Park.GA.US> wht@n4hgf.Mt-Park.GA.US (Warren Tucker) writes:
>
>   In article <27@astph.UUCP> jeff@astph.UUCP (8592x2) writes:
>   >
>   >Question concerning the shared memory attach call:
>   >
>   >I am writing a shared memory allocation manager for a multi-user
>   >database.
>   >I need to know if additional attaches by other processes will be
>   >guaranteed to return the same address as that the first process
>   >was returned.
>
>   To be sure, specify the attach address, regardless of what the FM says.
>   Make a small program that passes 0 for the address and see what it
>   returns.  Then, use that value hardcoded, possibly #defined for each
>   arcitecture you plan to run the program on.
>
>[example deleted]
>
>This is not guaranteed to work. Typically the kernel allocates the addresses
>depending of the memory layout of the running process.
>
>Our sysV.2 68k kernel uses the current end of bss rounded up with some
>constant as the lowest base for shm. It also checks that the segment doesn't
>overlap into the stack or other shared memory segments.
>
>If you must have the same addresses between the processes (which is nice for
>pointers and stuff) I'd pick some high constant address, say 0x[48c]0000000
>or so that isn't likely to map onto anything on the architectures you're using.
>
>Thomas

Since we're talking about attaching the same memory segment to multiple
processes, I want to add a gotcha to the discussion and ask for help on my
part of the problem.  The problem is simply that under SunOS 4.1 you
cannot release a segment, using shmdt(2), that was attached by shmat(2) as
read-only.  I contacted my Sun people with the bug, who told me to place a
service call.  I am in the process of doing this but would sure like any
solutions/workarounds that you wizards have on hand.

Here's what I sent Sun:
Subject: SunOS 4.1 / Shared Memory Bug
Status: OR

The following example demonstrates shmdt(2)'s inability to release an R/O 
segment.

Please e-mail your solutions and I will summarize to the list.

Symptom:  shmdt() always fails if the segment was attached read-only with
	shmat().

Example Program:

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>
  extern char *shmat();

#define PERMS 0777
#define SHMSIZE 10000

main()
{
    char *shared_data;
    int shmid;
    key_t key;
    int shmflag;
    int ret;
    
    key = getpid();
    
    shmid = shmget( key, SHMSIZE, IPC_CREAT | PERMS );
    if( shmid < 0 )
      exiterr( "error shmget ");
    
    printf( "shmid %d\n", shmid );

/* This is the culprit.  If SHM_RDONLY is left off it works.
 * (Note: shmflg takes attach flags such as SHM_RDONLY, not
 * permission bits; and shmat() returns (char *)-1 on failure,
 * not NULL.) */
    shared_data = shmat( shmid, (char *)0, SHM_RDONLY );

    if( shared_data == (char *)-1 )
      exiterr( "error shmat");

    ret = shmdt( shared_data );

    if( ret < 0 )
      exiterr("error shmdt ");

    exit(0);
}

exiterr(s)
     char *s;
{
    
    fprintf( stderr, "%s\n",s );
    exit(99);
}


Regards,
ger
--
G. Roderick Singleton, System and Network Administrator, JTS Computers 
	{uunet | geac | torsqnt}!gerry@jtsv16.jts.com
Be careful of reading health books, you might die of a misprint.
		-- Mark Twain

gt0178a@prism.gatech.EDU (BURNS,JIM) (08/19/90)

in article <9763@discus.technion.ac.il>, devil@techunix.BITNET (Gil Tene) says:
[I write:]
>>I don't see why not. The shmget(2) routine specifies the memory block
>>size. All the shmat(2) routine does is return a pointer to the beginning
>>of that block (by default). The same block is returned to different
>>processes if they use the same shmid returned by shmget(2). Adapted from
>>the HP 9000/800 HP-UX Real Time Programmers Manual:

> HP-UX DOES NOT map to the same address, and this software had
> shared POINTERS. This was easy to fix, as most pointers can be
> replaced with offsets, and workarounds can be done for anything else.

Perhaps I didn't make myself clear: While shmat() may or may not return
the same address to each process, it should be irrelevant. The purpose of
shmat() is to let the system provide you the base address so you don't
have to hardcode/guess/calculate the base yourself. I agree with the
numerous posters who say share offsets from this base, not the base
itself.

Probably the easiest way to do this is to typedef a structure, then assign
the return value of shmat() to a pointer to that structure. Then you can
use ptr->fieldname for everything of relevance, and #include these
definitions in each program. (Naturally, use the same compiler for each
program to avoid alignment problems with the structure. When the compiler
&/or the OS change, the alignment may change, but the relative offset
BETWEEN programs does not, so a program that writes to that field will
write to the same location as a program that reads that field. If this
didn't work, FORTRAN programmers wouldn't be able to put COMMON blocks in
shared memory, which I do.)
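The struct-overlay approach Jim describes might be sketched as follows
(the structure's field names are made up for illustration; every
cooperating program would #include the same definition):

```c
/* Shared layout #included by every program: each process overlays
 * this struct on whatever address its own shmat() returned, and
 * refers to fields by name instead of by raw offset. */
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>

struct system_control {
    int  ready;             /* has the writer initialized the data? */
    long counter;
    char message[64];
};

/* Attach and view the segment as the agreed-upon structure.
 * Returns a null pointer on failure. */
struct system_control *attach_control(int shmid)
{
    void *p = shmat(shmid, (void *)0, 0);

    return p == (void *)-1 ? (struct system_control *)0
                           : (struct system_control *)p;
}
```

As Jim notes, the field offsets are fixed by the compiler, so two processes
built with the same compiler agree on the layout even if their attach
addresses differ.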
-- 
BURNS,JIM
Georgia Institute of Technology, Box 30178, Atlanta Georgia, 30332
uucp:	  ...!{decvax,hplabs,ncar,purdue,rutgers}!gatech!prism!gt0178a
Internet: gt0178a@prism.gatech.edu

terryl@osf.osf.org (08/25/90)

In article <2010@lzga.ATT.COM> bogatko@lzga.ATT.COM (George Bogatko) writes:
>It doesn't seem to matter (on 3B's at least) if you malloc first, or
>have buffers either in data, bss, or stack.  The number that is
>returned apparently is a base reserved for shared memory
>attachments.  On 3B's, that number starts at 0xc1000000.
>I tried an experiment, and saw that the first attach occured at 0xc1000000,
>and the second occured at 0xc1020000 even though the segments were
>1000 bytes each.  The box has a mind of it's own.
>
>(On a 386, the attaches occured at 0x80400000, and 0x80800000).


     There might be a logical explanation for this behavior (about where
addresses get attached in shared memory segments).

     What could possibly be happening is that the DIFFERENCE between the
successive addresses could be the amount of memory that is capable of being
mapped with a PAGE of PTEs(Page Table Entries). If that is the case, then
not only are the processes sharing the same shared memory segment, but they
are also sharing the page of PTEs needed to map the segment.

     For example, using the numbers given for the 386 box above, one can see
that the difference is 0x400000 bytes, or 4 Mbytes of memory. This sure looks
suspiciously like a hardware page size of 4096 bytes, with each PTE taking up
4 bytes, providing 1024 PTEs per hardware page.

     The reason for doing this is to make bookkeeping of the shared memory
segment very simplistic; if it becomes necessary to move the shared memory
segment around in PHYSICAL memory, then all one has to do is modify the
page containing the PTEs for the segment, instead of tracking down each
process who has attached to the segment and then modifying the PTEs for each
individual process....
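The arithmetic behind that guess is easy to check: 1024 four-byte PTEs fit
in one 4096-byte page, and each PTE maps one 4096-byte page, spanning
exactly the 0x400000-byte stride observed on the 386:

```c
/* Sanity check of the PTE arithmetic above: bytes mapped by one
 * hardware page full of PTEs. */
long pte_page_span(long page_size, long pte_size)
{
    return (page_size / pte_size) * page_size;
}
```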

jeff@astph.UUCP (8592x2) (08/25/90)

I have begun to code my initial shared memory manager after many
helpful tips from the net. We have decided to code for the
most portable usage of the shmat() system call. It is our goal
to run our database on different systems with little or no
modification. Thanks for your advice.

Our database will work with a server/client process architecture.
The server and client will all need access into this shared
memory. We plan to allow the system to select the address to
map each process into the shared memory. The FM describes how
to make the shmat() call and let the system select the address.

Within this shared memory we will use many queues, free queues,
record queues, etc. We do not want to use absolute pointers
within these queues because the system may align the shared memory to
a different address with each shmat(). Thus we will use offsets
from the beginning of the shared memory segment. The queue
structures follow.

    typedef	struct {
	unsigned	short	offset; /* segment index */
	unsigned	char	segment;/* segment identifier */
    } SQUEUE;	/* singly linked list queue pointer */

    typedef	struct {
	SQUEUE			fore;	/* fore pointer */
	SQUEUE			back;	/* back pointer */
    } DQUEUE;	/* doubly linked list queue pointer */

The above structures will provide a relative 'offset' into a shared
memory segment and an identifier to a particular memory 'segment'.
Thus we can create a large shared memory segment (1 meg max) and
provide enough pointers. However if we need more than one meg we
can create another shared memory segment and use the 'segment'
field to point to another memory segment.
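Resolving one of these queue pointers to a per-process virtual address
might be sketched like this. The seg_base[] table and the 16-byte offset
granularity are my assumptions (a 16-bit offset counts 65536 units, so
16-byte units would reconcile it with the 1-meg segment size), not part of
jeff's design:

```c
/* Resolve a (segment, offset) queue pointer to a virtual address.
 * seg_base[] is a hypothetical per-process table holding the address
 * each shmat() returned; the offsets stored in the queues are the
 * same in every process even though the bases differ. */
#include <stddef.h>

typedef struct {
    unsigned short offset;   /* index within the segment, 16-byte units */
    unsigned char  segment;  /* which shared memory segment */
} SQUEUE;

#define MAXSEG 8
static char *seg_base[MAXSEG];   /* filled in after each shmat() */

void *squeue_addr(SQUEUE q)
{
    /* 16-byte units (an assumption): 65536 * 16 == 1 Mbyte max */
    return seg_base[q.segment] + (size_t)q.offset * 16;
}
```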

Database programmers, what do you think?

Thanks,			jeff martin
			astph!jeff@psuvax1.psu.edu
			psuvax1!astph!jeff
			philadelphia phillies