[comp.unix.wizards] shared memory

jab@lll-crg.arpa (03/25/87)

> From: Andrew Siegel <abs@nbc1.uucp>
> Subject: Question about Sys V shared memory
> Date: 23 Mar 87 22:28:19 GMT
> To:       unix-wizards@brl-sem.arpa
> 
> The situation:
> 
> I have a large shared memory segment.  I would like to be able to
> have a dynamic data structure within that segment, i.e. set aside
> some memory within the shm segment for allocation, and have one
> or more pointers within the shm segment pointing into this memory
> pool.
> 

Great, but unless the SHM segment is attached to the same virtual
address in each process, you can't use pointers to describe
the free addresses, since they'll be at different virtual addresses
in each process.

Choices:
	1) Store the info on what's free as "byte offset into SHM segment".
	   Then if the SHM segment lives at different places, the addresses
	   each process uses would be relative to the beginning of the SHM
	   segment, and each process would know where *THAT* was.
	2) Always locate the SHM segment in the same virtual address for all
	   processes, period.
The first choice has a lot going for it, including being more portable
and less cumbersome.  Unfortunately, it takes more CPU operations to evaluate
	*(base + offset)
than to dereference an address you already know.

You can see the problems with the second approach.
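
To make the first choice concrete, here is a minimal sketch of an
offset-based pool header in C.  (The struct and macro names are my own
invention, purely for illustration.)

	#include <sys/types.h>
	#include <sys/ipc.h>
	#include <sys/shm.h>

	typedef unsigned long shmoff_t;	/* byte offset into the segment */

	struct pool_header {		/* lives at offset 0 of the segment */
		shmoff_t free_list;	/* offset of first free block; 0 = none */
	};

	/* Convert between offsets and this process's virtual addresses. */
	#define OFF2PTR(base, off)	((void *)((char *)(base) + (off)))
	#define PTR2OFF(base, ptr)	((shmoff_t)((char *)(ptr) - (char *)(base)))

	/* Each process attaches wherever the kernel chooses ...	*/
	/*	char *base = (char *) shmat(shmid, (char *) 0, 0);	*/
	/* ... and always dereferences through OFF2PTR(base, off).	*/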

	Jeff Bowles
	Summit, NJ

rrw@cci632.UUCP (Rick Wessman) (11/21/88)

Does anyone know if non-swappable shared memory is supported by
System V? I need to know because some of our applications developers
want it. Our sister division in California supports it, but I cannot
find documentation for it in the SVID.

I would appreciate it if someone would point me in the right
direction.

rrw@cci632.UUCP (Rick Wessman) (11/21/88)

"Postnews" did not append my signature because it was too long. Here
it is:

rrw@cci632.UUCP (Rick Wessman) (11/21/88)

Maybe this time postnews will append the signature.

-- 
				Rick Wessman
				rochester!cci632!rrw
				uunet!ccicpg!cci632!rrw
				rlgvax!cci632!rrw

guy@auspex.UUCP (Guy Harris) (11/23/88)

>Does anyone know if non-swappable shared memory is supported by
>System V? I need to know because some of our applications developers
>want it. Our sister division in California supports it, but I cannot
>find documentation for it in the SVID.

It's not in the SVID, but some versions of System V have (privileged!)
SHM_LOCK and SHM_UNLOCK "shmctl" operations that lock shared memory
segments into memory.  There is otherwise no guarantee that shared
memory segments are non-swappable; some older implementations happen to
lock them in memory, but other ones (more recent ones, generally) do
not. 
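
For the curious, a minimal sketch of using those operations (the helper
name is mine; error handling is kept to a bare minimum):

	#include <stdio.h>
	#include <sys/types.h>
	#include <sys/ipc.h>
	#include <sys/shm.h>

	/* Lock the given segment into memory, where SHM_LOCK is
	   supported at all; expect EPERM unless the caller is
	   superuser. */
	int
	lock_segment(int shmid)
	{
		if (shmctl(shmid, SHM_LOCK, (struct shmid_ds *) 0) == -1) {
			perror("shmctl(SHM_LOCK)");
			return -1;
		}
		return 0;
	}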

tom@dvnspc1.UUCP (Tom Albrecht) (11/23/88)

In article <22739@cci632.UUCP>, rrw@cci632.UUCP (Rick Wessman) writes:
> Does anyone know if non-swappable shared memory is supported by
> System V? I need to know because some of our applications developers
> want it. Our sister division in California supports it, but I cannot
> find documentation for it in the SVID.
> 
> I would appreciate it if someone would point me in the right
> direction.

In V.3, shmctl supports the SHM_LOCK cmd, which locks the shared
segment in memory.  The documentation says it can only be executed by
a process whose effective user ID is that of the superuser.

SHM_UNLOCK is also available.

-- 
Tom Albrecht
Unisys			UUCP: {sdcrdcf|psuvax1}!burdvax!dvnspc1!tom

crossgl@ingr.UUCP (Gordon Cross) (11/24/88)

In article <22739@cci632.UUCP>, rrw@cci632.UUCP (Rick Wessman) writes:
> Does anyone know if non-swappable shared memory is supported by
> System V? I need to know because some of our applications developers
> want it. Our sister division in California supports it, but I cannot
> find documentation for it in the SVID.

Yes it is.  Try "shmctl(shmid, SHM_LOCK, (struct shmid_ds *) 0)" to lock
a shared memory segment in core.  You must be superuser to do this (for
obvious reasons).


Gordon Cross
Intergraph Corp.  Huntsville, AL
...uunet!ingr!crossgl

SPETTRI%IFIIDG.BITNET@cunyvm.cuny.edu (Memcaraglia Francesco) (09/06/89)

 I have two shared memory segments I would like to get rid of.
 I tried ipcrm and got the expected result: after running ipcs,
 the segments were listed as D (that is, to be destroyed after
 the last detach).  However, after a reboot the two segments are
 there again, and closer examination shows that there are attached
 processes.  I do not understand how I can get rid of these
 segments.  Any suggestions?

cpcahil@virtech.UUCP (Conor P. Cahill) (09/07/89)

In article <20787@adm.BRL.MIL>, SPETTRI%IFIIDG.BITNET@cunyvm.cuny.edu (Memcaraglia Francesco) writes:
>  I have two shared memory segments I would like to get rid of.
>  I tried ipcrm and got the expected result: after running ipcs,
>  the segments were listed as D (that is, to be destroyed after
>  the last detach).  However, after a reboot the two segments are
>  there again, and closer examination shows that there are attached
>  processes.  I do not understand how I can get rid of these
>  segments.  Any suggestions?

The shared memory segments were not STILL there after the reboot; they
must have been recreated following the boot-up.  Have you verified the
creation times to see that they have not changed across the reboot?
If what you are saying really does happen, you probably need to post some
additional information about your hardware and OS, because this is not the
standard behavior.


I have worked on some System V implementations (Concurrent Computer's Xelos,
for example) that had a bug: if a program did not explicitly detach from
the shared memory segment, the attachment count might not be decremented
when the process exited.  When this occurred, the only way to get rid of
the segment was to reboot.

-- 
+-----------------------------------------------------------------------+
| Conor P. Cahill     uunet!virtech!cpcahil      	703-430-9247	!
| Virtual Technologies Inc.,    P. O. Box 876,   Sterling, VA 22170     |
+-----------------------------------------------------------------------+

poser@csli.Stanford.EDU (Bill Poser) (12/11/89)

In attempting to use shared memory for large (hundreds of KB) 
objects, I have run into what seem to be nearly insuperable portability
problems. At first I was optimistic, as the System V shared memory
facilities seem to have spread to most versions of UNIX, without
even any differences in syntax. However, I have discovered that
two crucial parameters differ widely from system to system and
that there appears to be no way to change them other than
rebuilding the kernel, which is not always an option. The two
parameters are the maximum size of a shared memory segment and
the separation between the end of the program's data segment and
the virtual address at which shared memory segments are attached. 
This distance determines the maximum amount of ordinary
(non-shared) memory that a program can (s)brk. 

Am I correct in concluding that one simply cannot use shared memory
portably for large objects, or in programs that may need to allocate
large amounts of ordinary memory dynamically?

jas@postgres.uucp (James Shankland) (12/12/89)

In article <11383@csli.Stanford.EDU> poser@csli.Stanford.EDU (Bill Poser) writes:
>In attempting to use shared memory for large (hundreds of KB) 
>objects, I have run into what seem to be nearly insuperable portability
>problems....  Two crucial parameters differ widely from system to system ...:
>the maximum size of a shared memory segment and the separation between the
>end of the program's data segment and the virtual address at which shared
>memory segments are attached....

UNIX shared memory support remains polymorphically perverse.  Not only
is the whole SysV shmem interface a botch, but the implementations are
flawed.  Your application should certainly be coded so that it can live
with different amounts of shared memory on different platforms, and
with different attachment addresses.  Also, remember that you can
override default segment placement in the shmat() call to get a bigger
sbrk() region.

Though I haven't used it, the SunOS4.x mmap interface seems like a much
more rational approach to shared memory.  Can anyone comment on its
usefulness and its future?  Also, are there incompatibilities between
Sun's mmap and that of other vendors (doesn't Sequent have an mmap
interface, too?), and the unimplemented mmap() described in the 4.2 docs?

jas

gil@banyan.UUCP (Gil Pilz@Eng@Banyan) (12/12/89)

In article <11383@csli.Stanford.EDU> poser@csli.Stanford.EDU (Bill Poser) writes:
>In attempting to use shared memory for large (hundreds of KB) 
>objects, I have run into what seem to be nearly insuperable portability
>problems. 

[stuff removed]

>rebuilding the kernel, which is not always an option. The two
>parameters are the maximum size of a shared memory segment and
>the separation between the end of the program's data segment and
>the virtual address at which shared memory segments are attached. 
>This distance determines the maximum amount of ordinary
>(non-shared) memory that a program can (s)brk. 

In every implementation of shmat(2) that I've seen, you are allowed to
specify where in the virtual memory map you wish to attach the shared
region.  The parameter you speak of only serves as a _default_ in
cases where you don't really care and specify '0' as the attach
address (it might also serve as a lower limit to make sure you don't
screw yourself out of (s)brk space).

The other parameter you mention (maximum size of a shared region) can
usually only be re-configured if you have the ability to rebuild the
kernel.  If this is not an option you may be able to patch the sucker,
but it's really flavor-dependent at that point (the main questions
being (a) whether the routine that creates shared regions simply checks
against this upper limit or whether there are some pre-allocated
structures that depend on the upper limit being what it is, and (b)
whether or not the rest of the virtual memory sub-system can cope with
changes to this size).

"I've got two hundred miles of gray asphalt and lights before I sleep
 and there'll be no warm sheets or welcoming arms to fall into tonight
 I've got two hundred miles of gray asphalt and lights before I sleep
 but I wouldn't trade all your golden tomorrows for one hour of this night"
	- cowboy junkies

Gilbert W. Pilz Jr.       gil@banyan.com

cpcahil@virtech.uucp (Conor P. Cahill) (12/12/89)

In article <11383@csli.Stanford.EDU>, poser@csli.Stanford.EDU (Bill Poser) writes:
> In attempting to use shared memory for large (hundreds of KB) 
> objects, I have run into what seem to be nearly insuperable portability
> problems. At first I was optimistic, as the System V shared memory
> facilities seem to have spread to most versions of UNIX, without
> even any differences in syntax. However, I have discovered that
> two crucial parameters differ widely from system to system and
> that there appears to be no way to change them other than
> rebuilding the kernel, which is not always an option. The two

Modifying the SHMMAX should only require a kernel re-configuration which
should always be an option.

> parameters are the maximum size of a shared memory segment and
> the separation between the end of the program's data segment and
> the virtual address at which shared memory segments are attached. 

This would require a kernel rebuild; however, if you design your software
correctly, there won't be any problems.

An easy mechanism to handle this problem is to do the following:

	get the current sbrk value;
	create the shared memory segment
	attach the segment at the default address.

	if this address is too close to the sbrk value
		detach the segment
		attach the segment at sbrk+min_size_that_you_need

If you don't know how much sbrk room to leave, just pick a big number. The 
only detriment will be that your process will take up a few extra page 
table entries.
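
A sketch of that mechanism in C; HEAP_ROOM here is the "big number"
you pick, and everything else is standard shmat()/shmdt() usage:

	#include <unistd.h>
	#include <sys/types.h>
	#include <sys/ipc.h>
	#include <sys/shm.h>

	#define HEAP_ROOM	(512 * 1024L)	/* sbrk room to leave */

	char *
	attach_clear_of_heap(int shmid)
	{
		char *brkval = (char *) sbrk(0);	/* current break */
		char *addr = (char *) shmat(shmid, (char *) 0, 0);

		if (addr == (char *) -1)
			return addr;			/* attach failed */
		if (addr < brkval + HEAP_ROOM) {	/* crowds the heap? */
			(void) shmdt(addr);
			/* SHM_RND rounds the address down to a SHMLBA
			   boundary so we needn't align it ourselves. */
			addr = (char *) shmat(shmid, brkval + HEAP_ROOM,
			    SHM_RND);
		}
		return addr;	/* (char *) -1 on failure */
	}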

> Am I correct in concluding that one simply cannot use shared memory
> portably for large objects, or in programs that may need to allocate
> large amounts of ordinary memory dynamically?

No, you just need to adjust things a bit.

-- 
+-----------------------------------------------------------------------+
| Conor P. Cahill     uunet!virtech!cpcahil      	703-430-9247	!
| Virtual Technologies Inc.,    P. O. Box 876,   Sterling, VA 22170     |
+-----------------------------------------------------------------------+

chris@mimsy.umd.edu (Chris Torek) (12/12/89)

In article <1989Dec12.005555.20618@virtech.uucp> cpcahil@virtech.uucp
(Conor P. Cahill) writes:
>Modifying the SHMMAX should only require a kernel re-configuration which
>should always be an option.

As I understand it---which is not to say that it is so, for I have
never seen the SysRel% 1, 2, or 3# code itself---the total amount of
shared memory allowed per-system is reserved at boot time, is not
pageable, and is effectively taken away from the rest of the system.
For processes not using it, it is as if some of the machine's memory
had been physically removed.

This would militate against raising SHMMAX arbitrarily....

(No doubt someone will follow up if my understanding is incorrect.)
-----
% `SysRel': a replacement for `System V Release', since the `V' and
  `Release' are both redundant.  Thus, the question is not `is your
  system a System V style system' but rather `is it a SysRel 1
  system', etc.
# SysRel 4 has a much better shared memory system, now that they have
  tacitly acknowledged that not all Berkeley's proposals are bad.  (NB:
  mmap => BSD proposal, Sun implementation.)
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain:	chris@cs.umd.edu	Path:	uunet!mimsy!chris

gwyn@smoke.BRL.MIL (Doug Gwyn) (12/12/89)

In article <20552@pasteur.Berkeley.EDU> jas@postgres.berkeley.edu (James Shankland) writes:
>Though I haven't used it, the SunOS4.x mmap interface seems like a much
>more rational approach to shared memory.  Can anyone comment on its
>usefulness and its future?  Also, are there incompatibilities between
>Sun's mmap and that of other vendors (doesn't Sequent have an mmap
>interface, too?), and the unimplemented mmap() described in the 4.2 docs?

UNIX System V Release 4 provides something like the SunOS approach.
(I don't know how compatible the various mmap() implementations are,
but I can certainly imagine application portability problems similar
to the ones that started this thread.)

Note that not all environments can provide reasonable shared-memory
facilities.  Typically real crunchers have only primitive memory
management hardware.

cpcahil@virtech.uucp (Conor P. Cahill) (12/12/89)

In article <21223@mimsy.umd.edu>, chris@mimsy.umd.edu (Chris Torek) writes:
> In article <1989Dec12.005555.20618@virtech.uucp> cpcahil@virtech.uucp
> (Conor P. Cahill) writes:
> >Modifying the SHMMAX should only require a kernel re-configuration which
> >should always be an option.
> 
> As I understand it---which is not to say that it is so, for I have
> never seen the SysRel% 1, 2, or 3# code itself---the total amount of
> shared memory allowed per-system is reserved at boot time, is not
> pageable, and is effectively taken away from the rest of the system.
> For processes not using it, it is as if some of the machine's memory
> had been physically removed.
> 
> This would militate against raising SHMMAX arbitrarily....

SHMMAX is the maximum size of a single segment, not the total amount
of shared memory system wide, so raising it should not matter.

I don't believe that shared memory is reserved at boot time, because
I worked on a project that implemented shared libraries using shared
memory segments, and we maxed out all the shared memory configuration
options without any detrimental effect on the memory available in
the system.  Another point is that the pageability of shared memory is
controlled by a shmctl() call to lock/unlock a segment in memory.

In a quick test, I am able to create 15 meg of shared memory segments
on a system that has only 12 meg of memory, so there cannot be a boot
time reservation of the memory for shared memory. (At least this is
so under System V/386 Rel 3.2.  I believe this is true for most
implementations).
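
For reference, a sketch of that quick test (the segment size and count
are arbitrary, and it assumes SHMMAX permits one-megabyte segments):

	#include <stdio.h>
	#include <sys/types.h>
	#include <sys/ipc.h>
	#include <sys/shm.h>

	int
	main()
	{
		int i, id[15];

		/* Ask for 15 meg of segments; on a 12 meg machine this
		   could only succeed if the memory is not reserved up
		   front. */
		for (i = 0; i < 15; i++) {
			id[i] = shmget(IPC_PRIVATE, 1024 * 1024,
			    IPC_CREAT | 0600);
			if (id[i] == -1) {
				perror("shmget");
				break;
			}
		}
		printf("created %d one-megabyte segments\n", i);
		while (--i >= 0)	/* clean up after ourselves */
			(void) shmctl(id[i], IPC_RMID,
			    (struct shmid_ds *) 0);
		return 0;
	}
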
-- 
+-----------------------------------------------------------------------+
| Conor P. Cahill     uunet!virtech!cpcahil      	703-430-9247	!
| Virtual Technologies Inc.,    P. O. Box 876,   Sterling, VA 22170     |
+-----------------------------------------------------------------------+

janm@eliot.UUCP (Jan Morales) (12/13/89)

In article <11383@csli.Stanford.EDU> Bill Poser writes:
>Am I correct in concluding that one simply cannot use shared memory
>portably for large objects, or in programs that may need to allocate
>large amounts of ordinary memory dynamically?

What I have found is that most kernels select some seemingly arbitrary
address between the top of the heap and the bottom of the stack when
attaching a shared memory segment, if no address is supplied in the
`shmat' call.  In one case, we ran into the same problem you have
because the shared memory segment was being attached a mere 32K above
the top of the heap.  Since our program malloced more than 32K after
attaching the shared memory segment, the program behaved as if it had
run out of memory because `malloc' (or `sbrk') would bump into the
bottom of the segment.

Our solution was to use the `getrlimit' system call to find out the
maximum address the heap might reach (the highest possible break),
subtract the size of the desired shared memory segment from it, and have
the `shmat' call attach at the resulting address.  On the system in
question, `shmat' was supposed to take the address provided and round it
down to the next page.  This solved the problem on that particular
platform.  I'm sure this is not a universal solution.
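
A sketch of that calculation; the `end' linker symbol stands in for the
data-segment origin, which is only an approximation, so treat the
arithmetic as illustrative rather than exact:

	#include <unistd.h>
	#include <sys/types.h>
	#include <sys/time.h>
	#include <sys/resource.h>
	#include <sys/ipc.h>
	#include <sys/shm.h>

	extern char end;	/* first address past bss; heap starts here */

	char *
	attach_below_heap_limit(int shmid, unsigned long segsize)
	{
		struct rlimit rl;
		char *maxbrk;

		if (getrlimit(RLIMIT_DATA, &rl) == -1)
			return (char *) -1;
		/* Highest break the heap can reach, approximately. */
		maxbrk = &end + rl.rlim_max;
		/* Attach segsize bytes below it; SHM_RND rounds the
		   address down to the next SHMLBA boundary. */
		return (char *) shmat(shmid, maxbrk - segsize, SHM_RND);
	}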

Jan
-- 
{uunet,pyramid}!pyrdc!eliot!janm

boyd@necisa.ho.necisa.oz (Boyd Roberts) (12/13/89)

In article <11383@csli.Stanford.EDU> poser@csli.Stanford.EDU (Bill Poser) writes:
>
>Am I correct in concluding that one simply cannot use shared memory
>portably for large objects, or in programs that may need to allocate
>large amounts of ordinary memory dynamically?

Yes.  I think it's widely acknowledged that the Sys V inter-process
communication hooks are real crocks, particularly shared memory.

Any given shared memory implementation has major problems when it
comes to portability due to the reliance on the underlying architecture
of the machine.  The best you can hope for is a statement that clearly
defines what shared memory services are _guaranteed_.  That way you
can be fairly certain that your code will port well.  But, System V
gives no such assurances.

Anyway, using shared memory is a grody hack.  As someone once said:

    ``Don't diddle the code.  Choose a better algorithm.''


Boyd Roberts			boyd@necisa.ho.necisa.oz.au

``I've got reality backed up on this magtape -- in tar format''

jas@postgres.uucp (James Shankland) (12/14/89)

In article <1210@necisa.ho.necisa.oz> boyd@necisa.ho.necisa.oz (Boyd Roberts) writes:
>Anyway, using shared memory is a grody hack....

This is, to say the least, a highly questionable statement.

jas

hutch@lzaz.ATT.COM (Bob Hutchison) (12/14/89)

From article <21223@mimsy.umd.edu>, by chris@mimsy.umd.edu (Chris Torek):
- In article <1989Dec12.005555.20618@virtech.uucp> cpcahil@virtech.uucp
- (Conor P. Cahill) writes:
->Modifying the SHMMAX should only require a kernel re-configuration which
->should always be an option.
- 
- As I understand it---which is not to say that it is so, for I have
- never seen the SysRel% 1, 2, or 3# code itself---the total amount of
- shared memory allowed per-system is reserved at boot time, is not
- pageable, and is effectively taken away from the rest of the system.
- For processes not using it, it is as if some of the machine's memory
- had been physically removed.
- 
- This would militate against raising SHMMAX arbitrarily....
- 
- (No doubt someone will follow up if my understanding is incorrect.)

[ stuff deleted ]

- In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
- Domain:	chris@cs.umd.edu	Path:	uunet!mimsy!chris

I've looked at the source code for at least SVR3.x, and the shared
memory creation routines allocate regions just as exec(2) allocates
regions.  Memory is not reserved at boot time; in fact it isn't even
allocated when the shared memory segment is created -- its pages are
marked "demand zero" and are allocated as page faults indicate that
the pages are being referenced.  I guess this has changed in SVR4.0.

BTW, the last time I looked, shared memory was not in the part of the
SVID where message queues and semaphores were.  It was in a section
called "optional stuff" or something like that.  Since shared memory
(and especially its limits and config info) is closely tied to the
memory management architecture of the machine, it can't be made
portable very easily.

And, to quote a famous computer scientist...     8v)

- (No doubt someone will follow up if my understanding is incorrect.)

Robert Hutchison
att!lzaz!hutch

stan@squazmo.solbourne.com (Stan Hanks) (12/14/89)

In article <1989Dec12.053453.497@virtech.uucp> cpcahil@virtech.uucp (Conor P. Cahill) writes:
>In article <21223@mimsy.umd.edu>, chris@mimsy.umd.edu (Chris Torek) writes:
>> In article <1989Dec12.005555.20618@virtech.uucp> cpcahil@virtech.uucp
>> (Conor P. Cahill) writes:
>> As I understand it---which is not to say that it is so, for I have
>> never seen the SysRel% 1, 2, or 3# code itself---the total amount of
>> shared memory allowed per-system is reserved at boot time, is not
>> pageable, and is effectively taken away from the rest of the system.
>I don't believe that shared memory is reserved at boot time, because
>I worked on a project that implemented shared libraries using shared
>memory segments, and we maxed out all the shared memory configuration
>options without any detrimental effect on the memory available in
>the system.  Another point is that the pageability of shared memory is
>controlled by a shmctl() call to lock/unlock a segment in memory.

Early System V (i.e. before they got around to having to number the
releases) had the shared memory stuff configured at boot time. The kernel
sort of malloc'd a chunk of memory that was as big as the total of all
the shared memory you could have and mapped your accesses into it
when you shmget'd (shmgot?) it.

Remember though that this was VAX hardware, and that System V didn't
even have the remotest notion of demand paged virtual memory. Hell, they
even just mapped the UNIBUS once at boot time, rather than supporting
dynamic remapping to make DMA device support more rational.

When the BRL 'System-V-under-BSD' emulation code hit the street, people 
had to re-think things. Simple subsumptive reasoning indicated that the 
clear win was to jam as much of System V as possible into BSD rather than 
the opposite. And so it went. The SunOS implementation (well, at least the 
first one I saw source to -- Guy Harris, where are you on this?) used 
dynamically allocated and mapped memory pages. The only thing that gets
allocated at boot time is an array of SHMMNI struct shmid_ds's for use
as the descriptors.

You wanted to create a segment?  OK, fine.  Here it is.  When you free the
segment, the pages go back to being usable as any other page.  Looks like
the original interface.  Works *MOSTLY* like the original interface -- unless 
you count on those pages being tacked down somewhere in kernel memory. 

I'm not aware of any "current" implementations that are done the "old"
way (i.e. permanently mapped kernel memory) but that doesn't mean that
they're not out there. I've scrupulously avoided everything with a *86
in it, ditto all vanilla System V boxes for MANY years, so I'm likely
to be missing something.

Still, I'd think there are some real dangers in developing code that
can't deal with smaller-than-expected segment sizes, limits on the
number of segments you can have, or limits on the total space that can
be occupied by shared memory.  You can either do what Oracle does and
include instructions on how to modify the kernel to increase these
values; or tell the customer to suck rocks; or write adaptable code
that determines what resources it has available and makes the most of them.

Regards,

-- 
Stanley P. Hanks   Science Advisor                    Solbourne Computer, Inc.
Phone:             Corporate: (303) 772-3400           Houston: (713) 964-6705
E-mail:            ...!{boulder,sun,uunet}!stan!stan        stan@solbourne.com 

df@phx.mcd.mot.com (Dale Farnsworth) (12/14/89)

Chris Torek (chris@mimsy.umd.edu) writes:
> 
> As I understand it---which is not to say that it is so, for I have
> never seen the SysRel% 1, 2, or 3# code itself---the total amount of
> shared memory allowed per-system is reserved at boot time, is not
> pageable, and is effectively taken away from the rest of the system.
> For processes not using it, it is as if some of the machine's memory
> had been physically removed.
> 
> This would militate against raising SHMMAX arbitrarily....

I'm glad you started with a disclaimer.  I've seen several System V shared
memory implementations, and *none* of them reserve shared memory at boot
time.  If the system supports paging, the shared memory is paged as well.

-Dale

-- 
Dale Farnsworth

jje@virtech.uucp (Jeremy J. Epstein) (12/16/89)

In article <1989Dec12.005555.20618@virtech.uucp>, cpcahil@virtech.uucp (Conor P. Cahill) writes:
> In article <11383@csli.Stanford.EDU>, poser@csli.Stanford.EDU (Bill Poser) writes:
> > [stuff deleted]  However, I have discovered that
> > two crucial parameters differ widely from system to system and
> > that there appears to be no way to change them other than
> > rebuilding the kernel, which is not always an option.
> 
> Modifying the SHMMAX should only require a kernel re-configuration which
> should always be an option.
Not true, Conor...some systems don't have the C compiler, linker, etc.
that are needed to do a reconfiguration.  Many XENIX systems are that way.
 
> An easy mechanism to handle this problem [leaving enough room for the
> sbrk] is to do the following:
> 
> 	get the current sbrk value;
> 	create the shared memory segment
> 	attach the segment at the default address.
> 
> 	if this address is too close to the sbrk value
> 		detach the segment
> 		attatch the segment at sbrk+min_size_that_you_need
Unfortunately this doesn't work on some machines.  For example, on
HP RISC machines (the 9000/8xx systems), the address you get attached
at in fact identifies the segment.  Thus, no two shared memory segments on
a system will have the same address.  This prevents you from ever
attaching at an address other than the default.  I complained about
this a few years ago, and was pointed to the place in the manual
(presumably on the shmat(2) page) where it warned of this "feature".

I agree with Bill Poser: there is no uniformity to this feature, or
how to use it.

Jeremy Epstein
TRW Systems Division
uunet!virtech!jje
-- 
Jeremy Epstein
TRW Systems Division
2750 Prosperity Avenue
FV10/5010

cpcahil@virtech.uucp (Conor P. Cahill) (12/16/89)

In article <1989Dec15.221201.1003@virtech.uucp>, jje@virtech.uucp (Jeremy J. Epstein) writes:
> In article <1989Dec12.005555.20618@virtech.uucp>, cpcahil@virtech.uucp (Conor P. Cahill) writes:
> > In article <11383@csli.Stanford.EDU>, poser@csli.Stanford.EDU (Bill Poser) writes:
> > > [stuff deleted]  However, I have discovered that
> > > two crucial parameters differ widely from system to system and
> > > that there appears to be no way to change them other than
> > > rebuilding the kernel, which is not always an option.
> > 
> > Modifying the SHMMAX should only require a kernel re-configuration which
> > should always be an option.
> Not true, Conor...some systems don't have C compilers, linkers, etc
> which are needed to do reconfiguration.  Many XENIX systems are that way.

These days, most vendors include a mechanism that allows a reconfiguration
without the development system.  I even worked on a system where they
delivered a shell program that used adb to patch the appropriate variables
in the kernel for configuration purposes, because there was no other way
to do it without the development system.

> > An easy mechanism to handle this problem [leaving enough room for the
> > sbrk] is to do the following:
> > 
> > 	get the current sbrk value;
> > 	create the shared memory segment
> > 	attach the segment at the default address.
> > 
> > 	if this address is too close to the sbrk value
> > 		detach the segment
> > 		attach the segment at sbrk+min_size_that_you_need
> Unfortunately this doesn't work on some machines.  For example, on
> HP RISC machines (the 9000/8xx systems), the address you get attached
> at in fact identifies the segment.  Thus, no two shared memory segments on
> a system will have the same address.  This prevents you from ever
> attaching at an address other than the default.  I complained about
> this a few years ago, and was pointed to the place in the manual
> (presumably on the shmat(2) page) where it warned of this "feature".
		     ^^^^^^^^  You probably mean shmop(2)
> 
> I agree with Bill Poser: there is no uniformity to this feature, or
> how to use it.

I also agree that there is no uniformity, but I think there is a solution 
for most shared memory implementations.  If what you say is true for the
HP machines (and of course I have no reason to doubt it), then HP must
ensure that the default attach location is far enough away from the malloc
region that moving it is never necessary; otherwise they have a broken
implementation.


-- 
+-----------------------------------------------------------------------+
| Conor P. Cahill     uunet!virtech!cpcahil      	703-430-9247	!
| Virtual Technologies Inc.,    P. O. Box 876,   Sterling, VA 22170     |
+-----------------------------------------------------------------------+

guy@auspex.UUCP (Guy Harris) (12/19/89)

>When the BRL 'System-V-under-BSD' emulation code hit the street, people 
>had to re-think things. Simple subsumptive reasoning indicated that the 
>clear win was to jam as much of System V as possible into BSD rather than 
>the opposite. And so it went. The SunOS implemention (well, at least the 
>first one I saw source to -- Guy Harris, where are you on this?) used 
>dynamically allocated and mapped memory pages. The only thing that gets
>allocated at boot time is an array of SHMMNI struct shmid_ds's for use
>as the descriptors.

If you're talking about SunOS 3.x, it didn't allocate shared memory at
boot time as I remember, but it *did* allocate it as wired-down physical
memory, not pageable memory.  SunOS 4.x pages shared memory segments; it
has a VM implementation that bears little, if any, resemblance to BSD's
(but does bear more than a little resemblance to S5R4's, given that
S5R4's is derived from SunOS 4.x's). 

Non-paging System V releases from AT&T allocated it as wired-down
physical memory; paging releases, as I remember (including "System V
Release 2 Version 2" or whatever the paging VAX release was called),
page shared memory.  That didn't stem from jamming S5 into BSD, since
what jamming occurred there was in the other direction....