[comp.unix.wizards] Here's the flame everyone's asking for

ford@kenobi.UUCP (Mike Ditto) (02/29/88)

Posting-Front-End: GNU Emacs 18.41.10 of Fri Oct  2 1987 on kenobi (usg-unix-v)


In article <2009@ho95e.ATT.COM> wcs@ho95e.ATT.COM (Bill.Stewart) writes:

> Ok, I'll flame!  What's wrong with System V shared memory?

Hmm... asking for trouble...  :-)

Actually, I like System V shared memory.  It has many useful features
and I have used it very successfully in several projects.  However, I
do have a few observations about System V IPC in general.

On Berkeley Unix, the primary IPC mechanism (the socket) is very
nicely implemented in a way consistent with the previously existing
I/O facilities.  In particular, it is accessed in the same way as
files and other I/O: with a "file" descriptor.  In fact, the socket
completely replaces the less general pipe mechanism.  A socket
descriptor can be accessed with the "read" and "write" system calls
(although socket-specific calls are also available).  On any
descriptor (file, device, or socket) the fstat() system call can be
used to determine what type it is, but few programs need to know.
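
To make this concrete, here is a small sketch of my own (not anything
lifted from the BSD sources): a routine that copies data treats a socket
exactly like a file, and only bothers with fstat() if it cares what it
was handed.

    /* Sketch: any descriptor -- file, pipe, device, or socket -- can be
     * read with read(2); fstat(2) (here via the modern S_ISSOCK macro)
     * reveals the type on the rare occasion a program cares. */
    #include <sys/types.h>
    #include <sys/stat.h>
    #include <stdio.h>
    #include <unistd.h>

    int copy_to_stdout(int fd)
    {
        struct stat st;
        char buf[512];
        ssize_t n;

        if (fstat(fd, &st) == 0 && S_ISSOCK(st.st_mode))
            fprintf(stderr, "fd %d happens to be a socket\n", fd);

        /* the loop is identical whether fd is a file or a socket */
        while ((n = read(fd, buf, sizeof buf)) > 0)
            write(1, buf, (size_t) n);
        return n == 0 ? 0 : -1;   /* 0 at end-of-file, -1 on error */
    }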

With System V IPC (Shared Memory, Semaphores, and Message Queues)
special system calls are needed not only to create the "ID"s, but also
to access them.  These special access methods are necessary, of
course, but why not allow the normal access methods to work as well?
Why can't you read(2) and write(2) to message queues?  Why can't you
have a named semaphore or shared memory segment?  Why can't you use
fcntl(fd, F_SETFD, arg) to specify whether shared memory should be
inherited by exec(2)'d processes?
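
For contrast, here is a hedged sketch (my own example, with modern
prototypes) of the simplest possible message-queue send; the queue is
named by a key, reached through an "ID", and never touched through a
file descriptor:

    /* Sketch of the System V path: the queue is created or located with
     * msgget(2) and written with msgsnd(2) -- there is no descriptor to
     * read(2), write(2), or fcntl(2). */
    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>
    #include <string.h>

    struct my_msg { long mtype; char mtext[64]; };

    int send_hello(key_t key)
    {
        struct my_msg m;
        int qid = msgget(key, IPC_CREAT | 0666);   /* not open(2) */

        if (qid < 0)
            return -1;
        m.mtype = 1;
        strcpy(m.mtext, "hello");
        return msgsnd(qid, &m, sizeof m.mtext, 0); /* not write(2) */
    }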

If System V IPC had been done "right":

	"/dev/kmem" could be a named shared memory segment, which,
	like all shared memory segments, could be accessed via
	lseek/read/write or mapped into a process's address space.

	IPC objects could have names in the filesystem, and be
	manipulated with normal commands.  You could use "rm" to
	delete a message queue, or "ls" to see which ones exist,
	just like you can see which devices are in /dev.

	You could use these names as arguments to programs, or put them
	in the environment.  For example, consider a multi-user
	conferencing system (like Compuserve "CB") that looked at the
	"CONFCHANNEL" environment variable for the name of a default
	shared memory segment to communicate through (a hypothetical
	sketch of this follows the list).

	The shell could use normal I/O redirection to connect programs
	via IPC.

	Shell scripts could easily use IPC.

	And so on...
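
None of the following exists on any real System V or BSD kernel; it is
purely a hypothetical sketch of what the client side of the "CB" example
might look like if IPC objects had names in the filesystem.  As written
it would just open an ordinary file:

    /* Hypothetical: if the channel were just a name in the filesystem,
     * the client side would be plain open(2)/write(2).  On a real System
     * V kernel this opens an ordinary file, not a shared segment or
     * message queue. */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int say(const char *line)
    {
        const char *chan = getenv("CONFCHANNEL");  /* name from the environment */
        int fd;

        if (chan == NULL)
            return -1;
        fd = open(chan, O_WRONLY);                 /* no msgget/shmget/shmat */
        if (fd < 0)
            return -1;
        if (write(fd, line, strlen(line)) < 0) {
            close(fd);
            return -1;
        }
        return close(fd);
    }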

Not all the IPC operations map directly onto read, write, etc. (what
should reading from a semaphore do?), but it still wouldn't hurt to
give them file descriptors, for the reasons above.  It's no different
from having a line printer device that does nothing useful in reply to
a read() system call.

All the existing capability could have been provided, while giving a
more consistent view of the IPC mechanisms.  BSD Unix allows normal
read/write access to sockets, but provides additional system calls
that allow more detailed and socket-specific control over I/O.  All
the old articles about Unix from Bell Labs in the seventies boasted
about the revolutionary idea of I/O and pipes that look the same as
file access.  And yet AT&T didn't live up to that concept in their IPC
enhancements.

From a practical point of view, it doesn't make much difference.  System
V IPC provides sufficiently powerful facilities to be very useful and
not too difficult to use (once you are familiar with it, which won't
happen from reading the documentation).  I just think it could have been
made more consistent with the Unix philosophy without any loss of
functionality, and it would have opened up some interesting
possibilities like the examples above.

					-=] Ford [=-

"Well, he didn't know what to do, so	(In Real Life:  Mike Ditto)
he decided to look at the government,	ford%kenobi@crash.CTS.COM
to see what they did, and scale it	...!sdcsvax!crash!kenobi!ford
down and run his life that way." -- Laurie Anderson

jgm@K.GP.CS.CMU.EDU (John Myers) (03/01/88)

In article <43@kenobi.UUCP> ford@kenobi.UUCP (Mike Ditto) writes:
>In article <2009@ho95e.ATT.COM> wcs@ho95e.ATT.COM (Bill.Stewart) writes:
[ Justified Missed'em V flaming ]
>On Berkeley Unix, the primary IPC mechanism (the socket) is very
>nicely implemented in a way consistent with the previously existing
>I/O facilities.  In particular, it is accessed in the same way as
>files and other I/O: with a "file" descriptor.

Then why the heck can't you open(2) a BSD unix domain socket?  The
semantics seem pretty obvious. (Create a new socket and connect to
the socket named in the open call.)  Sounds like <10 lines of code to
me.
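
For the record, here is roughly what such an open() would have to do,
written as an ordinary user-level routine; a sketch only, assuming a
stream-style AF_UNIX socket and a path short enough to fit in sun_path:

    /* Sketch: create a socket, connect it to the named socket, and hand
     * back an ordinary descriptor. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <string.h>
    #include <unistd.h>

    int open_unix_socket(const char *path)
    {
        struct sockaddr_un sun;
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        if (fd < 0)
            return -1;
        memset(&sun, 0, sizeof sun);
        sun.sun_family = AF_UNIX;
        strncpy(sun.sun_path, path, sizeof sun.sun_path - 1);
        if (connect(fd, (struct sockaddr *) &sun, sizeof sun) < 0) {
            close(fd);
            return -1;
        }
        return fd;   /* from here on, read(2)/write(2) work as usual */
    }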

Something that would be harder, but would still be incredibly useful,
would be to automatically unlink a socket when the (last) process owning
that socket exits.

-- 
John G. Myers				John.Myers@k.gp.cs.cmu.edu

ford@kenobi.UUCP (Mike Ditto) (03/03/88)

Posting-Front-End: GNU Emacs 18.41.10 of Fri Oct  2 1987 on kenobi (usg-unix-v)


In article <997@PT.CS.CMU.EDU> jgm@K.GP.CS.CMU.EDU (John Myers) writes:

> In article <43@kenobi.UUCP> ford@kenobi.UUCP (Mike Ditto) writes:
> >In article <2009@ho95e.ATT.COM> wcs@ho95e.ATT.COM (Bill.Stewart) writes:
> [ Justified Missed'em V flaming ]
> >On Berkeley Unix, the primary IPC mechanism (the socket) is very
> >nicely implemented in a way consistent with the previously existing
> >I/O facilities.  In particular, it is accessed in the same way as
> >files and other I/O: with a "file" descriptor.
>
> Then why the heck can't you open(2) a BSD unix domain socket?  The
> semantics seem pretty obvious. (Create a new socket and connect to
> the socket named in the open call.)  Sounds like <10 lines of code to
> me.

The main reason that I see is that a Unix domain socket is not really
supposed to show up in the filesystem, and it supposedly doesn't in
more recent BSD releases (4.3?).  I don't think it has ever been clear
whether the "Unix domain" of socket names(addresses) is really
supposed to map into pathnames in the ("open"able) filesystem.  Is it
possible to bind an AF_UNIX socket to "/foo/bar/baz" if there is no
directory "/foo"?  I assume this won't work on 4.2, since it can't
create the "named socket".  But on 4.3 I don't know why it wouldn't
work.  In other words, the "name" that an AF_UNIX socket is bound to
does not need to have any relation to the file system.  You could
probably bind a socket to "/////////".  (I don't know, I haven't been
on a BSD system in quite a while).
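
For reference, the mechanism in question is just bind(2) on a
sockaddr_un.  A sketch from memory (so treat it as such); whether the
path in sun_path must correspond to anything creatable in the
filesystem is exactly the open question above:

    /* Sketch from memory: binding an AF_UNIX socket to a pathname.  The
     * filesystem semantics of sun_path -- whether the directory must
     * exist, whether a node appears, who removes it -- are the issue. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <string.h>
    #include <unistd.h>

    int bind_unix_socket(const char *path)
    {
        struct sockaddr_un sun;
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        if (fd < 0)
            return -1;
        memset(&sun, 0, sizeof sun);
        sun.sun_family = AF_UNIX;
        strncpy(sun.sun_path, path, sizeof sun.sun_path - 1);
        if (bind(fd, (struct sockaddr *) &sun, sizeof sun) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }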

> Something that would be harder, but would still be incredibly useful
> would be to automaticly unlink a socket when the (last) process owning
> that socket exits.

That would be inconsistent with files, which are not unlinked under those
circumstances.  Either the socket should "really" have a name in the
file system (and be openable, etc.) or its address should have nothing
to do with the existence or non-existence of a file by the same name.
Both kinds of sockets could be useful.

					-=] Ford [=-

"Well, he didn't know what to do, so	(In Real Life:  Mike Ditto)
he decided to look at the government,	ford%kenobi@crash.CTS.COM
to see what they did, and scale it	...!sdcsvax!crash!kenobi!ford
down and run his life that way." -- Laurie Anderson

david@dhw68k.cts.com (David H. Wolfskill) (03/03/88)

Some musings from a non-wizard (meaning me, not Mike Ditto):

In article <43@kenobi.UUCP> ford@kenobi.UUCP (Mike Ditto) writes:
]....
]Actually, I like System V shared memory.  It has many useful features
]and I have used it very successfully in several projects.  However, I
]do have a few observations about System V IPC in general.

]On Berkeley Unix, the primary IPC mechanism (the socket) is very
]nicely implemented in a way consistent with the previously existing
]I/O facilities.  In particular, it is accessed in the same way as
]files and other I/O: with a "file" descriptor.  In fact, the socket
]completely replaces the less general pipe mechanism.  A socket
]descriptor can be accessed with the "read" and "write" system calls
](although socket-specific calls are also available).  On any
]descriptor (file, device, or socket) the fstat() system call can be
]used to determine what type it is, but few programs need to know.

]With System V IPC (Shared Memory, Semaphores, and Message Queues)
]special system calls are needed not only to create the "ID"s, but also
]to access them.  These special access methods are necessary, of
]course, but why not allow the normal access methods to work as well?
]Why can't you read(2) and write(2) to message queues?  Why can't you
]have a named semaphore or shared memory segment?  Why can't you use
]fcntl(fd, F_SETFD, arg) to specify whether shared memory should be
]inherited by exec(2)'d processes?

]If System V IPC had been done "right":
[several suggestions about how programs could (for example) access IPC
objects as if they had properties like those of files....]

]All the existing capability could have been provided, while giving a
]more consistent view of the IPC mechanisms.  BSD Unix allows normal
]read/write access to sockets, but provides additional system calls
]that allow more detailed and socket-specific control over I/O.  All
]the old articles about Unix from Bell Labs in the seventies boasted
]about the revolutionary idea of I/O and pipes that look the same as
]file access.  And yet AT&T didn't live up to that concept in their IPC
]enhancements.

]....

Would it be feasible for some (future) implementation of some sort of
UNIX (or POSIX or GNU or...) to provide system calls with the
capabilities of System V IPC, and also to let those IPC objects be
accessed as Mike suggests?

After all -- assuming (!) that this is actually doable -- this would
make a superset of the System V functionality.
(I'm thinking here about the analogy of pipes being implemented in BSD
via sockets, for example.)

Now, whether or not it would be *worth* the effort is another issue --
possibly with a different answer....  (Probably with different answers
for different situations, for that matter....)

My (admittedly limited) experience with UNIX is with a flavor that is
System V with a fair amount of BSD extension, including sockets.

Naturally, I have no access to source for either BSD socket or System V
IPC kernel support, so I'm not in a position to judge whether or not any
of this really would be feasible....

Does this seem reasonable?

david
-- 
David H. Wolfskill
uucp: ...{trwrb,hplabs}!felix!dhw68k!david	InterNet: david@dhw68k.cts.com

allbery@ncoast.UUCP (Brandon Allbery) (03/16/88)

As quoted from <47@kenobi.UUCP> by ford@kenobi.UUCP (Mike Ditto):
+---------------
| In article <997@PT.CS.CMU.EDU> jgm@K.GP.CS.CMU.EDU (John Myers) writes:
| > In article <43@kenobi.UUCP> ford@kenobi.UUCP (Mike Ditto) writes:
| > Then why the heck can't you open(2) a BSD unix domain socket?  The
| > semantics seem pretty obvious. (Create a new socket and connect to
| > the socket named in the open call.)  Sounds like <10 lines of code to
| > me.
| 
| The main reason that I see is that a Unix domain socket is not really
| supposed to show up in the filesystem, and it supposedly doesn't in
+---------------

Ah, but this is *exactly* the same as System V IPC!  Make up your minds:
why is it fine for a BSD socket not to appear in the file namespace, while
System V IPC (in particular, message queues and semaphores) gets flamed for
the very same thing?  (I will not yet concede the point on shared memory:
the Sequent method sounds best to me, but then why does only Sequent use
it?)
-- 
	      Brandon S. Allbery, moderator of comp.sources.misc
       {well!hoptoad,uunet!hnsurg3,cbosgd,sun!mandrill}!ncoast!allbery