[comp.misc] Message based OSs

adams@littlei.UUCP (Robert Adams) (02/12/88)

From article <1447@sugar.UUCP>, by peter@sugar.UUCP (Peter da Silva):
> MINIX isn't terribly real. It's realler than GNU (after all, it's out :->),
> but it's got a small fraction of V7, and it's buggy. Comes from having
> everything handled by messages.

I would like an explanation of this.  It seems like it would be a
wonderful computer architecture discussion because many systems are
moving from procedural based interfaces to message based interfaces
and, if that tends to create code that's inherently buggy, we certainly
have a problem on our hands.

Have any comments on this, Peter?  Anyone else?

	-- Robert Adams
	...!littlei!adams

clay@oravax.UUCP (McFarland) (02/16/88)

In article <227@gandalf.littlei.UUCP> adams@littlei.UUCP (Robert Adams) writes:
>From article <1447@sugar.UUCP>, by peter@sugar.UUCP (Peter da Silva):
>> MINIX isn't terribly real. It's realler than GNU (after all, it's out :->),
>> but it's got a small fraction of V7, and it's buggy. Comes from having
>> everything handled by messages.
>
>I would like an explanation of this.  It seems like it would be a
>wonderful computer architecture discussion because many systems are
>moving from procedural based interfaces to message based interfaces
>and, if that tends to create code that's inherently buggy, we certainly
>have a problem on our hands.
>
>Have any comments on this, Peter?  Anyone else?
>
>	-- Robert Adams
>	...!littlei!adams

Theoretically, message-based architectures are inherently less buggy
than procedure-call architectures; one reason is that control-flow 
bugs are confined to the entity which receives a message. However,
you must really design a message-based system. If you produce a 
message-based system by holding a procedure-call system in your
hands, facing in the direction of Alan Kay, and whispering "message"
(thus magically changing "calls" to "sends") what you have is a kludge.
The kludge will contain all previous bugs + bugs in the message handler
+ bugs resulting from unexpected interactions between the message
structure and the existing procedure structure.
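
To make the distinction concrete, here is a toy C sketch (invented for this
posting, not from any real system) of what the whisper-"message" conversion
produces.  The message structure and the dispatcher are made up; the point is
that the "converted" version has exactly the original control flow underneath,
plus a dispatcher that can now fail in ways the original could not.

    #include <stdio.h>

    /* The original procedure-call interface. */
    static int get_length(const char *s)
    {
        int n = 0;
        while (s[n] != '\0')
            n++;
        return n;
    }

    /* The "converted" interface: pack the arguments into a message,
     * hand it to a dispatcher, unpack the reply.  Control still flows
     * straight through; we have only added a place for the dispatcher
     * itself to be wrong. */
    struct msg {
        int         op;      /* which "service" is requested        */
        const char *arg;     /* argument, squeezed into one slot    */
        int         result;  /* reply, filled in by the dispatcher  */
    };

    #define OP_LENGTH 1

    static void dispatch(struct msg *m)
    {
        switch (m->op) {
        case OP_LENGTH:
            m->result = get_length(m->arg);  /* same old code underneath */
            break;
        default:
            m->result = -1;                  /* a brand-new failure mode */
            break;
        }
    }

    int main(void)
    {
        struct msg m;

        m.op = OP_LENGTH;
        m.arg = "hello";
        dispatch(&m);                        /* "send"    */
        printf("length = %d\n", m.result);   /* "receive" */
        return 0;
    }

A system designed as message-based from the start would instead begin with the
questions this wrapper never asks: who queues the message, who may reply, and
what happens when the reply never arrives.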

You get the same problems if you try to put a classical mainframe
architecture on a chip :-).
 


Clay Brooke-McFarland		    Odyssey Research Associates

--------------------------------------------------------------|
"We didn't expect much, and we got most of what we expected." |
							      |
				- Tom Steel		      |
--------------------------------------------------------------|

terry@wsccs.UUCP (terry) (02/25/88)

In article <227@gandalf.littlei.UUCP>, adams@littlei.UUCP (Robert Adams) writes:
> From article <1447@sugar.UUCP>, by peter@sugar.UUCP (Peter da Silva):
> 
> I would like an explanation of this.  It seems like it [message based os's]
> would be a
> wonderful computer architecture discussion because many systems are
> moving from procedural based interfaces to message based interfaces
> and, if that tends to create code that's inherently buggy, we certainly
> have a problem on our hands.

        I have a comment on this; the one gripe I have with my Amiga is its
message based OS... or rather, the library routine interface to it.  I don't
think that message based architectures are an inherently bad design, but if
you do not support the de facto standard library calls for portable code due
to difficulty of implementation, you're going to have problems.

	In general, when one is porting low-level stuff to an Amiga, it seems
to be a one-way process.  In addition, the only operating system I have found
myself able to port _from_ with ease is VMS... another message-styled system.

	I think that in general, the problems with a message based OS are
threefold:

        1) Portability of existing code is severely restricted.
	2) Size of code increases dramatically.
	3) There is significant system degradation when such a system is
	   _forced_ to provide decent response time to a real-time request.

        Point 1 above may be ripe to be thrown out (I personally disagree
with this; I *like* portable code) due to the general inability of a prior
technology to predict future trends with accuracy.  Simply put, it is just
possible that current code may reflect architecture-dependent structures
which are inherently not portable to newer (presumably better?) architectures
without a great deal of effort.  This points to acknowledged inadequacies in
current architecture (else why develop a newer, penalizing one?).  I, for one,
refuse to port 'normal use' software to the 64,000 processor Goodyear box.  In
addition, I have gigs of stuff I would prefer to have around, but cannot
afford to spend the time to port it.

        Point 2 will probably be resolved _if_ new chip architectures take
steps to implement the primary functions of message passing in hardware,
thereby reducing or eliminating the additional code overhead normally
incurred by an implementation of message passing on current chips (say
the 680x0).
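
        As a rough illustration of the overhead I mean, here is a schematic
C sketch (names and layout invented for this posting, not the Amiga or Minix
code) of the work a software send has to do for every message on a
conventional CPU.  Hardware support would mean some or all of these steps --
the allocation, the copy, the queue manipulation, the scheduler poke -- happen
in microcode or dedicated logic rather than as ordinary instruction sequences.

    #include <stdlib.h>
    #include <string.h>

    struct message {
        struct message *next;      /* queue link                      */
        size_t          length;
        char            body[64];  /* payload, copied on every send   */
    };

    struct port {
        struct message *head;      /* singly linked FIFO of messages  */
        struct message *tail;
        int             receiver_waiting;
    };

    /* Stubs standing in for the real kernel primitives. */
    static void disable_interrupts(void) { }
    static void enable_interrupts(void)  { }
    static void wake_receiver(struct port *p) { (void)p; }

    /* Every step below is work the CPU does in software today. */
    int send(struct port *p, const char *data, size_t len)
    {
        struct message *m;

        m = malloc(sizeof *m);                /* 1: allocate           */
        if (m == NULL)
            return -1;
        if (len > sizeof m->body) {
            free(m);
            return -1;
        }
        memcpy(m->body, data, len);           /* 2: copy the payload   */
        m->length = len;
        m->next = NULL;

        disable_interrupts();                 /* 3: protect the queue  */
        if (p->tail != NULL)
            p->tail->next = m;
        else
            p->head = m;
        p->tail = m;
        if (p->receiver_waiting)
            wake_receiver(p);                 /* 4: poke the scheduler */
        enable_interrupts();
        return 0;
    }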

        Point 3 will probably be resolved the same way as point 2, but I have
grave reservations.  It is well known that current modifications, such as
the multi-processor support in the 'new generation' VAX machines from DEC,
have been specifically implemented so as not to 'clash' with their VMS
operating system.  This would lead one to believe that current VAX
architecture is a reflection of DEC's 'making VMS run better' via hardware
support for the OS.  From personal experience, I can tell you that a
MicroVAX II running VMS is unable to maintain a serial baud rate (no flow
control) at speeds greater than 2400 baud using Digital's SET HOST/DTE
command (a supposedly 'optimized' portion of VMS) out a standard DHV11
controller when ANY processor loading occurs.  There are a number of
workarounds, of course, but they all involve 'unoptimizing' something...
device drivers, I/O calls, etc.  Serial I/O in a multitasking environment
suffers as well on the Amiga.  I have found the message-passing multitasking
kernel to be unable to support a throughput of greater than 2400 baud when
the POSSIBILITY of other tasks exists.  Again, non-optimization will speed
this up.  Absolute optimization, through either obscene priority boosts or
non-highlevel (read: non-portable) modification of low-level drivers, again
returns the system to adequate performance, but at a sacrifice of other
capabilities.  If one grabs the whole machine (or runs at a high priority
and is written in assembly), one can go well in excess of 38400 baud.

If my examples seem limited, note that they are _examples_, not speculation,
and are taken from personal experience.

I think that unless there is a great deal of hardware support, we are not
going to see a great deal of message-based OS's in wide use for any type of
real-time or machine-portable operations.  A message-based OS seems to imply
(to me, at least) the ability to communicate in real time.  This implication
is supported by the 'transputer' hardware architecture, which allows message
passing not only within a processor, but to other processors.  Unless the
country suddenly goes high-speed digital, I do not foresee a bright future in
anything but transaction processing or some other get-it-done-in-a-finite-
but-possibly-longer-than-optimum-period-of-time application.


| Terry Lambert           UUCP: ...!decvax!utah-cs!century!terry              |
| @ Century Software       or : ...utah-cs!uplherc!sp7040!obie!wsccs!terry    |
| SLC, Utah                                                                   |
|                   These opinions are not my companies, but if you find them |
|                   useful, send a $20.00 donation to Brisbane Australia...   |
| 'There are monkey boys in the facility.  Do not be alarmed; you are secure' |

mfr@camcon.uucp (Mike Richardson) (02/25/88)

In article <198@oravax.UUCP>, clay@oravax.UUCP (McFarland) writes:
>
> Theoretically, message-based architectures are inherently less buggy
> than procedure-call architectures; one reason is that control-flow
> bugs are confined to the entity which receives a message. However,
> you must really design a message-based system. If you produce a
> message-based system by holding a procedure-call system in your
> hands, facing in the direction of Alan Kay, and whispering "message"
> (thus magically changing "calls" to "sends") what you have is a kludge.
> The kludge will contain all previous bugs + bugs in the message handler
> + bugs resulting from unexpected interactions between the message
> structure and the existing procedure structure.

Unless the system is programmed in a non-procedural language, such as PROLOG,
it is bound to have a procedure-call architecture; you call a procedure
to get something done for you, be it sending a network message halfway round
the globe or finding the length of a string.

Are you referring to message-passing systems versus shared-memory (or
monitor-based) systems?  Since these can be shown to be functionally
equivalent (each can be simulated using the other; a sketch of one direction
of the simulation follows point (b) below), I would have thought that they
are bug-equivalent as well, theoretically at least.  It seems to me that
message-passing systems have two advantages, however:

(a) The tendency is to wrap up logical groups of system functions (file
system, network interfaces) into separate processes, rather than to lump
them all into the kernel (vis-a-vis UNIX). It is much easier to replace
or augment such systems.

(b) Message passing systems map onto multiprocessor machines better. Use
of (physical) shared memory is all very well, but eventually you run out
of memory bandwidth, which forces a separate memory solution, hence messages.
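
As a minimal sketch of the simulation in one direction -- message passing
built out of nothing but shared memory and a lock -- here is a bounded
mailbox in C.  It uses POSIX threads purely for illustration (they postdate
this discussion), and all the names are invented; the same structure is what
a monitor-based system would give you directly.

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    #define SLOTS  8
    #define MSGLEN 32

    /* Message passing simulated with shared memory: a fixed ring of
     * slots guarded by a mutex and two condition variables. */
    struct mailbox {
        char            slot[SLOTS][MSGLEN];
        int             head, tail, count;
        pthread_mutex_t lock;
        pthread_cond_t  not_full, not_empty;
    };

    static void mbox_send(struct mailbox *mb, const char *text)
    {
        pthread_mutex_lock(&mb->lock);
        while (mb->count == SLOTS)                 /* queue full: block  */
            pthread_cond_wait(&mb->not_full, &mb->lock);
        strncpy(mb->slot[mb->tail], text, MSGLEN - 1);
        mb->slot[mb->tail][MSGLEN - 1] = '\0';
        mb->tail = (mb->tail + 1) % SLOTS;
        mb->count++;
        pthread_cond_signal(&mb->not_empty);
        pthread_mutex_unlock(&mb->lock);
    }

    static void mbox_receive(struct mailbox *mb, char *out)
    {
        pthread_mutex_lock(&mb->lock);
        while (mb->count == 0)                     /* queue empty: block */
            pthread_cond_wait(&mb->not_empty, &mb->lock);
        strcpy(out, mb->slot[mb->head]);
        mb->head = (mb->head + 1) % SLOTS;
        mb->count--;
        pthread_cond_signal(&mb->not_full);
        pthread_mutex_unlock(&mb->lock);
    }

    static struct mailbox mb = {
        { { 0 } }, 0, 0, 0,
        PTHREAD_MUTEX_INITIALIZER,
        PTHREAD_COND_INITIALIZER,
        PTHREAD_COND_INITIALIZER
    };

    static void *producer(void *arg)
    {
        (void)arg;
        mbox_send(&mb, "hello across the 'network'");
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        char buf[MSGLEN];

        pthread_create(&t, NULL, producer, NULL);
        mbox_receive(&mb, buf);                    /* blocks until sent  */
        printf("got: %s\n", buf);
        pthread_join(t, NULL);
        return 0;
    }

Going the other way -- simulating shared memory on top of messages -- is just
as mechanical (a server process owns the memory and answers read and write
requests), which is why I would expect the two to be bug-equivalent in theory.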

These opinions are my own, and not necessarily those of my employer. Hope
this doesn't take so long to cross the Atlantic ......

peter@sugar.UUCP (Peter da Silva) (03/02/88)

In article <227@gandalf.littlei.UUCP>, adams@littlei.UUCP (Robert Adams) writes:
> From article <1447@sugar.UUCP>, by peter@sugar.UUCP (Peter da Silva):
> > MINIX isn't terribly real. It's realler than GNU (after all, it's out :->),
> > but it's got a small fraction of V7, and it's buggy. Comes from having
> > everything handled by messages.

This wasn't a fair statement. I should have said "it comes from having
everything handled by rendezvous". Minix messages are not queued, so
it's sort of a cross between a coroutine-based system like UNIX and
a message based system like AmigaDOS. It has lower overhead on a context
switch than UNIX, but more than AmigaDOS (this is partially due to
the comparative amounts of memory protection in the three systems).
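
To make the distinction concrete, here's a schematic C sketch of the two send
semantics (my own invented primitives, not the Minix or Amiga calls -- on the
Amiga the queued send is exec's PutMsg()).  A rendezvous sender cannot proceed
until the receiver has actually reached its receive, which is exactly how two
parties that send to each other at the wrong moment can end up stuck; a queued
send just appends and keeps going.

    struct msg  { struct msg *next; int data; };
    struct task { int at_receive; };             /* toy process state */
    struct port { struct msg *head, *tail; };

    /* Stubs standing in for the real scheduler operations. */
    static void block_until_receiving(struct task *t) { (void)t; }
    static void hand_over(struct task *t, struct msg *m) { (void)t; (void)m; }

    /* Minix-style rendezvous: no queue, so the sender must wait for the
     * receiver to be ready before anything is transferred.  If the
     * receiver is itself blocked trying to send to us, neither side
     * ever runs again. */
    void send_rendezvous(struct task *receiver, struct msg *m)
    {
        if (!receiver->at_receive)
            block_until_receiving(receiver);     /* sender sleeps here */
        hand_over(receiver, m);
    }

    /* AmigaDOS-style queued send: append to the port and return at
     * once; the sender never waits on the receiver. */
    void send_queued(struct port *p, struct msg *m)
    {
        m->next = 0;
        if (p->tail)
            p->tail->next = m;
        else
            p->head = m;
        p->tail = m;
    }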

I have been informed by a reliable source that the biggest problem is that
the TTY driver gets deadlocked by the disk driver or the file system.

I have had very good luck with a pure message based system: AmigaDOS. The
main thing you have to make sure of on a system like this is that context
switches be efficient, because they happen often.

For example, consider reading a byte from a file when the block is already in
memory. On UNIX you switch to supervisor mode, do some playing around in the
open file table and the in-core inode table, and copy a byte back to user
space. No context switches, no waits.

In AmigaDOS you send a message to the handler process associated with that
file, and receive a message back. Two context switches, and the system gets
to run around the list of active processes.
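
In code the caller can't tell the difference -- both look like a read call --
but the machinery underneath can.  Here is a schematic sketch of the handler
round trip (invented names and structures; on the Amiga the real primitives
are exec's PutMsg(), WaitPort() and GetMsg(), with the request carried in a
DOS packet).  The UNIX path is the single trap behind read(fd, &c, 1).

    struct request {
        int   action;        /* e.g. "read one byte"              */
        char *buffer;        /* where the handler should copy to  */
        long  length;
        long  result;        /* filled in by the handler's reply  */
    };

    struct port { int dummy; };   /* stand-in for a message port */

    /* Stubs standing in for the real message primitives. */
    static void put_msg(struct port *p, struct request *r) { (void)p; (void)r; }
    static void wait_for_reply(struct port *p) { (void)p; }

    long byte_from_handler(struct port *handler, struct port *reply, char *c)
    {
        struct request r;

        r.action = 1;                /* READ                             */
        r.buffer = c;
        r.length = 1;
        r.result = 0;                /* the handler would fill this in   */
        put_msg(handler, &r);        /* context switch #1: handler runs  */
        wait_for_reply(reply);       /* context switch #2: we run again  */
        return r.result;
    }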

The only saving grace here for Amy is that a context switch costs about as much
as a system call in UNIX does. There are no MMU registers to poke around
with, and the next available task can be found at the head of the run queue.
In UNIX you have to save a lot more state, and UNIX generally looks through
all active processes to decide what to run next.

On the other hand UNIX has to force more context switches than AmigaDOS to
enforce fair use of the system, because otherwise programs would tend to
use up all their quanta.

Of course real state-of-the-art message based systems are generally running
over networks or in multiprocessor systems (pretty much the same sort of
problem here), so the cost for sending a message is frequently a LOT higher.
Just how much do RPCs cost?