[comp.arch] Was: Memory utilization & inter-process contention

slackey@bbn.com (Stan Lackey) (09/08/89)

In article <2108@uceng.UC.EDU> dmocsny@uceng.UC.EDU (daniel mocsny) writes:
>In article <45344@bbn.COM>, slackey@bbn.com (Stan Lackey) writes:
>> Because UNIX is a general purpose tool, ie it was made to
>> run on anything, it can't be as good on a specific box as an OS that
>> was written and tuned to run on that specific box and nowhere else.

>Agreed. But why can't the hardware be tuned to run UNIX? Why build
>interesting hardware first, and then almost as an afterthought put
>an OS on it that is going to give users a porting nightmare?

I just knew you were going to say that!  Actually, it looks to me like
a lot of the newer RISCs are being made for C and UNIX.  Most of the
decision criteria I have seen consist of analyzing the performance of
UNIX utilities and the like.

>Another question is this: is *every* difference between VMS and UNIX
>essential to the superior paging performance of VMS on VAX? In other
>words, *how much* UNIX do we have to give up to get that paging
>performance, compared to how much we are giving up?

Paging performance was just an example.  The VMS compilers seem to
produce better code than their UNIX equivalents, at least for FORTRAN.  It's
not that UNIX is stupid, but that any hardware supplier has a vested
interest in maximizing performance.  It makes their hardware look
better, plus they make more money selling their own software.  They
are likely to spend a lot more effort on performance, in particular by
taking advantage of any special features their hardware has.  They
might even write their kernel in assembly language.
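
For a concrete (and purely illustrative) example, take an ordinary
y = a*x + y inner loop.  The comments below describe what a vendor
compiler *might* do with it on its own hardware, not what any
particular compiler actually does:

    /* The kind of inner loop where vendor tuning shows up.  A generic
     * portable compiler will usually emit a straightforward scalar
     * loop; a vendor compiler that knows about its own machine's
     * vector unit or fused multiply-add can do much better from the
     * same source.  (Illustrative only; nothing vendor-specific is
     * actually used here.)
     */
    #include <stddef.h>

    void daxpy(size_t n, double a, const double *x, double *y)
    {
        size_t i;

        for (i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }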

>Ah, but not all incompatibilities provide any clear-cut benefit.
>Indeed, I think the opposite situation is more common. Consider the
>proprietary minicomputer market. Can any truly disinterested observer
>claim that all the fragmentation there leads to any significant
>functional differences? I'm certain some exist, but in the real world
>can they really justify the massive loss to users caused by reduced
>software availability, porting problems, heterogeneous network
>nightmares, and so on?

I understand your complaint.  I am not saying you are right or wrong,
but I think I know how it got that way:

If your product is not differentiated in some useful way, the only way
you're going to sell it is through price wars, and you end up with
terrible margins.  (Note that a thick catalog of third-party software
is a form of product differentiation.)

Suppose instead you choose to go for a functionality improvement for
product differentiation, like maybe you decide to make a vector
workstation.  Clearly you can't use the UNIX compilers, debuggers,
etc.  because they don't support vectors.  So you need to supply your
own.  Unfortunately, others making vector workstations concurrently
will make their own, and the architectures will be different because
there will be no communication between these soon-to-be-competitors.
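
Here is a rough sketch of what that fragmentation looks like at the
source level.  The vendor names and directives in it are invented for
illustration, not drawn from any real product:

    /* Sketch of how vendor-specific vector extensions fragment source
     * code.  The two directives shown in comments are made up for
     * this example; they are left commented out so the function stays
     * strictly portable and runs everywhere as an ordinary scalar
     * loop.  The trouble is that code tuned with vendor A's hints
     * means nothing to vendor B's compiler, and vice versa.
     */
    double dot(int n, const double *x, const double *y)
    {
        double sum = 0.0;
        int i;

        /* #pragma vendorA vector            -- hypothetical vendor A hint */
        /* #pragma vendorB vect_loop(len=64) -- hypothetical vendor B hint */
        for (i = 0; i < n; i++)
            sum += x[i] * y[i];
        return sum;
    }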

Alternatively, you could try getting agreement among all interested
companies on what the new vector architecture will look like, getting
support into the compilers in a standard way, and then producing the
machine.  However, it will take much
more time, and when it comes out, it will be surrounded by everyone
else's undifferentiated vector workstations, and we're back to where
we started.

It would surely be a great thing [for users] if all machines were
compatible, but due to business reasons like these they never will be.
I think that UNIX has done a terrific job of uniting all these vastly
dissimilar machines, given the sheer difficulty of it.  I'm sure that
a lot of the internals of UNIX could be sped up to some extent in a
compatible way, but I'll bet the suppliers are facing long lists of
requested functionality improvements too.
-Stan