[comp.sys.apollo] DN 10000

mathieu@ists.yorku.ca (Pierre Mathieu) (12/06/88)

We are considering buying an Apollo DN10000 workstation.
Would anyone out there have anything enlightening to say
about these machines, for instance what are their advantages
over Sun Products? If you have
purchased this new product, has it lived up to your 
expectations? Has it measured up to Apollo's claims?
	Any comments will be greatly appreciated,

Pierre Mathieu                                 mathieu@ists.yorku.ca
Institute for Space and Terrestrial Science    mathieu@yunexus.yorku.ca
Center for Research in Exp. Space Science
York University, Ontario, Canada.

moj@tatti.utu.fi (Matti Jokinen) (12/19/88)

> We are considering buying an Apollo DN10000 workstation.
> Would anyone out there have anything enlightening to say
> about these machines, for instance what are their advantages
> over Sun Products?

Apollo 10000 is primarily a powerful single-user number cruncher;
Sun 4 is considerably less specialized.  In particular, Apollo 10000
is not a good time-sharing system for several reasons.  First, the
number of processes is limited to 64 (the limit will be raised to 128
in SR10.1, but that is still not very much).  Second, each process
reserves about 5 megabytes of disk space for swapping; thus
64 processes would use more than 300 MB.  Third, there are no disk
quotas.
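
The swap-space arithmetic above is easy to check.  Here is a minimal
sketch (the ~5 MB/process figure and the 64/128 process limits are the
ones reported in this post; they are not official numbers):

```python
# Rough swap-space estimate for a fully loaded system, using the
# per-process reservation quoted above (~5 MB per process).
MB_PER_PROCESS = 5
PROCESS_LIMIT_SR10 = 64     # current limit
PROCESS_LIMIT_SR10_1 = 128  # planned limit in SR10.1

def swap_needed(processes, mb_per_process=MB_PER_PROCESS):
    """Disk space (in MB) reserved for swapping by this many processes."""
    return processes * mb_per_process

print(swap_needed(PROCESS_LIMIT_SR10))    # 320 MB at the 64-process limit
print(swap_needed(PROCESS_LIMIT_SR10_1))  # 640 MB at the 128-process limit
```

So even the planned 128-process limit would roughly double the swap
reservation, not shrink it, unless the per-process figure drops.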

Although the speed of arithmetic operations is impressive and other
instructions are pretty fast too, compilers are surprisingly slow.
This makes the system less suitable for program development.

> One of the things that is slowing our decision down though
> is the lack of information we have so far on whether or not
> we can set up diskless workstations on an Apollo controlled
> network.

It should be possible according to the manuals.  An executable file
can contain code for two different processors (M680xx and PRISM).
I don't know how well it works in practice.

oj@apollo.COM (Ellis Oliver Jones) (12/21/88)

In article <146@tatti.utu.fi> moj@tatti.utu.fi (Matti Jokinen) writes:
>  ...each process
>reserves about 5 megabytes of disk space for swapping; thus
>64 processes would use more than 300 MB.  

We're working on this.  The amount of swap space
per process will drop quite a bit (dare I say
"dramatically?") when we finish fixing it.

>Although the speed of arithmetic operations is impressive and other
>instructions are pretty fast too, compilers are surprisingly slow.

If you disable optimization, the prism compilers
run substantially faster.  The unoptimized code
is also easier to understand with the "dde" debugger (optimization
often changes the order of execution to improve runtime performance).

   In /bin/cc, leave off the -O option.
   In /com/cc, specify the -opt 0 option.
   Likewise for ftn and pas.

It also helps compile performance if
you omit the generation of expanded listings.

It makes sense to disable optimization when debugging your logic.
Obviously, when your program is debugged and ready for
production, it's a good idea to compile it with optimization
enabled.

I just got hold of the next round of dripping-wet new prism compilers.  
Their compile-time performance is much improved;  the slow part
was the compiler's search for optimal object-code sequences.  Ordinary 
program optimization techniques (find the hotspots and recode 
them to be fast) have worked wonders. (You don't want the new 
compilers just yet, I promise you;  let them get through testing first!)

They're so much faster that they confused me:  I wasted some time 
making sure somebody hadn't managed to turn off optimization by 
mistake in our build scripts.

With any luck this good stuff will be available on the sr10.1.p release
along with the 128-process limit, but I'm not in a position to commit to that.

>> One of the things that is slowing our decision down though
>> is the lack of information we have so far on whether or not
>> we can set up diskless workstations on an Apollo controlled
>> network.

As far as a network of 68K workstations goes, the answer is
"Yes, Diskless Workstations Work."  We do this all the time
here, for various reasons.  Notice that the "mother" server node
and the "daughter" diskless node have to be on the same
local area network (token ring, ethernet,...)--you can't boot
diskless over a network bridge.  The mother must also contain
the /sau<n> directories needed by all daughters.  SR10[.x]
installation procedures ask you the necessary questions
to let you set this up if you need to.

We put lots of time and money into making sure all this works well.
"Yes" is the answer.  

As far as using a DN100x0 for a server is concerned, the
answer is also "Yes, Diskless Workstations Work."  However,
during installation you must make sure that the DN100x0 
gets both kinds of code--prism and m68k.  We're putting time
and money into making sure this installation works easily, right now.

>An executable file
>can contain code for two different processors (M680xx and PRISM).
 
Correct, there's a new file type called "cmpexe" (for "compound executable").
It looks sort of like a Un*x "ar" file, in that it can contain
several different tagged members.  Most cmpexe files
contain members tagged m68k and a88k, for Moto and Prism executables,
respectively.  There's an "xar" utility to make and maintain these
files.
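
The cmpexe idea (one container holding a separately tagged code image
per processor, with the right one picked at exec time) can be sketched
in a few lines.  This is a conceptual model only, not the real
Domain/OS on-disk format; the tag names follow the post above:

```python
# Conceptual model of a "compound executable": a container holding
# one code image per processor tag, like members in an "ar" archive.
# NOT the actual Domain/OS cmpexe layout -- just an illustration.

class CompoundExecutable:
    def __init__(self):
        self.members = {}  # architecture tag -> code image (bytes)

    def add_member(self, tag, image):
        """Store a code image under an architecture tag ('m68k', 'a88k')."""
        self.members[tag] = image

    def select(self, host_arch):
        """At exec time the loader picks the member matching the host CPU."""
        if host_arch not in self.members:
            raise KeyError("no code for architecture %r" % host_arch)
        return self.members[host_arch]

exe = CompoundExecutable()
exe.add_member("m68k", b"moto-image")   # placeholder Moto code
exe.add_member("a88k", b"prism-image")  # placeholder Prism code
moto_code = exe.select("m68k")
```

A node of either type can run the same file; the loader just selects
its own member, which is why the DSEE trick below works.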

Heterogeneous distributed DSEE (Domain Software 
Engineering Environment) building--

    DSEE> set builder //prism1 //prism2 //prism3 //moto1 //moto2 //moto3 

is wonderful.  You use cmpexes for all your tools, and it just works.  Some
of the compiles run on Prisms, and others on M68K nodes, and nobody
cares which.

/Ollie Jones (speaking for myself, not necessarily for Apollo Computer, Inc.)
             (yes, I like working with DN100x0 nodes!)
             (no, I can't get one for my office...I wish...)