[net.unix-wizards] Seeking a Development Environment

jeff@lpi.UUCP (09/19/86)

At LPI, we are always investigating new approaches to our development
environment.  The PROPOSITION section below has a specific scenario 
that we'd like information on, but general suggestions are also welcome.  
I will post a summary of responses if there is interest.

BACKGROUND:  LPI is a quickly growing software company with a family of
compilers.  We are looking at options for our development system.  The 
system must be easily expandable to support planned growth.  Our ideal 
main system would be a 68k UNIX box.  Since we have to support at least 
30 to 60 users, we need a lot of horsepower.  We also need a central 
file repository that all have access to.

PROPOSITION:  Among the many machines we've had experience with is
the Sun workstation.  It would be too expensive to have to buy a Sun for
everyone in house; furthermore, the incremental costs as we grow would
be too high.  However, someone suggested the following "system":

- A big central Sun fileserver with lots of disk for the central
  repository of files.  (with 8 or 12Mb of RAM)
- One Sun node for every five or six users. (all Sun 3s)

Users would be tied to a node with a conventional terminal.  

Everyone we've talked to has a "one Sun/one user" system.  Has anyone
tried using a network of Suns as we suggest?  What sort of problems 
are we likely to find?  Would the fileserver be able to support 30 to
60 users?  What happens when we double in size?  What about the
efficiency of the nodes when more than one person is on them?  How
much disk should be on the multi-user nodes?  Should they be diskless,
given the central fileserver?  How much do you lose (use of good Sun
software, general productivity, response, etc.) by having a conventional 
terminal instead of one of the graphic screens?  What is the capacity 
of a Sun 3 to support several users?  Does anyone have a working 
example of this sort of system?

This is a real problem.  Cost, speed, access to a large set of central
disks, and expandability are really important.  A 68000 Unix box is a 
strong requirement.  (Thus a VAX, and to a lesser extent Apollo nodes, 
for example, are not strong possibilities.)  

Any suggestions are gratefully accepted.  Thanks in advance.
-- 
| Jeff Diewald  Language Processors, Inc. (LPI)	   
| 400-1 Totten Pond Road, Waltham, Mass.  02154	    
| UUCP: ...{linus|harvard}!axiom!lpi!jeff         

elg@usl.UUCP (Eric Lee Green) (09/22/86)

In article <174@lpi.UUCP> jeff@lpi.UUCP writes:
>
>BACKGROUND:  LPI is a quickly growing software company with a family of
>compilers.  We are looking at options for our development system.  The 
>system must be easily expandable to support planned growth.  Our ideal 
>main system would be a 68k UNIX box.  Since we have to support at least 
>30 to 60 users, we need a lot of horsepower.  We also need a central 
>file repository that all have access to.

I've heard of a company called Alpha Microsystems that provides
something like that. I've heard of networks of AM machines that could
handle thousands of users... However, I'm not really familiar with
either the company or its products; maybe someone on the net can fill
both of us in on the details. I'm pretty sure it runs System V and is
mainly intended as an alternative to a mainframe for small to medium
sized companies, not certain what kind of programming environment it
offers (since it's aimed mostly at things like accounting and
inventory).

>- A big central Sun fileserver with lots of disk for the central
>  repository of files.  (with 8 or 12Mb of RAM)
>- One Sun node for every five or six users. (all Sun 3s)

From what I've heard, you can only put 3 or 4 users on a Sun 3 before
it starts degrading quite ungracefully. Again, I'm sure we'll hear
more from the denizens of net.unix (I saw my first Unix machine only a
year ago, so obviously I'm not a Unix guru -- yet). I'm sure that cc
and make and emacs won't care that you're not running sunwindows on
the terminals.

>| UUCP: ...{linus|harvard}!axiom!lpi!jeff         
-- 

      Eric Green {akgua,ut-sally}!usl!elg
        (Snail Mail P.O. Box 92191, Lafayette, LA 70509)

" In the beginning the Universe was created. This has made a lot of
 people very angry and been widely regarded as a bad move."

dave@dlb.UUCP (Dave Buck) (09/25/86)

Arete Systems, San Jose, Ca, has a 68000 or 68020-based computer system
which can have 1-4 68000s or 68020s as the main processors, up to 16Mb
of memory, up to 11 separate comm controllers with 8 async ports per
controller (expansion to more than 88 terminals may be announced soon,
I hear), and allows a large variety of tape and discs to be attached.
I personally saw one of their systems run 120+ users for a Govt bid demo,
and we have one in house for our main development machine.  We like it.
-- 
Dave Buck	(408)972-2825	dave@dlb.BUCK.COM, {amdahl,plx}!dlb.UUCP!dave
D.L.Buck&Assoc.,Inc.	6920 Santa Teresa Blvd.		San Jose, Calif.95119

hwe@lanl.ARPA (Skip Egdorf) (09/27/86)

> In article <174@lpi.UUCP> jeff@lpi.UUCP writes:
> >
> >BACKGROUND:  LPI is a quickly growing software company with a family of
> >compilers.  We are looking at options for our development system.  The 
> >system must be easily expandable to support planned growth.  Our ideal 
> >main system would be a 68k UNIX box.  Since we have to support at least 
> >30 to 60 users, we need a lot of horsepower.  We also need a central 
> >file repository that all have access to.
>  ...
> >- A big central Sun fileserver with lots of disk for the central
> >  repository of files.  (with 8 or 12Mb of RAM)
> >- One Sun node for every five or six users. (all Sun 3s)
> 
> From what I've heard, you can only put 3 or 4 users on a Sun 3 before
> it starts degrading quite ungracefully.
>   ...

A few years ago, I replaced a VAX-750 4.2BSD system [8MB, ABLE DHDM,
2 Emulex Massbus controllers with Fuji 2294s and CDC 9766], with a
Sun 2/120 FS [Sun 1.6, 2MB, Systech 1600, Xylogics 450, 2 Fuji 2322s].

Despite those who said it couldn't be done, both the VAX and the Sun supported
12 VT-100s logged in, with 2-3 active at one time.
The system load was almost all RTI Ingres (Version 2.0).

The users didn't complain (they would have had the Sun been slower
than the VAX).
The only real benchmark that I have is that to load one database from
text files (including all the 'modify to isam on...' stuff) took around
6 hours on the 8MB, two disk controller VAX, and took 3 hours 10 minutes
on the 2MB Sun 2.

I am sure that this difference was due to 4.3-type performance enhancements
getting into the Sun OS, rather than a 2-1 performance factor of 68010s to
VAX 750s.

From this experience, I am confident that to replace a VAX with a TIMESHARING
Sun is very possible.
A Sun 2 is every bit the equal of a VAX 750.
If you have a VAX 780 with 32 users and a load average of 2-3, a Sun 3 should
handle you nicely.
Looking over my Sun3/75's screen, I seem to have enough windows to account for
12 logged-in VT-100s. Of these, 4 are 'active' in some way.
Don't forget that one reason these workstations are popular is that
my productivity is not the same as when seated at a VT-100.
This is due to my ability to burn more computing power on my behalf.
That computing power CAN be burned by a single VT-100 if the job requires...

				Skip Egdorf

These opinions are my own, not Los Alamos National Laboratory's.

I don't work for Sun. I don't own any Sun stock.

bzs@BU-CS.BU.EDU (Barry Shein) (09/27/86)

From: "Jeff Diewald (...!linus!axiom!lpi!jeff)" <jeff%lpi.uucp@BRL.ARPA>
>Everyone we've talked to has a "one Sun/one user" system.  Has anyone
>tried using a network of Suns as we suggest?  What sort of problems 
>are we likely to find?  Would the fileserver be able to support 30 to
>60 users?  What happens when we double in size?  

BU-CS.BU.EDU is now 3 SUN3/180 systems, each with 8MB, 16 terminal
ports (std SUN/MTI Mux), two Eagles (one tape, one with two ethernet
boards, two FPAs soon.) The others are (rarely) known as BUCSD.BU.EDU
and BUCSE.BU.EDU.

We typically have 12-20 users per system, usually around 40 across the
three (note, there can be more users than tty ports because we always
have a few, like me, who come in via rlogin or telnet.)

We're happy with the arrangement, it's the primary system for Computer
Science Research at Boston University.

All disks are cross-mounted via NFS and uniquely named (/usr1.../usr13.)
It doesn't matter to a user which system s/he logs into, s/he'll always
have the same home directory and the same view of the system.

We are running one ND SUN3/50 and one SUN3/75 with a shoe-box (71MB SCSI),
we're about to add two more ND SUN3/50s, I wouldn't want to push it too
far with all the interactive users but I suspect two to four (two on each
disk) per system will be fine for a total of six, of course when someone
gets a diskless node we lose a user so we could probably go further, or,
more likely, turn one of the three 180s into a file-server only, whatever.

The biggest problem to date is that if one of the systems goes down
unexpectedly the other two tend to hang because there will usually
be at least one user on each sys whose files are on the one that went
down. No big deal, either it's right back up or a little edit of fstab
and a re-boot (and an apology to affected users) fixes it all up. The
systems are reliable enough that this has not in fact been much of a
problem.

For a while we ran with 4MB awaiting a memory expansion board; don't
do it. 8MB is fine, 4MB can be awful, but memory is cheap. I'd like to
put some more on, though it's hardly pressing right now.

If I were to do it today I would probably start with SUN3/280s, the
4MIP version, but this is really quite adequate, I'm an iron pig.

As I've said before, feel free to 'f @bu-cs.bu.edu', @bucsd.bu.edu,
@bucse.bu.edu [192.12.185.8, it's not in the htable yet] if you have
arpanet access, our finger daemon prints a cross between finger and 'w'.

Notes: Mail always goes to USER@BU-CS.BU.EDU and always appears to be
originating from there. We keep multiple copies of most binaries local
on each system as I believe this reduces NFS overhead a bit. One nice
thing is that if one system gets in trouble (or just needs a major
update) it's trivial for us to boot one server diskless off the other,
the disks being at that point accessible but un-mounted.

The nicest thing is that, as expected, it got the faculty started with
a (good) vanilla 4.2bsd system (which they wanted) and gives them time
to (easily) grow into their own workstations as grants etc come
through, thus promoting a reasonably uniform environment.

	-Barry Shein, Boston University

bzs@BU-CS.BU.EDU (Barry Shein) (09/27/86)

>From what I've heard, you can only put 3 or 4 users on a Sun 3 before
>it starts degrading quite ungracefully.

You hang around with the wrong people. We put upwards of 16-20
on our SUN3/180s and it's fine, I'll admit that they're not all
active at once but certainly more than 3-4 usually are. Can't
wait till the SUN3/280s come in, I'm expecting close to double
that, we'll see.

	-Barry Shein, Boston University

giebelhaus@umn-cs.arpa (09/30/86)

My experience is that Apollo is much better for professional software
development and Sun is better for hacking.  I haven't had enough experience
with any other UNIX workstations to say anything about the others.

I find Suns are much harder to maintain than the Apollos (I have a five 
page report on it if you are interested).  If you have a bunch of cheap
labor (like at a university) this may not be a problem, but it is a real
problem for me.  I find that Apollo's beta releases are better than 
Sun's standard releases.  Other things such as much of the network hanging
when a server node goes down until you edit fstab and reboot really bothers
me.  You can get NFS for the Apollo, but I don't believe that Apollo wants
to release it because of some reliability problems like the one just
mentioned and the lack of file locking.  Apollo is putting their resources
in RFS instead.

I have a hard time getting software support from Sun.  I am not a great
UNIX wizard, yet, and I surely don't understand the internals of UNIX.
If I really got into the guts of the stuff, Sun might be easier since
the lower layers of SunOS are closer to bsd than DOMAIN/IX.  Still, every
module of the SunOS has been modified.  From SunWindows to Yellow Pages,
the operating system is different than bsd.  

Both Apollo and Sun are coming out with some real high power nodes that
will support many users.  I wouldn't want to try to sell a VAX 750 after
they do.

Prices of the two systems are about the same.  With Apollo, you don't
need the server (I run every other node diskless with very good results).
When you take the cost of the server into account, it really makes a 
difference.  

Another thing that makes me worry about buying Sun is that they don't
seem interested in "Open Systems" like they used to be.  The UNIX source
was free and relatively easy to get from Apollo.  I understand the SunOS
source costs a bundle and even then they don't like to give it out 
(haven't checked it out, though).  Much of the SunOS is getting proprietary
so I don't know about it becoming a standard.  Also, they haven't let
the IEEE committee for UNIX look at their stuff.  To me, Apollo looks
much more open.

We find it a hard and somewhat religious choice between the two.  If you
have any questions, I would be happy to help.  I wouldn't want anyone
to go through the problems I did.

guy@sun.uucp (Guy Harris) (10/15/86)

(Comments on statements in the original article, rather than on the reply,
are in parentheses.  We haven't received the original article yet; USENET is
a datagram service, which is why the "notes" idea of linking related
articles together, while a nice idea, is not a panacea.)

> >Other things such as much of the network hanging
> >when a server node goes down until you edit fstab and reboot really bothers
> >me.  
> 
> Why did you have to edit fstab, unless to bring up a replacement
> file server? I suppose you would have to if you wanted to
> have another server provide the /usr/bin /bin and /ucb/bin directories.
> Or the swapping partitions (this is ND, not NFS by the way).

If you're getting most of your programs (/bin, /usr/bin, etc.), a diskless
machine will probably not be very happy if the server goes down, unless 1)
it has a huge (directory entry, inode, disk block) cache, has all the data
it needs locally, and doesn't time out entries in the cache, or 2) can
automatically switch to a backup server, regardless of what OS it's running.

If he's talking about other file systems he's mounted, he can mount them
"soft", in which case attempts to access those file systems when the system
supplying them is down will time out rather than retrying until the server
responds.  You pays your money and you takes your choice:

	You can get "guaranteed" service from a server, in the sense that
	the client will continue indefinitely to retry requests; however,
	this means all processes on whose behalf those requests are made
	will hang until the server comes back up.

Or

	You can tell the client to give up after some number of retries,
	so that it won't hang indefinitely; however, this means any
	process trying to access a file on a dead server will get I/O
	errors.

(In SunOS 3.2, you can turn on the "interruptible I/O" flag for hard mounts;
in this case, you can, at least, abort the program trying to access that
file.)

> >You can get NFS for the Apollo
> 
> I have heard from some Apollo people that they will NEVER support NFS.
> Also, I believe they have demonstrated some limited type of NFS remote
> mount, but this is not the same thing.

What Apollo has is client routines that plug into their system using their
object-oriented file system, coupled with server processes that run (In user
mode, I believe) on other machines.  As far as I know, they do not use NFS
protocols for this.

> > but I don't believe that Apollo wants
> >to release it...

(I was a bit confused by his comment here; if Apollo hasn't released this
alleged NFS implementation, how can you get it?)

> Re: File locking. I would suppose that NFS file locking will be added 
> eventually.

"Fixed in 3.2".  The record locking "fcntl" calls that appeared in System V,
Release 2, Version X (for some value of X dependent on which machine you're
talking about) are supported, as well as the "/usr/group" "lockf" routine
built on top of them.

> But I have a dumb question. Unix didn't have file locking
> for years, yet people got around it by either creating another `lock'
> file and testing for the existence of it, or by using a specialized 
> device driver to provide synchronization of different processes.
> Why is file locking the BIG DEAL? It seems to me that all of the vendors
> who have their own proprietary scheme knock NFS because of this, yet
> Unix has been lacking in the `niceties' of file locking for years.
> Is this just Sour grapes?

If you are frequently locking and unlocking objects, the use of a lock file
does hurt performance.  (FYI: unless you have the 3-argument "open" that
first appeared in S3, and was picked up by 4.2BSD, you should not use file
creation as a locking mechanism unless processes running with super-user
privileges don't have to do locking.  Traditionally, it was done by creating
a lock file and making links to that file in order to lock something.)
Adding a special lock device driver is not conceptually any different from
adding a special lock system call; the only difference is the way the
locking code plugs into the system.

> >Still, every module of the SunOS has been modified.  From SunWindows
> >to Yellow Pages, the operating system is different than bsd.

(Well, yeah; for one thing, it has to run on machines that aren't VAXes.  For
another, if you want to put something like NFS into the system in a clean
way, you do have to reorganize the system code quite a bit.  And as for
SunWindows, the kernel changes for it are mostly additions, rather than
changes; would you rather not have a window system?

This is only completely true in the trivial sense; yes, we did add our own
SCCS headers to every UNIX source module.  Lots of source modules, even in
the kernel, are essentially unchanged.)

> You cannot buy a board that comes with a Unix device driver and install it
> in an Apollo.

To be fair, you can't necessarily do this with a Sun, either.  The notion of
a portable device driver is about as valid as the notion of a portable
kernel or a portable C compiler; yes, a lot of the code goes over without
much change, but when you have to deal directly with the hardware you have
to replace a lot of code with your own code.

> The data structures (as normally documented in /usr/include) are not
> the same.  This is useful, not only with device drivers, but with programs
> like ps(1) that read the internals of the system memory (using /dev/kmem)
> to find out what is going on inside. I have taken courses on the internals
> of Unix, and programs that examine the file system, user areas, etc.
> ARE useful. I would guess that the public domain program `top' won't
> run on an Apollo very easily.

Again, to be fair, nothing is "documented" in "/usr/include".  There is no
published interface to the system internals; they are subject to change
without notice.  Yes, those programs are useful, but there is no reason to
guarantee that they will work without change in all releases of an OS.
Sometimes the cost of not changing a system data structure far outweighs any
possible benefits leaving it alone could bring.

> They even provide programs that make it easier to debug (like adbgen).

(Plug warning) Or "kadb", in 3.2.

> >Prices of the two systems are about the same.  With Apollo, you don't
> >need the server (I run every other node diskless with very good results).
> 
> Are you saying that Sun's need a server, while Apollo's don't?
> I don't understand.

Perhaps by "every other node", he means that an even-numbered node is both
server and workstation, and an odd-numbered node is a client of node N-1
(you can tell I've been a C programmer for too long, I'm calling the first
node node 0).

You can set up a Sun with a keyboard and monitor as a server, so you can do
the same thing with Suns.  We don't run that way here, and I've never seen
that kind of setup with Apollos, so I can't say how well a Sun server would
work as a workstation compared to an Apollo serving the same dual role, or
how big the administrative burden would be for the users of the two
machines, or how much disk space you'd have to add to the dual-use machines.

> >I understand the SunOS
> >source costs a bundle and even then they don't like to give it out 
> >(haven't checked it out, though).

There is some stuff we don't provide.  Most of the source is available,
although I don't know what our policies are on it.

> >Much of the SunOS is getting proprietary
> >so I don't know about it becoming a standard.  Also, they haven't let
> >the IEEE committee for UNIX look at their stuff.  To me, Apollo looks
> >much more open.
> 
> 	This is a very strange comment.

It is not only strange, but uninformed.  There are several people from Sun's
system software group working with the IEEE committee.  We have made joint
proposals with HP on terminal driver and reliable signals issues.  We might
not let them look at our source code, but some of them may work for
vendors that don't have UNIX source licenses, and AT&T wouldn't *let* us let
them look at it.  Does the person who makes this claim have examples of
cases where we *didn't* let the 1003.1 committee look at our stuff?

I'm also curious what his definition of "proprietary" is.  We're not going
to sue somebody who sticks, say, a "getdirentries" system call into their
system based on our manual page.  Arguably one of the biggest changes we've
made to the UNIX internals is the "vnode" architecture for supporting
multiple file system types; we provide this to anybody who buys NFS.

> 	And besides, there is a BIG difference in the
> 	interface to a system and the internal implementation.

AMEN.  The IEEE committee is *not* developing an implementation
specification, it is developing an interface specification.  The gory
details of how some particular feature is provided in some particular system
are not directly relevant to this effort in most cases.  Unfortunately,
given how widely available UNIX source is, a lot of people haven't learned
the difference between an interface and an implementation.
-- 
	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com (or guy@sun.arpa)

paul@umix.UUCP ('da Kingfish) (10/16/86)

In article <935@kbsvax.steinmetz.UUCP> barnettb@steinmetz.UUCP (Barnett Bruce G) writes:
>In article <4254@brl-smoke.ARPA> giebelhaus@umn-cs.arpa writes:

[Giebelhaus has reasons why he likes Apollos, Barnett why he likes Suns.]

I am an Apollo user, having chosen Apollo over Sun and Dec.  I find the
Apollo kernel to be different than the traditional Unix kernel, and by
and large better.  And, executable images for Apollos are different
than a.outs.  And there is no /dev/kmem.  (Although I think that is
definitely a step in the right direction ... i.e., the 1980s.)

Those differences loom large in some people's minds.  It did take me 
a while to get up to speed on Apollos.  Without exchanging nitpicks
(Apollo has A, Sun doesn't, and vice versa) I can say this much:

There hasn't been a program we have tried to port from a Vax to an
Apollo that hasn't made it (we like to have our local mods on the
Apollos as well as Vax and Sun).  Some are pretty easy, like sendmail and
uucp.  Others take more time, like X (and I am pretty sure NFS runs on
Apollos).  Problems that crop up reflect non-portable code, and suspect
programming practice, much more than Apollo problems.  When we have
discovered a real problem, like a bug, or unacceptable performance,
Apollo has always fixed it.

The stuff I do is pretty much non-graphics oriented, so I can't
speak to that, one way or the other.

The idea that Apollo's Unix is an "emulation" or a "layered product"
(like Eunice, for example) is really getting to be tiresome, and is a
canard.  It is layered in the same way that there is a Unix that people
see layered on top of a Unix kernel.  As I mentioned above, they do
have a different kernel, with their own OS interface.  That can be
risky to do, when other vendors trade on the safety of adherence to the
beliefs of the past, and other signs of orthodoxy, like /dev/kmem.

Apollo does "hide" some internals, in that you can't grope through kmem.
Sometimes information hiding is OK, and in some contexts that is considered
to be desirable.

Sun has been very innovative, no doubt about it, and so has Apollo.
And, I guess there are some Apollo things that bug me, nobody's
perfect.  (And canfieldtool is my kind of application -:))

>	It has been a year since I have last used an Apollo ...
I guess so.
>	But I feel that the difference in the companies
>	is primarily in attitude. And I don't like Apollo's attitude.
s/Apollo/Sun/, but that just goes to show you, there's no accounting for
taste.


--paul

paul@umix.cc.umich.edu
...ihnp4!umich!paul

(oops, I almost forgot ... these are my own opinions, and it usually costs
a beer to get them!)

barnettb@vdsvax.uucp@ndmce.uucp (Barnett Bruce G) (10/17/86)

In article <4254@brl-smoke.ARPA> giebelhaus@umn-cs.arpa writes:
>My experience is that Apollo is much better for professional software
>development and Sun is better for hacking.  

Apollos do have some advantage in Programming in the Large.
Especially for Apollo-based end-user software environments.

One thing I don't like about Apollo is the intention to `lock you'
into an Apollo domain (pun intended :-).
Things like their pascal (which has been enhanced so much it looks like C),
backplane, operating system, windowing and networking ...

Apollo seems very interested in providing software that keeps you on Apollo
machines, while Sun is always pushing for, and supporting, new standards.

Apollo supports standards, when they have to. It is a very different
philosophy.

>Other things such as much of the network hanging
>when a server node goes down until you edit fstab and reboot really bothers
>me.  

Disconnecting Apollo's Ring network causes a few problems, too :-)

I have had a server supporting 17 diskless Sun 3/75s go down (power fail)
and auto-boot several times, without serious problems.
I did have to run fsck on the client root file system once or twice,
but most of the time the users just waited a few minutes, and voila!
they continued, with NO WORK LOST!

Why did you have to edit fstab, unless to bring up a replacement
file server? I suppose you would have to if you wanted to
have another server provide the /usr/bin /bin and /ucb/bin directories.
Or the swapping partitions (this is ND, not NFS by the way).

>You can get NFS for the Apollo

I have heard from some Apollo people that they will NEVER support NFS.
Also, I believe they have demonstrated some limited type of NFS remote mount,
but this is not the same thing.

> but I don't believe that Apollo wants
>to release it because of some reliability problems like the one just
>mentioned and the lack of file locking.  Apollo is putting their resources
>in RFS instead.

Because it isn't a product sponsored by their competitor.

Again, what reliability problem are you referring to?
A stateless implementation is a lot more reliable than a stateful
implementation.

Re: File locking. I would suppose that NFS file locking will be added 
eventually. But I have a dumb question. Unix didn't have file locking
for years, yet people got around it by either creating another `lock'
file and testing for the existence of it, or by using a specialized 
device driver to provide synchronization of different processes.
Why is file locking the BIG DEAL? It seems to me that all of the vendors
who have their own proprietary scheme knock NFS because of this, yet
Unix has been lacking in the `niceties' of file locking for years.
Is this just Sour grapes?

>I have a hard time getting software support from Sun.  I am not a great
>UNIX wizard, yet, and I surely don't understand the internals of UNIX.
>If I really got into the guts of the stuff, Sun might be easier since
>the lower layers of SunOS are closer to bsd than DOMAIN/IX.  Still, every
>module of the SunOS has been modified.  From SunWindows to Yellow Pages,
>the operating system is different than bsd.  
>
Not nearly as different as Domain/IX is from ANY Unix machine.
I keep hearing people say that Apollo has Unix.

They don't. It is an emulation.

You cannot buy a board that comes with a Unix device driver and install it
in an Apollo. The internals of Domain/AEGIS are different.
The data structures (as normally documented in /usr/include) are not the same.
This is useful, not only with device drivers, but with programs like
ps(1) that read the internals of the system memory (using /dev/kmem) to find out
what is going on inside. I have taken courses on the internals of Unix,
and programs that examine the file system, user areas, etc.
ARE useful. I would guess that the public domain program `top' won't
run on an Apollo very easily.

I was under the impression that Apollo is still hiding the internals from
everyone. Do they even have a "/dev/kmem" ? It also seemed to me that
programs making use of the object image (nm, size, strip, dbx)
are not the same. Sun provides documentation on adding device drivers
(or pseudo-device drivers) to the kernel. They even provide programs that
make it easier to debug (like adbgen). Apollo provides a package that
allows users to interface to their hardware, but this is a rigidly
controlled interface. It may be easier to use, but is slower than a
real device driver (the interrupt latency time is very high), and non-portable.

Also, the Apollos are very workstation-based. The debugger won't run
on a TTY. Other programs are also workstation-only based, when they shouldn't
be. How do you debug a program over a phone line?
In fact, I don't KNOW for sure what standard Unix programs are missing
on an Apollo, or can't be used over a phone line/TTY. Wouldn't
it be a bummer if man(1) doesn't work? It probably does. But I
wouldn't assume that Aegis has every utility that you are used to.
Or that it has the same output. 

Suns have a standard architecture. Any Sun can have a TTY or a graphics
device as a master console. Most graphics software is layered
over a TTY tool. Examples are MAILTOOL and DBXTOOL, along with the
chess and backgammon demos.

This strikes me as following the `Unix Philosophy'.
Tools that use tools, you know.

And the system administration on Aegis is not UNIX. 
As I recall, scripts that add a user to the password file
don't work the same way (or at all).
I admit that there are differences between SYS V, 4.2 bsd and 4.3 bsd.
But there are a LOT of books available for people who need help.
And you can go to seminars on system administration.
But Apollo's administration is nothing like the others.
In some cases it is probably easier. But different.

You can take courses on Unix. From several people besides AT&T.
And you can get books on the internals (like Bach's THE DESIGN OF THE
UNIX OPERATING SYSTEM).

I would much rather have a system that is well known and available
on several machines than one that is only on one vendor's product.
Unix is tough enough to learn when there is a standard.

>Prices of the two systems are about the same.  With Apollo, you don't
>need the server (I run every other node diskless with very good results).

Are you saying that Suns need a server, while Apollos don't?
I don't understand.

>When you take the cost of the server into account, it really makes a 
>difference.  

It makes a bigger difference when you have 12-15 diskless nodes per server.
Sun has invested in diskless node technology. It works.
I am not convinced that Apollo has done the same. Maybe they can demonstrate
a diskless node or two, but I have heard of diskless benchmarks that
show ~7 diskless Suns having a performance degradation equivalent
to 1 or 2 diskless Apollos. And it seems to me that Suns are usually
faster for the same price as an Apollo, so the price/performance makes
a BIG difference.  I would think that when you compare a 10-15 station 
cluster, Suns would give you much faster machines at a much cheaper price.

>Another thing that makes me worry about buying Sun is that they don't
>seem interested in "Open Systems" like they used to be.  The UNIX source
>was free and relatively easy to get from Apollo.  

 Maybe you get what you pay for? :-)

>I understand the SunOS
>source costs a bundle and even then they don't like to give it out 
>(haven't checked it out, though).  

The prices are similar to the sources for other versions of Unix.

>Much of the SunOS is getting proprietary
>so I don't know about it becoming a standard.  Also, they haven't let
>the IEEE committee for UNIX look at their stuff.  To me, Apollo looks
>much more open.
>

	This is a very strange comment. Don't you know about the Sys V/bsd
	merging? Also, see my previous comments.
	Maybe they are careful about the internals. But if I had a system
	that TRULY supported diskless clients, I would not want to give away
	the technology for free. And besides, there is a BIG difference in the
	interface to a system and the internal implementation.
	The interface is documented in the manuals you can purchase.
	In fact, more of the internals of SunOS are documented than those of
	Aegis/Domain.

>We find it a hard and somewhat religious choice between the two.  If you
>have any questions, I would be happy to help.  I wouldn't want anyone
>to go through the problems I did.

	Maybe I tend to get religious too. I have been burned badly by
	being locked on to a single vendor's platform. Apollo is just
	like all of the others. Sun is a breath of fresh air. They
	are balanced on a two-edged sword of standards. It is as easy to
	jump onto a Sun bandwagon as jump off. 
	Also, it is harder to be burned by a Sun (standard) platform 
	than an Apollo platform. I might hazard a guess that some of
	Apollo's OEM's are finding out how difficult it is to move to other
	workstations.

	Also, I have followed Sun and Apollo's direction for years,
	and it seems to me that Sun has a definite goal, and is going
	towards their goal with a very efficient team of people.
	Apollo seems to be lacking in direction. How many different
	architectures are they supporting? Which options are available with
	which machines? Which bus is used? Which Windowing standards?
	Which networking scheme?

	It has been a year since I have last used an Apollo, and they have been
	improving the product, so my memory may be faulty in a few spots.
	But I feel that the difference in the companies
	is primarily in attitude. And I don't like Apollo's attitude.
	
	Bruce Barnett 
	chinet!steinmetz!vdsvax!barnettb

{These are my own opinions, guesses, hunches, and mistakes. Please excuse}

mishkin@apollo.uucp (Nathaniel Mishkin) (10/20/86)

I'll spare the faint of heart by saying up-front that (a) this is yet another
in the string of Apollo vs. Sun messages, (b) I work for Apollo Computer,
(c) this is not a sales pitch, (d) this is not an official message from
Apollo Computer Inc.  If you just can't stand it any more, please feel free
to go to the next article.

In article <935@kbsvax.steinmetz.UUCP> barnettb@steinmetz.UUCP (Barnett Bruce G) writes:

    Also, I believe they have demonstrated some limited type of NFS remote
    mount, but this is not the same thing.

I can't comment on what Apollo will or won't do, but I'll just take this
opportunity to point out that what we HAVE done is build a framework
so that you can build client-side support for any remote file access
scheme of your own choosing or creation.  This framework is a product
and we even ship an example (demonstrative, not complete and efficient)
of how to use it to allow transparent remote file access to remote 4.2bsd
systems.

The framework is, I would claim, a better (dare I say "more open") thing
to have than simply NFS.  One could build NFS support using the framework.
One could build ISO/FTAM support with the framework.  One could build
something that FTP'd a whole file over to your machine on open and back
on close, if that's the best you could squeeze out of an uncooperative
remote system.  Better that than nothing.

As far as server-side support goes, writing servers (especially stateless
ones) that do the things that remote clients ask them to do just isn't
that hard.

    But I have a dumb question. Unix didn't have file locking for years,
    yet people got around it by either creating another `lock' file and
    testing for its existence, or by using a specialized device
    driver to provide synchronization of different processes.  Why is
    file locking the BIG DEAL?

Well, I suppose this could be religious territory.  I suspect that one's
attitude about the necessity of MANDATORY file locking changes when one
considers an environment of, say, 1000 workstations.  (Say, like us.  Say,
like I expect more and more sites to have eventually.)  I'm just not
willing to take my chances that either (a) I have no competitors for
access to the same file as I'm accessing, or (b) that every program
actually bothers to do the lock file hack or "lock call" (correctly) 
when appropriate.

    I keep hearing people say that Apollo has Unix.  They don't. It is
    an emulation.

I object to the term "emulation".  What is Unix?  It's commands (programs),
subroutine libraries, and system calls.  No, our kernel is not derived
from any standard Unix kernel.  But all our commands and much of our
library code is simply the straight Unix source code.  Is our "read"
an "emulation" because it calls some lower level interface other than
the lower level interface called by the Berkeley Unix kernel?

There is no question that if you spend your time making changes to the
Unix kernel, that you will be disappointed at the extent to which those
skills can not be transferred to the Apollo kernel.  However, I would
guess that the fraction of Unix users who change the kernel is much,
much smaller than it was 5 years ago.  Yes, this is because most Unix
users these days are doing application work, NOT kernel work.  The argument
is valid, nonetheless.

    I was under the impression that Apollo is still hiding the internals
    from everyone. Do they even have a "/dev/kmem" ?

I guess all I can say is that if you think that programs that read
"/dev/kmem" are the wave of the future, I hope you're wrong.  I believe
in procedural abstraction.

    Also, the Apollos are very workstation-based.

I'll take this criticism as a compliment.

    The debugger won't run on a TTY. Other programs are also
    workstation-only, when they shouldn't be. How do you debug
    a program over a phone line?

If this is true, it's a bug.  We certainly intend that all programs which
are critical and/or likely to be used over TTY lines should work passably
well in that environment.  Certainly, I can't imagine any reason (other
than bugs) that standard Unix programs wouldn't work over TTY lines.

                    -- Nat Mishkin
                       apollo!mishkin

Kilmer.Wbst@XEROX.COM (10/21/86)

I haven't yet seen a discussion of the Sun (110 or 160) vs Masscomp
5300P or 5400P systems. We are looking at both. A brief summary of our
impressions for advantages of Masscomp is that it is a faster machine
rated from 1.7 to 2.3 MIPS in these two configurations, has more
available slots in the lower cost models (5300P has 2, 110 has only 1),
has 2 more color planes, two frame buffers in the display system, is
more upgradable and has a good set of hardware support for graphics (fp
and fp accelerator).  However, we do have a concern over a perceived
downtime problem with the Masscomps. I would be very interested in other
impressions of this system and its comparison to Sun.


Mary Russell
Xerox Corporation

bobr@zeus.UUCP (10/21/86)

In article <30ceeabf.809c@apollo.uucp> mishkin@apollo.UUCP (Nathaniel Mishkin) writes:
>
>I object to the term "emulation".  What is Unix?  It's commands (programs),
>subroutine libraries, and system calls.  

It is also a standard file system, which has a structure and file formats
that are known in advance.  It does not have extra fields in /etc/passwd and
/etc/group, or a file structure which updates the modification date of a
file after a simple chmod, or a compatibility package which limits the
number of ptys to 16, or a C compiler which generates error messages
incompatible with standard error postprocessing techniques.  These may all
be bugs (and there are others, like malloc(), which I haven't mentioned),
but they are certainly incompatibilities between the expected UNIX
environment (BSD in my case) and the reality of the system.

>    I was under the impression that Apollo is still hiding the internals
>    from everyone. Do they even have a "/dev/kmem" ?
>
>I guess all I can say is that if you think that programs that read
>"/dev/kmem" are the wave of the future, I hope you're wrong.  I believe
>in procedural abstraction.

To some extent the issue is whether /dev/kmem exists, when it restricts the
portability of standard UNIX tools.  Obviously, the folks at Apollo have
some way to get around it, since they have an implementation of ps which
appears standard.  However, I have not yet seen any documentation about how
to get at such information as required by ps.  This means I have no
mechanism for building tools that look like "top", or "sps", or other publicly
distributed tools that I might want to have on an Apollo.

Robert Reed, Tektronix CAE Systems Division, bobr@zeus.TEK

guy@sun.UUCP (10/23/86)

> (UNIX) does not have ... a C compiler which generates error messages
> incompatible with standard error postprocessing techniques.  These may all
> be bugs...

What does "standard error postprocessing techniques" mean?  You mean
Apollo's C compiler doesn't produce PCC-style error messages?  Horrors!
Burn their compiler people at the stake!  They mustn't be allowed to get
away with such a grievous sin against the nature of UNIX!

Frankly, I wish more compilers would produce better error messages than
PCC's.  If you really want something that "error" (or whatever) can handle,
look into writing a filter that can convert their error messages into the
form you need.

> but they are certainly incompatibilities between the expected UNIX
> environment (BSD in my case) and the reality of the system.

This gives *carte blanche* to users to reject any system as "not UNIX" if
any data structure, whether intended to be exported to user code or not,
isn't identical with the system that implements your "expected UNIX
environment"; in effect, you get dangerously close to the notion that UNIX
is an implementation, rather than a command-language and C-language
interface to various services.

SunOS doesn't permit you to dereference NULL pointers, while VAX UNIX (in
4.2BSD and System V, at least) does.  Is this another such
"incompatibility"?

Some of the other items you mention could be considered bugs, assuming some
UNIX specification requires that, for instance, changing the mode of a file
will not affect its modification time.  Others, like the format of compiler
error messages, are not specified anywhere.  It may be inconvenient to you
that they aren't the same as some other implementation, but if the benefits
to Apollo's user community as a whole outweigh the inconvenience to some
die-hard UNIX hackers, so be it.

> To some extent the issue is whether /dev/kmem exists, when it restricts the
> portability of standard UNIX tools.

I presume you mean the lack of "/dev/kmem" restricts the portability of
standard UNIX tools.  Well, maybe yes, maybe no.  A lot of "standard" UNIX
tools depend on the contents of "/dev/kmem", not just its existence; there
is *no* guarantee that file table entries point to inodes, and inodes have
thus and such fields, and so forth and so on.  Will you claim that since
various machines don't have the same instruction set as the VAX, it
restricts the portability of "standard" UNIX compilers?

> Obviously, the folks at Apollo have some way to get around it, since
> they have an implementation of ps which appears standard.

Come on, it's not that hard to give something the same *user interface* as
"ps" just because you don't have "/dev/kmem".  "ps" returns information
about the processes running on the system.  It's none of the user's business
how it accomplishes this.

> However, I have not yet seen any documentation about how to get at such
> information as required by ps.  This means I have no mechanism for building
> tools that look like "top", or "sps", or other publicly distributed tools
> that I might want to have on an Apollo.

It might be nice if there were some standard way of getting at information
about the processes running on the system.  Something like the "/proc" file
system implemented on Version 8 might be a reasonable way to handle this.
Perhaps if you open "/proc/<pid>", and do some special "ioctl", it could
return some standard structure containing the information about that process
that "ps" would want to know.  (Just dumping the "proc" structure won't cut
it - there's no guarantee that a particular implementation of the UNIX
interface *has* a "proc structure" that contains all the information
needed.)
-- 
	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com (or guy@sun.arpa)

guy@sun.UUCP (10/23/86)

> The framework is, I would claim, a better (dare I say "more open") thing
> to have than simply NFS.  One could build NFS support using the framework.
> One could build ISO/FTAM support with the framework.  One could build
> something that FTP'd a whole file over to your machine on open and back
> on close, if that's the best you could squeeze out of an uncooperative
> remote system.  Better that than nothing.
> 
> As far as server-side support goes, writing servers (especially stateless
> ones) that do the things that remote clients ask them to do just isn't
> that hard.

I think you'd better say "half-open".  It certainly makes life nice for
clients running on Apollos.  However, even if you publish a protocol
specification, it doesn't do anything more for *non*-Apollo clients than NFS
does; they either have to 1) rewrite their applications to use special calls
to access remote files, 2) build wrappers around their OS's file I/O calls
that use this protocol when accessing remote files, and relink all programs
that can be expected to want to access remote files with a library
containing these wrappers, or 3) stuff the remote file access code into
their kernel or some other dynamically-bound library so that existing
programs automatically get the new version of the code.

In short, what you have is a nice feature *for Apollo's OS*, not something
that magically opens up everybody's system.

I freely admit NFS doesn't do that either; you have to do some amount of
work to fit it into an arbitrary OS.  However, if you fit it into a UNIX
system using the vnode interface that comes with NFS distributions, you have
a framework that you can use to build support for other kinds of file
systems - whether remote or not - into your system.  (For example, we have
an MS-DOS file system that plugs into the interface, which you can use to
read and write MS-DOS floppies on UNIX systems.)  Yes, you have to stuff
some amount of kernel code into your system to do that, but that's due to
limitations of code based on the standard Bell Labs implementation of UNIX;
remember, the basic structure of UNIX was developed in the late '60s and
early-to-mid '70s on a PDP-11, so there was a limit on how much effort its
designers put into worrying about things like object-oriented file systems.

Comparing "extensible streams" (Apollo's object-oriented file system) with
NFS is comparing apples and oranges (or maybe knives and schoolbuses?);
better you should compare "extensible streams" with the vnode interface or
the File System Switch, or your remote file implementation with NFS, RFS,
etc..
-- 
	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com (or guy@sun.arpa)

rick@seismo.CSS.GOV (Rick Adams) (10/23/86)

A major failing as far as I am concerned is that since the Apollo is
a layered/emulation/whatever instead of a "true" Unix, the
system administration commands are that of the Apollo OS, not Unix.
Eunice has this problem as well.

I don't have time to teach the operators different procedures for
different machines. "dump" should do the same basic thing on all machines.

So, you DO have a visible difference. If the difference is invisible,
(i.e. you have a program called dump that interfaces to whatever
Aegis uses) it doesn't really matter what's underneath. Just because the
library calls are basically there doesn't make it a complete emulation.

(Yes, I think system 5 backup procedures are brain dead. Guess what
else we're not considering buying because the system administration is
"different". Try doing 10 gigabytes of disk with cpio and volcopy. Then
try it with "dump".)

---rick

guy@sun.uucp (Guy Harris) (10/24/86)

> (Yes, I think system 5 backup procedures are brain dead. Guess what
> else we're not considering buying because the system administration is
> "different". Try doing 10 gigabytes of disk with cpio and volcopy. Then
> try it with "dump".)

What's especially obnoxious about this is that there *does* exist a dump
program that works much like the 4.2 "dump" and that can be made to work on
S5 - namely, the 4.1 "dump".  The main thing you'd have to do is teach it to
support both 512-byte and 1024-byte file systems.  To get something like the
4.2 "restore", you could start with the 4.1 "restor" (it's better than the
S3 one, because it maintains the "s_tfree" and "s_tinode" fields, which the
S3 one does not), possibly fold some S3isms into it (I did this a long time
ago, and I don't remember what you have to do any more), and then fold all
the nice "restore by name", "restore a directory subtree", etc. features
from the 4.2 "restore" into it.  Anybody at AT&T-IS willing to torque off
their management by doing this?
-- 
	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com (or guy@sun.arpa)

bobr@zeus.UUCP (Robert Reed) (10/24/86)

In article <8422@sun.uucp> guy@sun.uucp (Guy Harris) writes:
>> (UNIX) does not have ... a C compiler which generates error messages
>> incompatible with standard error postprocessing techniques.  These may all
>> be bugs...
>
>What does "standard error postprocessing techniques" mean?

I expressed it that way because I didn't want to get into another vi/emacs
argument.  Vi users use "error" and emacs users (at least gosling/unipress)
use next-error.  In either case, it is an ad hoc standard (it has been
propagated to many machines) and it's one more hurdle to hassle with (via
a filter or whatever) when migrating to the machine.

>Frankly, I wish more compilers would produce better error messages than
>PCC's.

This is principally an issue of content, not syntax (though you may argue
that syntax can produce better error messages).  I would like to see a
standard error syntax for all UNIX tools, but that's beside the point.

>> but they are certainly incompatibilities between the expected UNIX
>> environment (BSD in my case) and the reality of the system.
>
>This gives *carte blanche* to users to reject any system as "not UNIX" if
>any data structure, whether intended to be exported to user code or not,
>isn't identical with the system that implements your "expected UNIX
>environment";

Absolute rejection of an environment is one thing--evaluation of the level
of "compliance" (given that there is no objective indicator of that yet) is
another.  My point was NOT that DOMAIN/IX is not UNIX, just that though they
have made great strides towards transparency, it is still an emulation,
subject to the differences in dark corners that any emulation risks.

>> To some extent the issue is whether /dev/kmem exists, when it restricts the
>> portability of standard UNIX tools.
>
>I presume you mean the lack of "/dev/kmem" restricts the portability of
>standard UNIX tools.  ...
>It might be nice if there were some standard way of getting at information
>about the processes running on the system.

And this is exactly the point.  I may not have a /dev/kmem whose data
structures are *exactly* the same or even nearly the same, but I can deal
with that, given sufficient documentation to make the changes necessary to
port the tools to this new environment.  UNIX environments are essentially
as closed as Apollo in this regard--/dev/kmem is no better documented than
the means by which Apollo's ps command derives its information.  If
Apollo/DOMAIN-IX came with a caveat that /dev/kmem does not exist, but the
information contained within is available through some specified mechanism,
I would be content.  But I have seen no such documentation.  We do have
sources for UNIX, though, so I can get the information I need for that
environment.  
-- 
Robert Reed, Tektronix CAE Systems Division, bobr@zeus.TEK

guy@sun.UUCP (10/27/86)

> In either case, it is an ad hoc standard (it has been propagated to many
> machines) and it's one more hurdle to hassle with (via a filter or whatever)
> when migrating to the machine.

Life's a bitch, then you die.  There are plenty of problems when migrating
to a new machine, like learning that data formats, alignment requirements,
ability to get away with dereferencing null pointers, ability to sneak in
the back door of standard I/O in particular ways, etc. aren't the same on
all machines.

> This is principally an issue of content, not syntax (though you may argue
> that syntax can produce better error messages).

Yes, I would.  It might be nice for compiler error messages to point out the
precise place where the error occurred; Sun's compilers print out a token
near where the error occurred, but there may be more than one occurrence of
that token on the line in question.

> Absolute rejection of an environment is one thing--evaluation of the level
> of "compliance" (given that there is no objective indicator of that yet) is
> another.  My point was NOT that DOMAIN/IX is not UNIX, just that though they
> have made great strides towards transparency, it is still an emulation,
> subject to the differences in dark corners that any emulation risks.

Or the differences in dark corners that many other changes in the
implementation risk.  You pays your money and you takes your choice.  Some
aspects of the implementation are considered to be just that -
characteristics of the implementation, not features of the interface.
It might be nice if the information in *some* system data structures were
made available to applications, preferably in some fashion pretty much
independent of the details of those data structures (i.e., don't give people
access to the shared text table, because some versions of UNIX don't have a
shared text table that resembles V7's).  Some of these "dark corners" are
never going to be lit up; the vendor(s) reserve the right to make later
releases behave differently in those cases, and are perfectly justified in
doing so.

> And this is exactly the point.  I may not have a /dev/kmem whose data
> structures are *exactly* the same or even nearly the same, but I can deal
> with that, given sufficient documentation to make the changes necessary to
> port the tools to this new environment.  UNIX environments are essentially
> as closed as Apollo in this regard--/dev/kmem is no better documented than
> the means by which Apollo's ps command derives its information.  If
> Apollo/DOMAIN-IX came with a caveat that /dev/kmem does not exist, but the
> information contained within is available through some specified mechanism,
> I would be content.  But I have seen no such documentation.  We do have
> sources for UNIX, though, so I can get the information I need for that
> environment.  

You have sources for some versions of UNIX.  For other versions, you may
only have some include files.  You may be able to guess how the data
structures work, but you may end up guessing incorrectly.
-- 
	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com (or guy@sun.arpa)

mangler@cit-vax.Caltech.Edu (System Mangler) (10/27/86)

In article <8501@sun.uucp>, guy@sun.uucp (Guy Harris) writes:
> The main thing you'd have to do is teach it [4.1bsd dump] to
> support both 512-byte and 1024-byte file systems.

First, remove the fstab stuff.	Sys V has no /etc/fstab.

Next, realize that daddr's are in units of the filesystem blocksize
(e.g. 1024) but bread() and the tape routines use different units.
Conversion factors creep in everywhere.

You have to decide whether TP_BSIZE is 512 or 1024.  Making it BSIZE
(512) makes dumpdir happy, but then it takes two spclrec's to hold
each indirect block from a 1K filesystem (harmless but ugly).

The dumprestor.h supplied with Sys V defines NTREC as 20, which is
too large for the 3B20's UN52 tape controller (limited to blocksizes
of 6 KB or less).

If TP_BSIZE is smaller than the blocksize of the filesystem being
restor'd, bmap() needs to return the same pointer for both 512-byte
halves instead of zapping the pointer and allocating a 1K block.  Be
sure to zap the first block pointer, which got copied from the tape
inode just in case the file was character or block special (better
yet, only copy that block pointer if the file is actually special).

Lastly, integrate the 4.3bsd dump speedups, which already understand
about variable blocksizes.  Main difficulty is recovery from the 4.1bsd
interrupt trap routine (signals vs. pipes *sigh*); once you get that
right, the interprocess synchronization is easy to convert to signals.

Despite only being able to read 1K at a time, full dumps are nearly
the same speed as volcopy.  The UN52 tape controller simply won't
go faster than 80 KB/s.  Of course incremental dumps beat the tar
out of cpio.

> Anybody at AT&T-IS willing to torque off their management by doing this?

(I got pretty torqued off at having to do this myself, and so did my
management).  If adopting a 4.1bsd program is going to torque off the
AT&T-IS management, start from the Sys III version!  The differences
are largely cosmetic.

Don Speck   speck@vlsi.caltech.edu  {seismo,rutgers}!cit-vax!speck

giebelhaus@umn-cs.arpa (10/28/86)

I am glad to see a discussion of this.  I think that anyone who is 
buying workstations today has some difficult issues to work through.
I still work through them often.  I hope this discussion helps other
people as much as it helps me.

I'll mostly reply to Bruce Barnett here, but I will add some comments
of other people that have posted about this subject also.  That is,
all quoted comments are Bruce Barnett's except those marked otherwise.

Barnett Bruce G <barnettb%vdsvax.uucp@BRL.ARPA> writes:

>Apollo's do have some advantage in Programming in the Large.
>Especially for Apollo-based end-user software environments.

Yep, because Apollo has a lot of nice extra tools that you don't
find in "standard" unix.

>One thing I don't like about Apollo is the intention to `lock you'
>into an Apollo domain (pun intended :-).
>Things like their pascal (which has been enhanced so much it looks like C),
>backplane, operating system, windowing and networking ...

Please tell me how Apollo locks you in more than Sun.  I hope it is not
because there are extra tools on the Apollo or that you want to mess with
/dev/kmem.

>Apollo supports standards, when they have to. It is a very different
>philosophy.

Apollo had a good thing and didn't put it up as a standard.  Maybe it
was a mistake and maybe it wasn't.  Let me quote another part of the 
article about not putting stuff up as public domain, though.
>          Maybe they are careful about the internals. But if I had a system
>          that TRULY supported diskless clients, I would not want to give away
>          the technology for free. 
I'm sure this is what Apollo must have been thinking.  I wonder how things
would have turned out if Apollo opened their Aegis stuff up as a standard
long ago.

>Disconnecting Apollo's Ring network causes a few problems, too :-)

I'm not talking about breaking the network, I am talking about when
your server goes down.  William Stallings has a book called "Local Networks"
that gives a good comparison of the ring and bus.  I think you will find
the way Apollo recommends you implement the ring more reliable, able to
take a heavier load, and certainly easier to debug than an ethernet bus.

>I have had a server supporting 17 diskless Sun 3/75's go down (power fail)
>and auto-boot several times, without serious problems.
>I did have to run fsck on the client root file system once or twice,
>but most of the time the users just waited a few minutes, and voila!
>they continued, with NO WORK LOST!

I'm afraid you have completely missed the point.  Apollo's diagnostics often
keep the machines from going down to begin with (of course not in the case
of a power outage).  Most of the time, I am warned of a failing component
before it has a chance to do any harm.

>Why did you have to edit fstab, unless to bring up a replacement
>file server? I suppose you would have to if you wanted to
>have another server provide the /usr/bin /bin and /ucb/bin directories.
>Or the swapping partitions (this is ND, not NFS by the way).

Yes, I want to boot off an alternate server.  I don't want to have to 
have 10 nodes down every time my server dies.  I know machines don't 
die that often, but they do die.

>I have heard from some Apollo people that they will NEVER support NFS.
>Also, I believe they have demonstrated some limited type of NFS remote mount,
>but this is not the same thing.
[...]
>Because it isn't a product sponsored by their competitor.
[...]
>Again, what reliability problem are you referring to?
>A stateless implementation is a lot more reliable than a stateful
>implementation.
And also from Guy Harris
>(I was a bit confused by his comment here; if Apollo hasn't released this
>alleged NFS implementation, how can you get it?)

I don't know if there is anything official yet, but I'll bet anyone 
who wants to take me up on it a beer that Apollo will offer NFS as a 
product before the end of the year.  As I hear the rumor, there is a 
royalty problem with Sun in providing NFS.  I seem to remember another
vendor (possibly Pyramid) having much the same problem.  

As for the already available stuff, perhaps this is the one that Bruce
said he saw but was unimpressed with.  I heard of it but did not pursue
it as I don't want the administrative headaches of no file locking and
keeping up NFS (fstab files and such).

I guess NFS is one of those things that I may not think much of, but am
going to have to get used to.  I'm glad to see that Sun is putting
some file locking in it.  This makes it so it isn't stateless anymore, 
doesn't it?  Perhaps NFS will look like what RFS will look like (some
people have accused me of being an optimist at times).

Nathaniel Mishkin explained some about file locking so I won't go
into it again.  Sun must think it is important or they wouldn't have
added it to NFS, though.

>Not nearly as different as Domain/IX is from ANY Unix machine.
>I keep hearing people say that Apollo has Unix.
>
>They don't. It is an emulation.

As was said by paul@umix.uucp and Nathaniel Mishkin, Apollo's UNIX is
not an emulation.  Apollo has a non standard kernel and so does Sun.
I hold that if DOMAIN/IX is an emulation, SunOS is an emulation.

>You cannot buy a board that comes with a Unix device driver and install it
>in an Apollo. The internals of Domain/AEGIS are different.

As Guy Harris put it, you can't really do this on Sun either.  I
think that Sun is closer to running on a VAX in this way, but this 
does not necessarily make it better or easier.  I personally don't 
believe in having to recompile the kernel in order to add a device.

>Apollo provides a package that
>allows users to interface to their hardware, but this is a rigidly
>controlled interface. It may be easier to use, but is slower than a
>real device driver (the interrupt latency time is very high), and non-portable.

Yes, I like rigidly controlled interfaces.  Then again, I like strongly
typed languages like Ada.  If UNIX were completely rewritten in Ada using
"modern" software engineering techniques, I would be thrilled.

>Sun's have a standard architecture. Any Sun can have a TTY or a graphics
>device as a master console. Most graphics software is layered
>over a TTY tool. Examples are MAILTOOL and DBXTOOL, along with the
>chess and backgammon demo's.

I'm afraid I don't understand the above.  What do you mean by "standard
architecture", and how is it different from what Apollo has?  The way I
see it, Apollo meets this better.  On the Apollos, you don't have to 
worry about which CPU you are running on; there are no separate directories
and programs for 68010 and 68020 CPUs.  Any Apollo object will run on
any Apollo (unless it was specifically optimized for a particular CPU).

>And the system administration on Aegix is not UNIX.
>As I recall, scripts that add a user to the password file
>don't work the same way (or at all).

The password files are there.  They are standard BSD.  There is an
easier way to use them than editing them by hand, but they are certainly there.
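
For reference, a standard BSD /etc/passwd entry is seven colon-separated
fields: login, encrypted password, uid, gid, GECOS, home directory, and
shell.  A made-up example:

	# login:passwd:uid:gid:gecos:home:shell
	tim:Wx3k9sE2abcde:214:20:Tim G.,x1234:/users/tim:/bin/csh

That is the format you'll find on the Apollos; there are no extra fields.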

>I admit that there are differences between SYS V, 4.2 bsd and 4.3 bsd.
>But there are a LOT of books available for people who need help.
>And you can go to seminars on system administration.
>But Apollos's administration is nothing like the others.
>In some cases is it probably easier. But different.

And SunOS system administration is different, also.  The non-standard
parts of the Sun system administration are a lot more difficult for
me than the non-standard parts of the Apollo system administration.

>You can take courses on Unix. From several people besides AT&T.
>And you can get books on the internals (like Bach's DESIGN OF THE UNIX
>OPERATING SYSTEM).

Funny, I never had to take a class on how to administer an Apollo.  I 
believe I am considered pretty good at administering Apollos, too.
I did buy Bach's book.  When I get a chance, I'll read it.  I have
also taken some UNIX classes.

>Are you saying that Sun's need a server, while Apollo's don't?
>I don't understand.
[...]
>It makes a bigger difference when you have 12-15 diskless nodes per server.
>Sun has invested in diskless node technology. It works.
>I am not convinced that Apollo has done the same. Maybe they can demonstrate
>a diskless node or two, but I have heard of diskless benchmarks that
>show ~7 diskless Sun's having a performance degradation equivalent
>to 1 or 2 diskless Apollo's. And it seems to me that Sun's are usually
>faster for the same price as an Apollo, so the price/performace makes
>a BIG difference.  I would think that when you compare a 10-15 station
>cluster, Sun's would give you much faster machines at a much cheaper price.

I believe the Apollo topology can be more reliable and faster for about
the same price.  Guy Harris is right about what I was recommending for a 
topology.  Every other node is a file server.  That is, half the nodes
are disked and half are diskless.  These are not SCSI bus disks, either.
Especially with the new disks Apollo has just started using, the disked
nodes have much faster disk access than Suns or Apollos accessing files 
over DOMAIN or Ethernet.  This is especially important in AI.  This is 
about the same cost as buying an equal number of Suns with servers.

I would like to know where you got your benchmarks for diskless performance.

> Maybe you get what you pay for? :-)

If I thought I had to pay more money for quality, perhaps I would be 
looking at IBM RTs with some mainframes.  :-)

>The prices are similar to the sources for other versions of Unix.

Not Apollo or bsd source.  They are free and $400 respectively.

>          This is a very strange comment. Don't you know about the Sys V/bsd
>          merging? Also, see my previous comments.

I must not have been clear here.  I'll include Guy Harris's comments here
too, and answer them both at the same time.  Of course, Guy was not able
to see all of my original text first; maybe it wouldn't have made a
difference anyway.
>It is not only strange, but uninformed.  There are several people from Sun's
>system software group working with the IEEE committee.  We have made joint
>proposals with HP on terminal driver and reliable signals issues.  We might
>not let them look at our source code, but some of them may work for
>vendors that don't have UNIX source licenses, and AT&T wouldn't *let* us let
>them look at it.  Does the person who makes this claim have examples of
>cases where we *didn't* let the 1003.1 committee look at our stuff?

I am not convinced that 4.2 and Sys V are merging.  I see no
commitment from AT&T to merge 4.2 into V resulting in something
looking much like what Sun has.  I hope that V picks up a lot
of what is in 4.2, but I don't expect the new V to look like SunOS.
To begin with, Sun has said that they will always be 4.2 based.  While it
may be possible to have a V-looking interface with 4.2, I question Sun's
commitment to standards in saying that they will stay with 4.2 no
matter what (by 4.2, I'm sure they meant whatever the latest bsd release is).

Last I checked, Sun was not licensing or distributing their SunOS system
to anyone.  The merge that Sun has made looks to me like it will be
incompatible with all other versions of UNIX.  I could go on for a while
with this, but I'll just sum it up with this:
SunOS is a version of UNIX that is unique to Sun.
SunOS is privately developed.
SunOS does not have its technology sublicensed.
SunOS is not publicly documented.
SunOS has no firm commitment from AT&T.
SunOS is not reviewed by the IEEE P1003.
If anything, SunOS looks more questionable to me than DOMAIN/IX.

I am glad to see that Sun has some people on the IEEE committee.
My verbiage could have been better when I said that Sun "haven't let
the IEEE committee for UNIX look at their stuff."  I didn't mean to say
that Sun had denied the IEEE.  I don't know for a fact whether Sun has
denied the IEEE.  Maybe it is the case that the IEEE hasn't asked to
see it.

By proprietary I mean that Sun is not making parts of its operating
system available.

Along the same line, Guy Harris writes:
>AMEN.  The IEEE committee is *not* developing an implementation
>specification, it is developing an interface specification. 

I hope I did not imply otherwise.  I don't think an implementation
specification standard could work. 

>          But I feel that the difference in the companies
>          is primarily in attitude. And I don't like Apollo's attitude.

I feel they have pretty different attitudes, also.  Sun seems to tell me
to do it myself.  I could get almost the same level of software support
if I bought bsd from Berkeley.  That is, close to none.  Hardware support
is the same.  I had their 24-hour mail-in service and it took over a week.
I consider service 7 times worse than promised pretty bad.  After a few
bad experiences like that, I stopped using Sun's service.

Apollo is the other way around.  Out of Vaxes, Symbolics, Honeywell, 
Bridge, and a few others, I find one gets the best software and hardware 
service on the Apollos.  

Sales support is the same story.  Sun gives nice little shows and does a
wonderful job with their image, but just try to get them to follow a 
delivery schedule.  Apollo has only missed a delivery schedule for me
once, and that was on a special order the salesman rushed for me.  Even
then, it was only a few days late instead of months late, as I have seen
from Sun.  Sun really got under my skin when our last set of machines
was late.  The delivery date passed without word from the salesman.
When we called the salesman, he either was not willing to or could not
give us a new date.

For a university full of cheap labor, one may not need quite so much
support, but I don't have time for this kind of stuff.  I get people 
quoting dollar losses to me when these problems happen.  And then they
wonder why I would rather buy Apollos than Suns.

Robert Reed writes:
>It is also a standard file system, which has a structure and file formats
>that are known in advance.  It does not have extra fields in /etc/passwd and
>/etc/group, [...] or a C compiler which generates error messages
>incompatible with standard error postprocessing techniques.  

What is this stuff about extra fields in /etc/passwd and such?  It is not on
my machine.  The C compiler is different.  I find it much better than the
cc on the Suns except for the one thing that Robert pointed out.  I'm
not sure about the rest of the things he has mentioned.  I have not 
noticed anyone having a problem with malloc on any of our Apollos yet.

Robert Reed also writes:
>To some extent the issue is whether /dev/kmem exists, when it restricts the
>portability of standard UNIX tools.  Obviously, the folks at Apollo have
>some way to get around it, since they have an implementation of ps which
>appear standard.  However, I have not yet seen any documentation about how
>to get at such information as required by ps.  This means I have no
>mechanism for building tools that look like "top", or "sps", or other publicly
>distributed tools that I might want to have on an Apollo.

I don't think it is very structured to have to go into /dev/kmem.  I have
not looked for the routines Apollo uses for getting the process names and
such, but if that information is not available, it should be.  Instead of
having programs read kernel memory directly, I believe an operating system
should provide functions and procedures.

Giebelhaus@hi-multics.arpa
ihnp4!umn-cs!hi-csc!giebelhaus

barnettb@vdsvax.uucp (Barnett Bruce G) (10/29/86)

In article <173@umix.UUCP> paul@umix.UUCP ('da Kingfish) writes:

>The idea that Apollo's Unix is an "emulation" or a "layered product"
>(like Eunice, for example) is really getting to be tiresome, and is a
>canard.  

I'm sorry you feel that I am spreading false and malicious reports.
But I have tried to be as accurate as I can be, especially since I haven't
had an Apollo machine in front of me since the beta release of Domain IX.
Because of this, if I have erred, I have tried to err in favor of Apollo.
(But I can always count on the Net to correct my errors :-).

But I cannot think of a better word than emulation to describe a software 
product that:

	1. Is optional. (or was at the time)
	2. Is layered on top of the native operating system (AEGIS).
	3. Is missing functionality that every version of Unix has.
	   (Functionality in an area I consider essential)
	4. That does not behave EXACTLY the same way as Unix does.
	   (I realize that unixes differ, but see below).
	5. That handles system administration information in a form unlike
	   any other version of Unix.
	6. That is derived from a proprietary operating system instead
	   of a standard AT&T release.

I know it's tiresome, but I am tired of people who tell me Aegis/AUX IS
REAL Unix. How can it be? 

Now emulations have good and bad points. Some functions are faster,
some are slower. Some system administration tasks become easier, some harder.
Apollo's is most likely the best emulation around.
But then, they have the ability to change the underlying operating system, 
when necessary.

My main point is that I do NOT know where the areas of incompatibility
exist.  I know they are there.  If I believed people who told me that Aegis 
WAS Unix, I could get myself into trouble.

One of the most impressive features of Unix is the ability to monitor,
tune, and extend the operating system, EVEN IF YOU DON'T HAVE THE SOURCE
CODE!

For instance, if I wanted to write a program that would tell me what files
were currently being accessed, what disk they were on, whether they were
locked, being waited for, or modified, whether they were shared by more
than one process, their inode numbers, sizes, etc., then it would be a
trivial task.  I wrote such a program when I went to AT&T's course on
Unix Internals.  (Excellent course!  I KNEW all those strange files in
/usr/include/sys were good for something! :-)

I would open the file `/vmunix' (or `unix' or ...), use nlist(3) to locate
the system data structures, then open and seek into `/dev/kmem', and,
using the declarations in /usr/include/sys/..., read out the information
I need.
But AUX doesn't support nlist(3) or `/dev/kmem', so this can't be done.
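
The recipe above, sketched in 4.2BSD-flavored C (an outline only, not a
finished program; kernel symbol names like `_file' and the exact structure
layouts vary between systems, which is itself part of the point):

	/* Sketch of the classic kernel-groping technique; not portable. */
	#include <stdio.h>
	#include <nlist.h>
	#include <sys/param.h>
	#include <sys/file.h>                 /* struct file */

	struct nlist nl[] = { { "_file" }, { "" } };  /* open-file table */

	main()
	{
	    int kmem;
	    struct file fp;

	    nlist("/vmunix", nl);                 /* 1. find the symbol   */
	    kmem = open("/dev/kmem", 0);          /* 2. open kernel memory */
	    lseek(kmem, (long)nl[0].n_value, 0);  /* 3. seek to the table */
	    read(kmem, (char *)&fp, sizeof fp);   /* 4. read an entry     */
	    printf("flags 0%o, count %d\n", fp.f_flag, fp.f_count);
	    /* ...walk the rest of the table, decode flags, etc... */
	}

On a machine with no /dev/kmem to open and no nlist(3) to call, none of
this works, which is exactly the complaint.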

-->		AND I WOULDN'T KNOW THIS UNTIL I TRIED. <--

The point is, if someone is using a particular technique on a UNIX system,
then I KNOW it can be done on a Sun or Vax. Also, if someone develops
some clever technique (like device drivers that can be re-loaded dynamically,
i.e. without rebuilding the operating system), it will most likely be developed
on a Sun or Vax. 

I LIKE this feeling of confidence. I LIKE being able to use programs like
William LeFebvre's top(1). I know UNIX and know that I can write some
very powerful programs if the need comes up.

I don't feel comfortable that a unix-look-alike will behave the same
way as a UNIX operating system. I would never know when a program
off of USENET would work, and when it wouldn't. I like being able to use
the latest techniques discussed at UNIX conventions.

Please don't tell me DOMAIN/AEGIS/AUX is REAL unix. I'm tired too.

>As I mentioned above, they do
>have a different kernel, with their own OS interface.  That can be
>risky to do, when other vendors trade on the safety of adherence to the
>beliefs of the past, and other signs of orthodoxy, like /dev/kmem.
>
>Apollo does "hide" some internals, in that you can't grope through kmem.
>Sometimes information hiding is OK, and in some contexts that is considered
>to be desirable.
[...]
>And there is no /dev/kmem.  (Although I think that is
>definitely a step in the right direction ... i.e., the 1980s.)

Like 1984? (Sorry, couldn't resist. :-)
But seriously, how can it be desirable to not have /dev/kmem?
You could always prevent access from everyone except root.
You could even delete the file, I guess, and change/delete all programs that
access it. Or you could fix the location of every data structure, and never
change the size/location of the structures in the operating system.
But *IF* they change, you have no method of verifying the *real* locations.

Now, I'm not disagreeing that there are times when hiding information
is desirable. But I can't see your point that eliminating it 
for EVERYBODY is a step in the right direction. 

It seems to me to be a step backwards.

>paul@umix.cc.umich.edu
-
Bruce Barnett
-someone who never believes managers who say: `We will never port this product' 

mark@umcp-cs.UUCP (Mark Weiser) (10/29/86)

In article <4956@brl-smoke.ARPA> hi-csc!giebelhaus@umn-cs.arpa (Timothy R. Giebelhaus) writes:
>
>>Not nearly as different as Domain/IX is from ANY Unix machine.
>>I keep hearing people say that Apollo has Unix.
>>
>>They don't. It is an emulation.
>
>As was said by paul@umix.uucp and Nathaniel Mishkin, Apollo's UNIX is
>not an emulation.  Apollo has a non standard kernel and so does Sun.
>I hold that if DOMAIN/IX is an emulation, SunOS is an emulation.
>
>Giebelhaus@hi-multics.arpa
>ihnp4!umn-cs!hi-csc!giebelhaus

There are several other comments throughout Timothy's message
similar to the above.  But there is a difference between being a
little non-standard and being wildly different.  The biggest
difference in the Sun kernel is in the windowing system device
driver, and associated kernel code.  Outside this realm, things
are pretty normal right down to the #ifdef VAX's.  Things are
standard enough inside the Sun kernel that I can take interesting
kernel hacks, like SLIP, and just about drop them in (modulo Sun's
silly bugs in their ioctl code).  A student of mine even took the
entire 4.3 networking code, XNS and all, and fitted it into the
Sun 2.0 release.  It more or less dropped in.  The Sun kernel is
sufficiently standard that one can benefit from net bug reports,
code postings, and general unix expertise in people.  This is a
TREMENDOUS advantage.  I believe that a large part of Unix's success
is that it was open and available for people to play with, learn
from, try porting to other machines, and build a community around.  There
is no such community around Aegis internals, except within Apollo.
-mark
-- 
Spoken: Mark Weiser 	ARPA:	mark@maryland	Phone: +1-301-454-7817
CSNet:	mark@umcp-cs 	UUCP:	{seismo,allegra}!umcp-cs!mark
USPS: Computer Science Dept., University of Maryland, College Park, MD 20742

csg@pyramid.UUCP (Carl S. Gutekunst) (10/31/86)

In article <4956@brl-smoke.ARPA> hi-csc!giebelhaus@umn-cs.arpa (Timothy R. Giebelhaus) writes:
>I don't know if there is anything official yet, but I'll bet anyone 
>who wants to take me up on it a beer that Apollo will offer NFS as a 
>product before the end of the year.  As I hear the rumor, there is a 
>royalty problem with Sun in providing NFS.  I seem to remember another
>vendor (possibly Piramid) having much the same problem.  

Nope. Pyramid was the very first vendor to sign up for NFS; there were never
royalty problems. Working with Sun has always been a pleasure; the company is
much too self confident to stoop to petty royalty squabbles.

An employee of Integrated Solutions Inc., a 680x0 workstation manufacturer in
San Jose, did grumble on the net about what they thought were interminable delays
in getting their NFS source. Not the license, but the physical release tape.
The implication was that Sun was stalling. I checked on that myself, and found
that the delays were legitimate. 

Sun has covered their investment another way, by refusing to license ND. This
means that Sun diskless workstations must have a Sun file server, at least for
boot and swapping. But Sun has already indicated that they will be phasing out
ND, in favor of giving NFS the necessary capabilities. I'm assuming -- I don't
know -- that Sun will license out the enhancements to NFS as they always have.

(Anyone for a diskless Pyramid? Sure.... :-) )

DISCLAIMER: These are entirely my own opinions, having nothing to do with
anything....

<csg>

I suppose that any society that has been as naughty as ours has been lately
can use a little hysterical paranoid fascism. What's to lose?

paul@umix.UUCP ('da Kingfish) (11/04/86)

In article <959@kbsvax.steinmetz.UUCP> barnettb@steinmetz.UUCP (Barnett Bruce G) writes:

>I know it's tiresome, but I am tired of people who tell me Aegis/AUX IS
>REAL Unix. How can it be? 

Ummm, I don't think Apollo has anything called AUX anymore.  They did
at one time, and it wasn't that hot, admittedly.  It certainly seems
that memories of AUX in some users' minds have become a real albatross
around Apollo's neck, and I hope people at least get the idea Apollo's
Unix product has matured and improved greatly since then.  It is at least
worth a fresh evaluation.

>
>For instance, if I wanted to write a program that would tell me what files were
>currently being accessed, and what disk they were on, 
...
>But AUX doesn't support nlist(3) or `/dev/kmem', so this can't be done.

It can't be done using nlist, that's right.  Apollo has a command that
lists locked objects, so I imagine that info is available somewhere.

>The point is, if someone is using a particular technique on a UNIX system,
>then I KNOW it can be done on a Sun or Vax. Also, if someone develops
>some clever technique (like device drivers that can be re-loaded dynamically,
>i.e. without rebuilding the operating system), it will most likely be developed
>on a Sun or Vax. 

Except they did it on an Apollo.

>I LIKE this feeling of confidence. I LIKE being able to use programs like
>William LeFebvre's top(1). I know UNIX and know that I can write some
>very powerful programs if the need comes up.

When you do, send them to me so I can run them on an Apollo. -:)  One
thing that is apparent is that if one requires a /dev/kmem, one won't
be happy without it.  For the things I do, I can live without it.

>Bruce Barnett

I hope those fellers at CMU don't make Mach (or whatever) too different!
Sorry (kind of) for continuing the verbosity here.

--paul