[comp.unix.admin] Killer Micro Question

randy@ms.uky.edu (Randy Appleton) (11/14/90)

I have been wondering how hard it would be to set up several 
of the new fast workstations as one big Mainframe.  For instance,
imagine some SPARCstations/DECstations set up in a row, and called
compute servers.  Each one could handle several users editing/compiling/
debugging on glass TTY's, or maybe one user running X.

But how does each user, who is about to log in, know which machine to
log into?  He ought to log into the one with the lowest load average, yet
without logging on he cannot determine which one that is.

What would be nice is to have some software running at each machine, maybe
inside of rlogind or maybe not, that would take a login request, and check
to see if the current machine is the least loaded machine.  If so, accept
the login, else re-route it to the least loaded one.
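
Short of hacking rlogind, a crude approximation would be a check run
from /etc/profile on each machine, along the lines of the sketch below
(the pool host names are made up, and the ruptime field positions may
need adjusting for your system):

    #! /bin/sh
    # Bounce the session if some other pool machine is less loaded.
    # Run from /etc/profile, not from inside rlogind (which would
    # need source changes).  POOL and the parsing are illustrative.

    POOL="cs1|cs2|cs3|cs4"
    me=`hostname`

    # ruptime lines look like
    #   cs1   up  12+03:41,  5 users,  load 0.15, 0.10, 0.05
    # so the 1-minute load is the third field from the end.
    best=`ruptime | egrep "^($POOL) " |
            awk '$2 == "up" { print $(NF-2)+0, $1 }' |
            sort -n | awk 'NR == 1 { print $2 }'`

    if [ -n "$best" -a "$best" != "$me" ]
    then
            exec rlogin $best
    fi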

It would not be good to have each packet come in over the company ethernet, 
and then get sent back over the ethernet to the correct machine.  That would 
slow down the machine doing the re-sending, cause unneeded delays in the
turn-around, and clog the ethernet.

Also, all of these machines should mount the same files (over NFS or
some such thing), so as to preserve the illusion that this is one big
computer, and not just many small ones.  But still, it would be best
to keep such packets off the company net.

One solution that directs logins to the least loaded machine, and keeps
the network traffic seen by the outside world down, is this one:

   Company Ethernet
--------------------------+------------------------------
                          |         
                          |  ---------------
                          ---| Login Server|----   ----------
                             ---------------   |  | Server1 |
                                    |          |-------------
                                 -------       |   ----------
                                 | Disk|       |---| Server2 |
                                 ------        |   ----------
                                               |        .
                                               |        .
                                               |   ----------
                                               |--| Server N|
                                                  ----------

The idea is that as each person logs into the Login Server, their login 
shell is actually a process that looks for the least loaded Server, and
rlogins them there.  This should distribute the load evenly
(well, semi-evenly) on all the servers.  Also, the login server could have 
all the disks, and the others would mount them, so that no matter what node
you got (which ought to be invisible) you saw the same files.
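
(Just to be concrete, the "login shell" itself could be as dumb as the
sketch below.  "pick-server" is an imaginary helper -- anything that
prints the name of the least loaded Server, say by digesting ruptime
output, would do -- and the server names are placeholders.)

    #! /bin/sh
    # Installed as each user's login shell on the Login Server.
    # Finds the least loaded compute server and drops the user there.

    server=`pick-server server1 server2 server3`

    if [ -z "$server" ]
    then
            echo "No compute server is answering; staying here." 1>&2
            exec /bin/csh
    fi

    exec rlogin $server

The exec means no extra shell hangs around on the Login Server, and the
fall-back to a real shell covers the case where every Server is down.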

The advantage is that this setup should be able to deliver a fair
number of MIPS to a large number of users at very low cost.  Ten SPARC
servers result in a 100 MIPS machine (give or take) and, at University
pricing, cost only about $30,000 (plus disks).  Compare that to the price
of a comparably sized IBM or DEC!

So my question is, do you think this would work?  How well do you
think this would work?  Do you think the network delays would be
excessive?

-Thanks
-Randy

P.S. I've sent this to several groups, so feel free to edit the
'Newsgroups' line.  But please keep one of comp.arch or comp.unix.large,
cause I read those.
-- 
=============================================================================
My feelings on George Bush's promises:
	"You have just exceeded the gulibility threshold!"
============================================Randy@ms.uky.edu==================

rcpieter@svin02.info.win.tue.nl (Tiggr) (11/14/90)

randy@ms.uky.edu (Randy Appleton) writes:

>But how does each user, who is about to log in, know which machine to
>log into?  He ought to log into the one with the lowest load average, yet
>without logging on cannot determine which one that is.

Andrew Tanenbaum et al. solved this problem by developing Amoeba, and
far more transparently than logging in to the server with the lowest
load average.

Tiggr

tarcea@vela.acs.oakland.edu (Glenn Tarcea) (11/14/90)

In article <16364@s.ms.uky.edu> randy@ms.uky.edu (Randy Appleton) writes:

#I have been wondering how hard it would be to set up several 
#of the new fast workstations as one big Mainframe.  For instance,
#imagine some SPARCstations/DECstations set up in a row, and called
#compute servers.  Each one could handle several users editing/compiling/
#debugging on glass TTY's, or maybe one user running X.
#
#But how does each user, who is about to log in, know which machine to
#log into?  He ought to log into the one with the lowest load average, yet
#without logging on cannot determine which one that is.
#

  This is not a direct answer to your question, but it may have some merit.
It sounds to me that what you are talking about is a lot like a VAXcluster.
I have often thought it would be nice to be able to cluster U**X systems
together. NFS is a nice utility, but it isn't quite what I am looking for.
  I also find it interesting that IBM has decided to go with the clustering
concept for their mainframes. Although it seems to me it would be a lot
cheaper and better for the customer to buy 10 $30,000 workstations and
cluster them together, rather than 3 $22,000,000 mainframes (run yeck! MVS)
and cluster them together.
  Perhaps if DEC can get a POSIX compliant VMS out we will be able to
cluster "U**X" systems.


  -- glenn

gdtltr@brahms.udel.edu (Gary D Duzan) (11/14/90)

In article <3849@vela.acs.oakland.edu> tarcea@vela.acs.oakland.edu (Glenn Tarcea) writes:
=>In article <16364@s.ms.uky.edu> randy@ms.uky.edu (Randy Appleton) writes:
=>
=>#I have been wondering how hard it would be to set up several 
=>#of the new fast workstations as one big Mainframe.  For instance,
=>#imagine some SPARCstations/DECstations set up in a row, and called
=>#compute servers.  Each one could handle several users editing/compiling/
=>#debugging on glass TTY's, or maybe one user running X.
=>#
=>#But how does each user, who is about to log in, know which machine to
=>#log into?  He ought to log into the one with the lowest load average, yet
=>#without logging on cannot determine which one that is.
=>#
=>
=>  This is not a direct answer to your question, but it may have some merit.
=>It sounds to me that what you are talking about is a lot like a VAXcluster.
=>I have often thought it would be nice to be able to cluster U**X systems
=>together. NFS is a nice utility, but it isn't quite what I am looking for.

   You might want to look into what research has been done in the area
of Distributed Operating Systems. Some of these use Unix(tm) and others
don't. My personal favorite (from reading up on it) is Amoeba, developed
by a group at VU in Amsterdam (including Dr. Tanenbaum, known for several
textbooks and Minix). It was built from the ground up, so it is not Unix,
but one of the first things built on top of it was a Unix system call
server which allowed for fairly easy porting of software.
   Anyway, many (if not most) DOS's will support some form of automatic
load balancing and possibly process migration (a hard thing to do in
general). As for which CPU to log onto, it doesn't matter; the DOS makes
the whole thing look like a single, large machine.
   For a more immediate solution, I put together a simple Minimum Load
Login server on a Sun 3 network here at the U of D (a real kludge, I
must say). To use it, one would "telnet fudge.it.udel.edu 4242". On
that port I have a program that searches a list of 3/60's and does an
"rup" on each one. It then tosses packets between the caller's port
and the least loaded CPU's telnet port. This is a horrible, ugly solution,
but it works, and one can always just kill the original telnet and
manually start a telnet to the machine selected by the MLL daemon.
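
For the curious, the "find the least loaded 3/60" half of that kludge
only takes a few lines of sh.  The sketch below is in the same spirit,
not the actual MLL code: the host names are made up, and the sed
assumes rup output ending in "... load average: 0.07, 0.00, 0.01".

    #! /bin/sh
    # Print the least loaded of a fixed list of hosts, using rup.

    HOSTS="sun1 sun2 sun3 sun4"

    for h in $HOSTS
    do
            # emit "1-minute-load hostname" for each host that answers
            load=`rup $h 2>/dev/null |
                    sed -n 's/.*load average: *//p' | sed 's/,.*//'`
            case "$load" in
            '')     ;;                      # down, or no rstatd
            *)      echo "$load $h" ;;
            esac
    done | sort -n | awk 'NR == 1 { print $2 }'

The packet-tossing daemon can sit on top of something like this, or one
can simply telnet by hand to whatever name it prints.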

                                        Gary Duzan
                                        Time  Lord
                                    Third Regeneration



-- 
                            gdtltr@brahms.udel.edu
   _o_                      ----------------------                        _o_
 [|o o|]        An isolated computer is a terribly lonely thing.        [|o o|]
  |_O_|         "Don't listen to me; I never do." -- Doctor Who          |_O_|

mpledger@cti1.UUCP (Mark Pledger) (11/14/90)

Isn't this what symmetrical multiprocessing is all about?



-- 
Sincerely,


Mark Pledger

--------------------------------------------------------------------------
CTI                              |              (703) 685-5434 [voice]
2121 Crystal Drive               |              (703) 685-7022 [fax]
Suite 103                        |              
Arlington, DC  22202             |              mpledger@cti.com
--------------------------------------------------------------------------

rickert@mp.cs.niu.edu (Neil Rickert) (11/14/90)

In article <3849@vela.acs.oakland.edu> tarcea@vela.acs.oakland.edu (Glenn Tarcea) writes:
>
>  I also find it interesting that IBM has decided to go with the clustering
>concept for their mainframes. Although it seems to me it would be a lot
>cheaper and better for the customer to buy 10 $30,000 workstations and
>cluster them together, rather than 3 $22,000,000 mainframes (run yeck! MVS)
>and cluster them together.

 Suppose you wanted a system to manage huge databases.  You needed strong
integrity controls for concurrent database updates.  You needed to access the
data in a huge room packed to the gills with disk drives.  You needed to be
able to access the same data from any CPU in the system.  You couldn't
tolerate the performance hit of the bottleneck caused by pumping all the data
down an ethernet.

 You just might find the mainframes a better solution than the workstations.

 IBM didn't get that big by ignoring its customers' needs and forcing them to
buy an excessively expensive and underperforming system.  Instead they carefully
monitored those needs, and evolved their hardware and software to meet them.

-- 
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
  Neil W. Rickert, Computer Science               <rickert@cs.niu.edu>
  Northern Illinois Univ.
  DeKalb, IL 60115.                                  +1-815-753-6940

suitti@ima.isc.com (Stephen Uitti) (11/15/90)

In article <16364@s.ms.uky.edu> randy@ms.uky.edu (Randy Appleton) writes:
>I have been wondering how hard it would be to set up several 
>of the new fast workstations as one big Mainframe.  For instance,
>imagine some SPARCstations/DECstations set up in a row, and called
>compute servers.  Each one could handle several users editing/compiling/
>debugging on glass TTY's, or maybe one user running X.

Harvard did this with slower workstations, a few years ago.  The
first step was to set up several machines to look like each other
as much as possible.  NFS was used.

>But how does each user, who is about to log in, know which machine to
>log into?  He ought to log into the one with the lowest load average, yet
>without logging on cannot determine which one that is.

These machines were accessed via serial terminals.  The tty lines
came in on a terminal switch.  The initial arrangement was to have
a "class" connect you to a random machine.  Each machine would have
several users at any given time, and the loads tended to be pretty
similar.

Dan's "queued" utility was used to provide some load balancing.
This system allows you to arrange for individual programs to be
executed elsewhere dependent on load average, or whatever.  This
package has been posted to source groups, and probably is
available via ftp.

However, even this doesn't distribute the load of user keystrokes
causing interrupts and context switches, especially in editors.
There was some talk of making more sophisticated use of the
terminal switch.  There was a control port to the switch.  If
this were connected to a host somewhere, then one could arrange
for the next connection to be made to a particular machine.

>What would be nice is to have some software running at each machine, maybe
>inside of rlogind or maybe not, that would take a login request, and check
>to see if the current machine is the least loaded machine.  If so, accept
>the login, else re-route it to that machine.

It could be done in rlogin - check the rwho (ruptime) database.
If you have sources, this isn't that hard.

>It would not be good to have each packet come in over the company ethernet, 
>and then get sent back over the ethernet to the correct machine.  That would 
>slow down the machine doing the re-sending, causes un-needed delays in the
>turn-around, and clog the ethernet.

Packet forwarding isn't that bad.  The overhead is low compared
to, say, NFS.

>Also, all of these machines should mount the same files (over NFS or
>some such thing), so as to preserve the illusion that this is one big
>computer, and not just many small ones.  But still, it would be best 
>to keep such packets off the company net.

An ethernet bridge can help here.  Isolate the hog.

                +--------+
A --------------+ bridge +------------- B
                +--------+

hosts on either side of the bridge can talk to each other as if
they were on the same wire.  Packets on the A side bound to hosts
on the A side aren't forwarded to the B side (and the same is true
on the other side).  Broadcast packets are always forwarded.

>One solution that directs logins to the least loaded machine, and keeps
>the network traffic shown to the outside world down, is this one:
>
>   Company Ethernet
>--------------------------+------------------------------
>                          |         
>                          |  ---------------
>                          ---| Login Server|----   ----------
>                             ---------------   |  | Server1 |
>                                    |          |-------------
>                                 -------       |   ----------
>                                 | Disk|       |---| Server2 |
>                                 ------        |   ----------
>                                               |        .
>                                               |        .
>                                               |   ----------
>                                               |--| Server N|
>                                                  ----------
>
>The idea is that as each person logs into the Login Server, their login 
>shell is actually a process that looks for the least loaded Server, and 
>rlogin's them to there.  This should distribute the load evenly 
>(well, semi-evenly) on all the servers.  Also, the login server could have 
>all the disks, and the others would mount them, so that no matter what node
>you got (which ought to be invisible) you saw the same files.

What I don't understand is how you intend to implement this.
Is the Login Server a general purpose system, or something that
serves terminals?  You can get boxes that hook into ethernet
that do this.

>The advantage is that this setup should be able to deliver a fair 
>number of MIPS to a large number of users at very low cost.  Ten SPARC
>servers results in a 100 MIPS machine (give or take) and at University
>pricing, is only about $30,000 (plus disks).  Compare that to the price
>of a comparably sized IBM or DEC!

Plus disks?  Without any I/O of any kind, 100 MIPS is of very
little use.  For $3K, you can get a 486 box of similar
performance.  For an extra $500, maybe they'll give you one
without RAM.  Seriously, adding up the costs for the naked CPU
chips gets you nowhere.  Add up the whole-system costs, and
compare system features.  For example, having one server and
three diskless nodes may be slower and more expensive than three
diskfull nodes - even though the nodes are the same in both systems.
The diskfull nodes may end up with slightly more or less disk
for users.

>So my question is, do you think this would work?  How well do you
>think this would work?  Do you think the network delays would be
>excessive?

Sure, it would work.  The key for network delays is really human
factors.  A delay of one second is painful.  Ten delays of one
tenth of a second may not be.  A longer delay can be tolerated
if it is more consistent.

The Harvard approach has several advantages:

1. It is incremental.
   They bought the hardware a little at a time.  Set each system
   up & put it into production.  As the load got worse, they could
   add new systems.  The system started with one CPU.

2. The load balancing has pretty low overhead.

3. The load balancing system was incrementally implemented.
   There was very little delay getting it going.

4. No computers were special cased beyond what data was on their
   disks, and what peripherals happened to be attached.

5. When network loads became high, a bridge was installed.

6. Redundancy of hardware and software allowed tuning the
   reliability of the overall system.

Some of the disadvantages:

1. The system becomes more complicated as more systems are
   added.  I've never seen so many symbolic links.

2. It is highly dependent on NFS.  NFS does not preserve UNIX
   filesystem semantics well.  The NFS code that was available
   three years ago caused machines to crash daily.  This has since
   improved.

3. NFS has noticeable latency.  NFS has noticeable additional
   overhead.  It is far faster and more consistent to use
   local disk.  The efficiency is not as high as you'd like,
   and the speed is not as high as you'd like.

4. It is easy to create single points of failure that aren't apparent
   until they bite you.  Obviously, if /usr/bin only exists on one
   system and that system goes down, everything is pretty useless.

One system I've used that seems to solve most of these problems
is the Sequent.  The symmetric multiprocessor gives you fine
grained dynamic load balancing, and incremental expansion.  You
also get the ability to run a single application quickly, if it
happens to be easily parallelizable.  The administrative overhead
is low, and does not expand with processors.  There seems to be
no penalty for shared disk (no NFS is required).  Its biggest
disadvantage is that the minimum cost is higher.

Naturally, the best solution is the one that the office here is
using: everyone gets their own hardware.  If a system goes down,
most everyone else continues as if nothing is wrong.  System
response is consistent.  If you need data from some other host,
it is available via the network - NFS or whatever.  Since they
are 386 and 486 systems running our OS and software, DOS stuff
runs too.  Thus, I'd recommend getting lots and lots of 386 or
486 systems, with lots of RAM, disk, tape units, network cards,
big screens, and of course, INTERactive UNIX.

Stephen Uitti
suitti@ima.isc.com
Interactive Systems, Cambridge, MA 02138-5302
--
"We Americans want peace, and it is now evident that we must be prepared to
demand it.  For other peoples have wanted peace, and the peace they
received was the peace of death." - the Most Rev. Francis J. Spellman,
Archbishop of New York.  22 September, 1940

suitti@ima.isc.com (Stephen Uitti) (11/15/90)

In article <1990Nov14.154322.8894@mp.cs.niu.edu> rickert@mp.cs.niu.edu (Neil Rickert) writes:
>In article <3849@vela.acs.oakland.edu> tarcea@vela.acs.oakland.edu (Glenn Tarcea) writes:
>>
>>  I also find it interesting that IBM has decided to go with the clustering
>>concept for their mainframes. Although it seems to me it would be a lot
>>cheaper and better for the customer to buy 10 $30,000 workstations and
>>cluster them together, rather than 3 $22,000,000 mainframes (run yeck! MVS)
>>and cluster them together.
>
> Suppose you wanted a system to manage huge databases.  You needed strong
>integrity controls for concurrent database updates.  You needed to access the
>data in a huge room packed to the gills with disk drives.  You needed to be
>able to access the same data from any CPU in the system.  You couldn't
>tolerate the performance hit of the bottleneck caused by pumping all the data
>down an ethernet.
>
> You just might find the mainframes a better solution than the workstations.

I'm not convinced that the ethernet would be the bottleneck.  If it
were, the net could be partitioned.  If you can get the transaction
rates high enough for the individual machines, and if your data can
be partitioned properly, and if your plan allows easy expansion with
new nodes, the cost effectiveness of the workstations may give them
an edge.  I wouldn't use a million Commodore C64's, though.

You might find a single Connection Machine to be better than any
of the above.  If you have a single large application, a single
high-end weird machine may be the answer.  A collection of
commodity disks on a CM can give you gigabytes of data access
with amazing bandwidth.  There is probably enough CPU for you too.

If you have hundreds of little jobs, I'd go with hundreds of smaller
processors.

> IBM didn't get that big by ignoring its customers' needs and
> forcing them to buy an excessively expensive and underperforming
> system.  Instead they carefully monitored those needs, and
> evolved their hardware and software to meet them.

I'm convinced that they got where they are mainly through
marketing - something I'd recommend to anyone with the bucks.

Stephen Uitti.
suitti@ima.isc.com

"We Americans want peace, and it is now evident that we must be prepared to
demand it.  For other peoples have wanted peace, and the peace they
received was the peace of death." - the Most Rev. Francis J. Spellman,
Archbishop of New York.  22 September, 1940

bzs@world.std.com (Barry Shein) (11/15/90)

> Suppose you wanted a system to manage huge databases.  You needed strong
>integrity controls for concurrent database updates.  You needed to access the
>data in a huge room packed to the gills with disk drives.  You needed to be
>able to access the same data from any CPU in the system.  You couldn't
>tolerate the performance hit of the bottleneck caused by pumping all the data
>down an ethernet.

No, but FDDI runs at near bus speeds, so you're talking old technology
for an application like this. IBM/370 drives run at about 5 MB/s max,
not much of a challenge for fiber, tho the software can be challenging
for other reasons. Most of the advantage of IBM mainframes is that
they *are* a distributed operating system: disk channels have
significant processing power. One can do things like send a disk
channel off looking for a database record and go back to something
else until the answer is found. So we're sort of talking in circles
here.

> IBM didn't get that big by ignoring its customers' needs and forcing
>them to buy an excessively expensive and underperforming system.
>Instead they carefully monitored those needs, and evolved their
>hardware and software to meet them.

That's an interesting theory.
-- 
        -Barry Shein

Software Tool & Die    | {xylogics,uunet}!world!bzs | bzs@world.std.com
Purveyors to the Trade | Voice: 617-739-0202        | Login: 617-739-WRLD

ian@sibyl.eleceng.ua.OZ (Ian Dall) (11/15/90)

In article <16364@s.ms.uky.edu> randy@ms.uky.edu (Randy Appleton) writes:
>But how does each user, who is about to log in, know which machine to
>log into?  He ought to log into the one with the lowest load average, yet
>without logging on cannot determine which one that is.

I do just that! I have a little shell script called "least-loaded"
which grunges through the output of ruptime. So when X starts up it
does "rsh `least-loaded <list of servers>` ...." to start my clients.
I also do this when I have a compute bound job to run.
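
Something along these lines is all least-loaded really needs to be
(just a sketch, and the awk assumes the usual ruptime line format,
"host  up  12+03:41,  5 users,  load 0.15, 0.10, 0.05", so the
1-minute load is the third field from the end):

    #! /bin/sh
    # least-loaded host1 host2 ...
    # Print the name of the up host with the smallest 1-minute load
    # average, chosen from the hosts given as arguments.  All of them
    # need to be running rwhod so that ruptime knows about them.

    hosts="$*"

    ruptime | awk '
    BEGIN   { picked = 0
              n = split("'"$hosts"'", h)
              for (i = 1; i <= n; i++) want[h[i]] = 1 }
    $2 == "up" && want[$1] {
            load = $(NF-2) + 0
            if (!picked || load < best) { best = load; pick = $1; picked = 1 }
    }
    END     { if (picked) print pick }'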

The only catch is that all servers need to be capable of running your
job. We have several servers set up with NFS cross mounting so they
are *almost* identical. You can get caught out sometimes though.
Also, NFS imposes an additional overhead. Running a compute bound
process this way is fine, running an IO bound process this way might
be a bad idea if the disk it accesses is physically on another server.

In short, I think it is a good idea, but it needs a more efficient
distributed file system before I would want to release it on Joe User.
It would be really nice to have a distributed OS which was able to
migrate IO bound processes to minimise network traffic and migrate
cpu bound processes to the least loaded machine. Dream on!

-- 
Ian Dall     life (n). A sexually transmitted disease which afflicts
                       some people more severely than others.       

ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) (11/15/90)

In article <16364@s.ms.uky.edu>, randy@ms.uky.edu (Randy Appleton) writes:
> I have been wondering how hard it would be to set up several 
> of the new fast workstations as one big Mainframe.
> The advantage is that this setup should be able to deliver a fair
> number of MIPS to a large number of users at very low cost.
> So my question is, do you think this would work?  How well do you
> think this would work?

Yes it did work.  Cambridge University had a wall full of 68000s
doing this sort of thing about 7 years ago (it had been running for
some time, that's just when I saw it).
-- 
The problem about real life is that moving one's knight to QB3
may always be replied to with a lob across the net.  --Alasdair Macintyre.

buck@siswat.UUCP (A. Lester Buck) (11/16/90)

In article <16364@s.ms.uky.edu>, randy@ms.uky.edu (Randy Appleton) writes:
> I have been wondering how hard it would be to set up several 
> of the new fast workstations as one big Mainframe.
> 
> The idea is that as each person logs into the Login Server, their login 
> shell is actually a process that looks for the least loaded Server, and
> rlogin's them to there.

At least one instantiation of exactly this scheme is in the works.  The
Superconducting Supercollider experiment design laboratory put out a Request
For Proposal (one of many to come) six months ago that called for such a
pool of interactive server machines with a login server to route the initial
connection.  I haven't followed if the contract has been awarded yet.

The SSC would be a great place for a truly distributed OS like
Amoeba, but their RFP for a batch farm specifically called for
tightly coupled shared memory multiprocessors.  Maybe they will
wake up someday and check out the newer technology.

-- 
A. Lester Buck    buck@siswat.lonestar.org  ...!uhnix1!lobster!siswat!buck

eugene@nas.nasa.gov (Eugene N. Miya) (11/16/90)

In article <16364@s.ms.uky.edu> randy@ms.uky.edu (Randy Appleton) writes:
TOO MANY NEWS GROUPS....
>I have been wondering how hard it would be to set up several 
>of the new fast workstations as one big Mainframe.

Physically: not very hard, these days.  Software will be your problem.
You must be an aspiring hardware type 8^).  You would have to totally
rewrite many existing applications, a few from scratch.  You are
making the assumption that the sum of workstations == a mainframe.
This may not be the case.  [Gestalt: the whole is great than the sum of
the parts.]  Perhaps adding computation late in the
game will only make the computation slower (later).  This will depend
very much on the nature of the application: language, algorithm,
communication cost, etc.  Seek and ye shall find some hypercubes.

>But how does each user, who is about to log in, know which machine to
>log into?  He ought to log into the one with the lowest load average, yet
>without logging on cannot determine which one that is.

>some such thing), so as to preserve the illusion that this is one big
					 ^^^^^^^^ sometimes a dangerous word.

>What would be nice ....
>The idea is that as each person logs into the Login Server, their login 
>shell is actually a process that looks for the least loaded Server, and 
>rlogin's them to there.  This should distribute the load evenly 
>(well, semi-evenly) on all the servers.
.....
>you got (which ought to be invisible) you saw the same files.
			    ^^^^^^^^^ [aka transparent]
This is a dream which has existed since the 1960s with the ARPAnet.
It has seen limited implementation.  There are many subtle problems,
but it interests a lot of people: load oscillation effects,
reliability, error handling, etc.  Performance gains are needed to
solve future problems.

>So my question is, do you think this would work?  How well do you
>think this would work?  Do you think the network delays would be excessive?

Network delays are probably the least of your problems.  Again, it will
work for limited applications.  The problem is basically a software
problem.  If you don't have it, it won't work.  Depends on your basic
building blocks (some of OUR applications require 128-bit
floating-point).  I have an officemate who is absolutely fed up
debugging distributed applications (graphics).

  Neil W. Rickert writes:

| Suppose you wanted a system to manage huge databases.
|You just might find the mainframes a better solution than the workstations.
|
| IBM didn't get that big by ignoring its customers' needs and forcing them to
|buy an excessively expensive and underperforming system.  Instead they
|carefully monitored those needs, and evolved their hardware and software
|to meet them.

Sure, for some DBMS applications (and systems), but you must be posting
from an IBM system 8^).  Many workstations are not excessively
expensive, and quite a few are very fast, even faster than mainframes.
But, please, if you wish, stick with dinosaurs.

--e. nobuo miya, NASA Ames Research Center, eugene@orville.nas.nasa.gov
  The Other Eugene.
  {uunet,mailrus,other gateways}!ames!eugene

zs01+@andrew.cmu.edu (Zalman Stern) (11/17/90)

randy@ms.uky.edu (Randy Appleton) writes:
> I have been wondering how hard it would be to set up several 
> of the new fast workstations as one big Mainframe.  For instance,
> imagine some SPARCstations/DECstations set up in a row, and called
> compute servers.  Each one could handle several users editing/compiling/
> debugging on glass TTY's, or maybe one user running X.

This has been done with mostly "off the shelf" technology at Carnegie
Mellon. The "UNIX server" project consists of 10 VAXstation 3100's (CVAX
processors) with a reasonable amount of disk and memory. These are provided
as a shared resource to the Andrew community (three thousand or more users
depending on who you talk to). The only other UNIX systems available to
Andrew users are single login workstations. (That is, there is nothing
resembling a UNIX mainframe in the system.)

> 
> But how does each user, who is about to log in, know which machine to
> log into?  He ought to log into the one with the lowest load average, yet
> without logging on cannot determine which one that is.

For the UNIX servers, there is code in the local version of named (the
domain name server) which returns the IP address of the least loaded server
when asked to resolve the hostname "unix.andrew.cmu.edu". (The servers are
named unix1 through unix10.) I believe a special protocol was developed
for named to collect load statistics but I'm not sure. As I recall, the
protocol sends information about the CPU load, number of users, and virtual
memory statistics.

Note that all asynch and dialup lines go through terminal concentrators
(Annex boxes) onto the CMU internet.

> [Stuff deleted.]
> ...  Also, the login server could have 
> all the disks, and the others would mount them, so that no matter what node
> you got (which ought to be invisible) you saw the same files.

Andrew uses Transarc's AFS 3.0 distributed filesystem product to provide
location transparent access to files from any workstation or UNIX server in
the system. There are other problems which are solved via AFS components as
well.  For example, traditional user name/password lookup mechanisms fail
badly when given a database of 10,000 registered users. AFS provides a
mechanism called the White Pages for dealing with this. (Under BSD UNIX,
one can use dbm based passwd files instead.)

If you want more info on the UNIX server project, send me mail and I'll put
you in touch with the appropriate people. Detailed statistics are kept on
the usage of these machines. Using these numbers, one could probably do
some interesting cost/performance analysis.

> 
> The advantage is that this setup should be able to deliver a fair
> number of MIPS to a large number of users at very low cost.  Ten SPARC
> servers results in a 100 MIPS machine (give or take) and at University
> pricing, is only about $30,000 (plus disks).  Compare that to the price
> of a comparably sized IBM or DEC!
> 
> So my question is, do you think this would work?  How well do you
> think this would work?  Do you think the network delays would be
> excessive?

Yes, what you describe could be easily done with ten SPARCstations and a
small amount of software support. It is not clear that it is useful to
compare the performance of such a system to that of a mainframe though. It
depends a lot on what the workload is like. Also, there are other points in
the problem space. Three interesting ones are tightly coupled
multi-processors (SGI, Encore, Pyramid, Sequent, Solbourne), larger UNIX
server boxes (MIPS 3260s or 6280s, IBM RS/6000s, faster SPARCs, HP, etc.),
and 386/486 architecture commodity technology (PC clones, COMPAQ
SystemPRO). Certainly, DEC VAXen and IBM 370s do not provide cost effective
UNIX cycles but that is not the market for that type of machine.  Intuition
tells me that the best solution is very dependent on your workload and the
specific prices for different systems.

Zalman Stern, MIPS Computer Systems, 928 E. Arques 1-03, Sunnyvale, CA 94086
zalman@mips.com OR {ames,decwrl,prls,pyramid}!mips!zalman

rsalz@bbn.com (Rich Salz) (11/20/90)

In <1572@svin02.info.win.tue.nl> rcpieter@svin02.info.win.tue.nl (Tiggr) writes:
>Andrew Tanenbaum et al solved this problem by developing Amoeba.
Some solution.  Throw out much of your software, and lots of your hardware
(if you want the much-touted Amoeba speed) and run a clever O/S unlike
anything commercially available.

Heterogeneity is the wave of the future.
	/r$
-- 
Please send comp.sources.unix-related mail to rsalz@uunet.uu.net.
Use a domain-based address or give alternate paths, or you may lose out.

alanw@ashtate (Alan Weiss) (12/15/90)

In article <16364@s.ms.uky.edu> randy@ms.uky.edu (Randy Appleton) writes:
>I have been wondering how hard it would be to set up several 
>of the new fast workstations as one big Mainframe.  For instance,
>imagine some SPARCstations/DECstations set up in a row, and called
>compute servers.  Each one could handle several users editing/compiling/
>debugging on glass TTY's, or maybe one user running X.
>
>But how does each user, who is about to log in, know which machine to
>log into?  He ought to log into the one with the lowest load average, yet
>without logging on cannot determine which one that is.
.......

You are referring to Process Transparency (which actually can be
implemented at the task, process, or thread level).  The leaders
in this kind of work are Locus Computing Corp. in Inglewood and
Freedomnet in North Carolina.  LCC's product, the Locus Operating
System, formed the basis for IBM Corp.'s Transparent Computing
Facility (TCF), which allowed for a distributed, heterogeneous,
transparent filesystem AND process system.  It was first implemented
back in the early 1980's on VAXen, Series 1's, and 370's.  The first
commercial products, AIX/370 and AIX/PS/2 (386), offer TCF.

(I used to work for Locus, but have no commercial interest currently).

For more information, contact IBM or Locus Computing - 213-670-6500,
and mention TCF.

As an aside, watching processes execute with or without user knowledge
as to execution location, as well as watching processes migrate
while executing to other systems, is Neat.  Wanna run that
geological survey program?  Let /*fast*/ find the quickest site!


.__________________________________________________________.
|-- Alan R. Weiss --     | These thoughts are yours for the| 
|alanw@ashton		 | taking, being generated by a    |
|alanw@ashtate.A-T.COM	 | failed Turing Test program.     |
|!uunet!ashton!alanw	 |---------------------------------|
|213-538-7584		 | Anarchy works.  Look at Usenet! |
|________________________|_________________________________|