[comp.arch] X-terms v. PCs v. Workstations

bzs@world.std.com (Barry Shein) (11/25/89)

I love when technically-oriented people start arguing about the
relative merits of X-terms, PC's, workstations and centralized
facilities.

The technology involved is necessary, but hardly sufficient. Once the
various technologies get "good enough" (a term technical people often
find hard to grasp since they usually define "good enough" as "that
which I want to buy next") the issues broaden.

The most important issues quickly become politics and administration.
Who tells me what I can do with my system, who administers it?

If people are resisting going to X-terms rather than PCs/workstations
even though one keeps showing them that technically the former are as
good or better, it might be because they are sick of the rules of the
centralized computer. For example, 80MB disks are cheap-o things these
days; how many of you folks who run time-sharing shudder at the
thought of users who would use 80MB on your central system? Just about
all of you. Yet in the PC world that much disk space is peanuts. How
many of your systems limit users to, say, 1MB or so of disk space?
That's not even one whole 3 1/2" floppy.

Ok, a lot of that 80MB ends up "wasted" because of duplication of
software and libraries. So what, replace it in the argument with 40MB
user space on your system, or a 200MB disk drive and 100+MB user
space, still cheap-o. You can run, but you can't hide.

And who's going to back it up and all that? Oh, let us *do* wring our
hands and act o-so-concerned for *them*. It's a new experience!

Well, in a lot of cases, who cares. Escaping out from under
centralized tyranny is more important (at the moment of decision) than
who's going to make the trains run on time once you're free. Put your
important data files to floppies or cartridge tapes (it's easy, don't
need $100K worth of operators to do that) and pray for the best.

That's why so many centralized facilities are jumping at becoming
network administrators and proselytes of X-terminals.

Re-centralization of authority, yum-yum.

Not that there's anything wrong with X-terminals, I like them, but
let's be honest about motives: How ya gonna keep them all down on the
farm once they've been to Paree'?

Integrated, multi-level computer architectures are the ultimate
answer, not hidden agendas.
-- 
        -Barry Shein

Software Tool & Die, Purveyors to the Trade         | bzs@world.std.com
1330 Beacon St, Brookline, MA 02146, (617) 739-0202 | {xylogics,uunet}world!bzs

jdd@db.toronto.edu (John DiMarco) (11/28/89)

This posting is more than 140 lines long. 

A centralized authority -- if it is responsive to the needs of its users --
has the capability to offer better facilities and support at a lower price.

Just consider some of the relevant issues:

	Resource duplication: If every group must purchase its own resources,
			      resources which are used only occasionally will
	either be unavailable because no small group can afford the requisite
	cost (eg. Phototypesetters, supercomputers, Put_your_favourite_expen-
	sive_doodad_here), or be duplicated unnecessarily (eg. laser printers,
	scanners, put_your_favourite_less_expensive_doodad_here). A centralized
	authority can provide access to computing resources which are
	otherwise unavailable, and can provide more economical access to 
	resources which would otherwise be unnecessarily duplicated.

	Maximum single-point usage: If each group must purchase its own 
				    computing equipment, at no point in time
	can any group utilize more computing resources than that group owns.
	But in a centralized environment, the maximum amount of computing
	resources available to any one group increases to the total computing
	resources available to the centralized authority, a much greater
	amount. If you have a distributed computing environment, imagine
	putting all the memory and CPUs of your workstations into one massive
	multiprocessing machine, for example. Imagine if your group could
	use the idle cycles of the group down the hall. Wouldn't that be nice?

	Security: It is much easier to keep a centralized computing environment
		  secure than a distributed one. Responsibilities are clearer,
	superuser privileges are better defined, and response time is better.
	Imagine having to respond to a security threat when you have to 
	notify umpteen million sysadmins, all of whom have to respond correctly
	to eliminate the threat, but none of whom are responsible to any 
	central authority. Imagine not knowing who is in charge of each 
	machine.

	Expertise: Distributed sites tend to have too many people playing
		   at system administration in their own little fiefdoms, 
	few of whom know what they are doing. (But guess who goes screaming
	to whom when something goes wrong...) In a centralized environment,
	it is much easier to ensure that the people who are in charge of
	the computers are competent and capable.
	
	Emergencies: If something goes wrong in a centralized system, it is
		     invariably obvious who should be called. Lots of highly
	qualified people will jump on the problem and fix it PDQ. If something
	goes wrong on some little group's machine, who do you call? It's often
	not clear. And who will fix the problem? That's often not clear either.
	Frequently the problem doesn't get fixed quickly. 
	
	Backups: They're a pain to do. Why not have a centralized authority
		 do backups for everybody automatically, rather than have
	everybody worry about their own backups? Otherwise someone, somewhere
	will be lazy and/or make a mistake and lose something crucial.

	Complexity: Who's going to keep track of a big distributed network
		    mishmash with no central authority? Who's going to answer
	the question "How do I get there from here?" if getting there from
	here involves passing through any number of un-cooperative little
	domains. Who's going to track down the machine which throws bogus 
	packets onto the network, fouling up all sorts of other machines? 
	In a centralized environment, things are generally less complex, 
	and those in charge have a much better understanding of the whole
	shebang.

	Downtime: Centralized computing authorities tend to do their
		  best to keep their machines running all the time. And 
	they generally do a good job at it, too. If a central machine goes
	down, lots of good, qualified people jump into the fray to get the
	machine up again. This doesn't usually happen in a distributed
	environment. 

	Maintenance: More things go wrong with many little machines than
		     with few big ones, because there are so many more machines 
	around to fail. Repair bills? Repair time? For example, I'd be 
	surprised if the repair/maintenance cost for a set of 100 little SCSI 
	drives on 100 different workstations is less than for half-a-dozen 
	big SMD drives on one or two big machines, per year.

	Compatibility: If group A gets machine type X and group B gets machine
		       type Y, and they subsequently decide to work together
	in some way, who is going to get A's Xs and B's Ys talking together? 

These are some very good reasons to favour a centralized computing authority.


bzs@world.std.com (Barry Shein) writes:

>The most important issues quickly become politics and administration.
>Who tells me what I can do with my system, who administers it?

Yes, sometimes politics can override technology in system implementation
decisions. If politics are a problem, fix it. But politics differs from
organization to organization, so any politically-motivated system decision 
at one organization will most probably not be applicable to another.

> [ Barry writes about how centralized authorities limit users to very little
>   disk space, when these very users can buy cheap disks which gives them
>   all the disk space they want. ]

If a centralized computing authority is not responsive to the needs of its
users, it's got a problem. If users need more disk space, they should get it.
A centralized computing authority's sole raison d'etre is to serve its users.
There's no reason why a centralized computing authority can't buy cheap SCSI
disks, for example, and hang them off one of their central machines, if that
is what the users need. A sick centralized computing authority which is
not responsive to user needs should be cured, not eliminated.
If you're stuck with a defective centralized computing authority, then
perhaps a move to a distributed computing environment could be justified.
Nevertheless, IMHO, a distributed computing environment is still inferior to a 
well-run centralized environment.

>Well, in a lot of cases, who cares. Escaping out from under
>centralized tyranny is more important (at the moment of decision) than
>who's going to make the trains run on time once you're free. Put your
>important data files to floppies or cartridge tapes (it's easy, don't
>need $100K worth of operators to do that) and pray for the best.

And enjoy the disasters. These come thick and heavy when every Tom, Dick, and
Harry (Tammy, Didi, and Harriet?) tries to run his (her) own computing system.

>That's why so many centralized facilities are jumping at becoming
>network administrators and proselytes of X-terminals.

Maybe because they're sick of trying to get people out of messes? People
who are over their heads in complexity? People who are panicky, worried, 
and desperate?

>Not that there's anything wrong with X-terminals, I like them, but
>let's be honest about motives: How ya gonna keep them all down on the
>farm once they've been to Paree'?

That's not a problem. They'll all be a'streamin back from Paree' wid nuttin in
their pockets an' wid bumps an' bruises all over. 

>        -Barry Shein
>Software Tool & Die, Purveyors to the Trade         | bzs@world.std.com
>1330 Beacon St, Brookline, MA 02146, (617) 739-0202 | {xylogics,uunet}world!bzs

John
--
John DiMarco                   jdd@db.toronto.edu or jdd@db.utoronto.ca
University of Toronto, CSRI    BITNET: jdd%db.toronto.edu@relay.cs.net
(416) 978-8609                 UUCP: {uunet!utai,decvax!utcsri}!db!jdd

quiroz@cs.rochester.edu (Cesar Quiroz) (11/28/89)

In <1989Nov27.144016.23181@jarvis.csri.toronto.edu>, jdd@db.toronto.edu (John DiMarco) wrote:
| A centralized authority -- if it is responsive to the needs of its users --
| has the capability to offer better facilities and support at a lower price.

Your choice of name (centralized AUTHORITY) actually says a lot.
Centralized resources are naturally seen as loci of power, not as
sources of service.  If you can fix the human tendency to build
empires, you can make centralized resources more palatable to those
who would otherwise prefer not to be bothered with asking you for
permissions.  

Other than that, here go some alternative views on one of the
supporting arguments.

| 	Maximum single-point usage: If each group must purchase its own 
| 				    computing equipment, at no point in time
| 	can any group utilize more computing resources than that group owns.
| 	But in a centralized environment, the maximum amount of computing
| 	resources available to any one group increases to the total computing
| 	resources available to the centralized authority...

Maybe true of highways, not true of computers.  It is not at all
clear that just because you need extra cycles and they are free
after sunset you will be graciously granted those.  You may have to
beg to be permitted in the building after dark, etc...

| 	Imagine if your group could use the idle cycles of the group
| 	down the hall.  Wouldn't that be nice?

Sure.  Just cut the middleman and talk to the people down the hall.
You wouldn't suggest imposing on them, right?  And they may want to
have a chance at sharing your coffeemaker, or have a quick run on
your laserprinter.  Why do you need to introduce a third,
unproductive, party in this nice scenario?

Some of your scenarios oppose a benevolent tyranny that controls all
the expertise against a decadent anarchy where no one has enough
smarts to tie his own shoes.  There are intermediate steps in
between, you know.  Like places where the staff responds to the
users, instead of the other way around.  Also, distributed
responsibility for your own resources does not preclude a
cooperative condition, open to sharing when so needed.


-- 
                                      Cesar Augusto Quiroz Gonzalez
                                      Department of Computer Science
                                      University of Rochester
                                      Rochester,  NY 14627

davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (11/28/89)

  We have cpu servers and file servers. Those are centralized resources.
The users run on PCs, terminals (not just X), workstations, etc. The
user can have local control and resources for frequent tasks, while
keeping the economy of scale which comes from a server.

  Sharing of the local resources is done on a personal request basis,
rather than being ordained by some central authority. Usually this takes
the form of A letting B run something at low priority.

  Most high cost peripherals are attached to a shared machine.
-- 
bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
"The world is filled with fools. They blindly follow their so-called
'reason' in the face of the church and common sense. Any fool can see
that the world is flat!" - anon

ge@kunivv1.sci.kun.nl (Ge' Weijers) (11/28/89)

jdd@db.toronto.edu (John DiMarco) writes:

>A centralized authority -- if it is responsive to the needs of its users --
>has the capability to offer better facilities and support at a lower price.

- I've once seen a bill for use of a VAX780 for one year with 10 people.
  It was about $500,000.- (1987). You could buy a faster Sun-3 AND hire
  an operator for less ($1,500,000.- in three years!)

>Just consider some of the relevant issues:

>	Resource duplication: 

In the case of VERY expensive systems you may have a point. Let us look at it:

- laser printers: low-volume high-quality laser printers cost about $30,000.-
  Paying someone to manage a central high-volume printer (more expensive,
  less reliable) costs a lot more, and you need to walk over there to get
  your output. A local printer is more convenient.

- Phototypesetters: either your organisation typesets a LOT, or you go to
  a service bureau to have your text typeset. You think you can beat their
  prices?

- Supercomputers: having your own mini-super (1/10th Cray) gives a
  better turnaround time than sharing a Cray with > 10 people. And it's
  more predictable.

>	Maximum single-point usage: 

- the problem with central 'mainframes' is that you can't predict the response
  time (turnaround time). If I know something is going to take 2 hours I can
  go off and do something useful. If it might take 1 to 5 hours I can't plan my
  day, or I must assume it takes 5.
  (User interface tip: give the user of your software an indication of the
  time a complex operation is going to take. Especially if he can get a cup
  of coffee in that time)
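
  A minimal sketch of that tip in plain C (the do_one_item() stub and the
  item count are made up for illustration, not taken from any real package):
  keep a running average of the time per item and print the estimated
  remainder every few items.

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    /* Stand-in for the real work on one item. */
    static void do_one_item(int i)
    {
        (void) i;
        sleep(1);               /* pretend each item takes about a second */
    }

    /* Process 'total' items, printing an estimated time remaining
       every few items, based on the average rate so far. */
    static void process_all(int total)
    {
        time_t start = time(NULL);
        int i;

        for (i = 1; i <= total; i++) {
            do_one_item(i);
            if (i % 5 == 0 || i == total) {
                double elapsed   = difftime(time(NULL), start);
                double per_item  = elapsed / i;
                double remaining = per_item * (total - i);
                fprintf(stderr, "%d/%d done, roughly %.0f seconds left\n",
                        i, total, remaining);
            }
        }
    }

    int main(void)
    {
        process_all(20);
        return 0;
    }

  The estimate is only as good as the assumption that the remaining items
  cost about the same as the ones already done, but even a rough number
  tells the user whether to wait or go get the coffee.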

>	Expertise: 

- I've spent enough time explaining to evening-shift operators what to do to
  know that only one or two people in the Computer Center really know what
  they are talking about, and they never do nightshifts. If I've offended
  someone, sorry, but that is usually the case.

>	Emergencies:

- Similar. What do YOU do when a SunOS kernel keeps 'panic'ing, and you don't
  have the sources. (Or VM/370 does something similar, and your user-program is
  at fault :-) ). 

>	Frequently the problem doesn't get fixed quickly. 
- If they're simple enough to fix I'll do it myself. If not, I call the 
  manufacturer.

>	Backups: They're a pain to do. Why not have a centralized authority
>		 do backups for everybody automatically, rather than have
>	everybody worry about their own backups?

- With 2Gbyte DAT tapes backing up is just starting the backup and going home.
  They're not even expensive, and an extra backup fits in your pocket
  so you can store it at home. (Assuming your data is not very sensitive.)

>	Complexity: Who's going to keep track of a big distributed network
>		    mishmash with no central authority? Who's going to answer
>	the question "How do I get there from here?" if getting there from
>	here involves passing through any number of un-cooperative little
>	domains. Who's going to track down the machine which throws bogus 
>	packets onto the network, fouling up all sorts of other machines? 
>	In a centralized environment, things are generally less complex, 
>	and those in charge have a much better understanding of the whole
>	shebang.

- I agree that a network backbone should be a 'central' service, just like
  the phone company. Someone must play FCC/ZZF/PTT and only let decent
  equipment on the net. If you mess up the net you get disconnected.

>	Downtime: Centralized computing authorities tend to do their
>		  best to keep their machines running all the time. And 
>	they generally do a good job at it, too. If a central machine goes
>	down, lots of good, qualified people jump into the fray to get the
>	machine up again. This doesn't usually happen in a distributed
>	environment. 

- A central machine going down stops all work in all departments. If my
  workstation quits I alone have a problem.

>	Maintenance: More things go wrong with many little machines than
>		     with few big ones, because there are so many more machines 
>	around to fail. Repair bills? Repair time? For example, I'd be 
>	surprised if the repair/maintenance cost for a set of 100 little SCSI 
>	drives on 100 different workstations is less than for half-a-dozen 
>	big SMD drives on one or two big machines, per year.

- It is. You throw them away and take a new one. This is the same repair
  that you'll get when you take a maintenance contract. If you have
  100 workstations, buy 3 extra and use them as replacements for broken ones.
  Paying for each repair is usually cheaper.
  SCSI disks have an MTBF of 5 years these days. You
  usually buy a new one before they fail, because your workstation gets
  obsolete. There is no such thing as maintenance for a small SCSI drive.
  (Although a stuck Quantum 50 can sometimes be 'fixed' by hitting it.
  You trust such a beast? Quantum has fixed the problem now.)

>	Compatibility: If group A gets machine type X and group B gets machine
>		       type Y, and they subsequently decide to work together
>	in some way, who is going to get A's Xs and B's Ys talking together? 

- Know of one Killer Micro that does NOT run a brand of Unix with TCP/IP and
  NFS on it? (even Macintoshes and PCs support these protocols nowadays)

>These are some very good reasons to favour a centralized computing authority.

- Depends on the kind of user you have. If your organisation does 
  data/transaction processing there are very good reasons around.
  If >20% of your clients know as much as you do it's a losing game.
  A central facility can't adapt as easily to the wishes of their
  clients.


Ge' Weijers                                    Internet/UUCP: ge@cs.kun.nl
Faculty of Mathematics and Computer Science,   (uunet.uu.net!cs.kun.nl!ge)
University of Nijmegen, Toernooiveld 1         
6525 ED Nijmegen, the Netherlands              tel. +3180612483 (UTC-2)

jdd@db.toronto.edu (John DiMarco) (11/29/89)

quiroz@cs.rochester.edu (Cesar Quiroz) writes:

>In <1989Nov27.144016.23181@jarvis.csri.toronto.edu>, jdd@db.toronto.edu (John DiMarco) wrote:
>| A centralized authority -- if it is responsive to the needs of its users --
>| has the capability to offer better facilities and support at a lower price.
>Centralized resources are naturally seen as loci of power, not as
>sources of service.  If you can fix the human tendency to build
>empires, you can make centralized resources more palatable to those
>who would otherwise prefer not to be bothered with asking you for
>permissions.  
Power isn't necessarily a bad thing. If those with power (i.e. control over
the computing resources) use this power to serve the users' needs, then 
everybody is happy. If a centralized computing authority does not serve
the users' needs, then it is not fulfilling its intended role. Unless the
members of this authority use their 'power' for the good of their users, THEY
ARE NOT DOING THEIR JOBS. Judging from some of these postings, there seem
to be quite a few centralized computing authorities who are not doing their
jobs.

>| 	Maximum single-point usage: If each group must purchase its own 
>| 				    computing equipment, at no point in time
>| 	can any group utilize more computing resources than that group owns.
>| 	But in a centralized environment, the maximum amount of computing
>| 	resources available to any one group increases to the total computing
>| 	resources available to the centralized authority...

>Maybe true of highways, not true of computers.  It is not at all
>clear that just because you need extra cycles and they are free
>after sunset you will be graciously granted those.  You may have to
>beg to be permitted in the building after dark, etc...

If everyone is being served from the same big pot, what one group doesn't
use is available to everyone else. 

>| 	Imagine if your group could use the idle cycles of the group
>| 	down the hall.  Wouldn't that be nice?

>Sure.  Just cut the middleman and talk to the people down the hall.
>You wouldn't suggest imposing on them, right?  And they may want to
>have a chance at sharing your coffeemaker, or have a quick run on
>your laserprinter.  Why do you need to introduce a third,
>unproductive, party in this nice scenario?

If both you and the party down the hall use the same large computing resources,
what they don't use, you can use. If both you and the party down the hall
have private computing resources, you have little or no access to their
free cycles, and vice versa. 

>Some of your scenarios oppose a benevolent tyranny that controls all
>the expertise against a decadent anarchy where no one has enough
>smarts to tie his own shoes.  There are intermediate steps in
>between, you know.  Like places where the staff responds to the
>users, instead of the other way around.  Also, distributed
>responsibility for your own resources does not preclude a
>cooperative condition, open to sharing when so needed.

Distributed systems seem to lead to anarchy. The more distributed, the more
anarchy. There are only so many people who have enough smarts to tie their
own shoes, and in a distributed setup, where do you put them? If a small
group is lucky enough to have a guru/wizard, that group is ok. Otherwise, 
that group will have serious problems. Under a centralized setup, it's much 
easier to ensure that the computing facility has a few guru/wizards around.

>-- 
>                                      Cesar Augusto Quiroz Gonzalez
>                                      Department of Computer Science
>                                      University of Rochester
>                                      Rochester,  NY 14627

jdd@db.toronto.edu (John DiMarco) (11/29/89)

ge@kunivv1.sci.kun.nl (Ge' Weijers) writes:

>jdd@db.toronto.edu (John DiMarco) writes:

>>A centralized authority -- if it is responsive to the needs of its users --
>>has the capability to offer better facilities and support at a lower price.

>- I've once seen a bill for use of a VAX780 for one year with 10 people.
>  It was about $500,000.- (1987). You could buy a faster Sun-3 AND hire
>  an operator for less ($1,500,000.- in three years!)

Can the Vax and get a machine that's cheaper to run. Centralized computing
doesn't need to be done on Vax 780s, which are extremely expensive to 
maintain these days. 

>>	Resource duplication: 
>In the case of VERY expensive systems you may have a point. Let us look at it:

>- laser printers: low-volume high-quality laser printers cost about $30,000.-
>  Paying someone to manage a central high-volume printer (more expensive,
>  less reliable) costs a lot more, and you need to walk over there to get
>  your output. A local printer is more convenient.

Half a dozen laserwriters cost lots more than $30K. And they're slower, too.
True, it's handy not to have to walk much to get your output.

>- Supercomputers: having your own mini-super (1/10th Cray) gives a
>  better turnaround time than sharing a Cray with > 10 people. And it's
>  more predictable.

Maybe. But there are large jobs that don't run on mini-supers. Note that
sharing a resource (like a supercomputer) doesn't necessarily mean sharing
that resource with lots of people at the same time. If group A and group B
both need to occasionally run jobs which max out a Cray, there's no reason why 
they can't take turns. 

>>	Maximum single-point usage: 

>- the problem with central 'mainframes' is that you can't predict the response
>  time (turnaround time). If I know something is going to take 2 hours I can
>  go off and do something useful. If it might take 1 to 5 hours I can't plan my
>  day, or I must assume it takes 5.

Good point. But if your choices are 10 hours on a private machine or 1-5 hours
on a mainframe, which are you going to pick? It clearly depends on the job,
the machines available, etc. Response time prediction isn't necessarily a
good reason to move to a distributed system.

>>	Expertise: 

>- I've spent enough time explaining to evening-shift operators what to do to
>  know that only one or two people in the Computer Center really know what
>  they are talking about, and they never do nightshifts. If I've offended
>  someone, sorry, but that is usually the case.

I don't see how this would be improved under a distributed setup. At least
in a centralized system, you have a computer center to call. But if you 
belong to a small group with its own small computing facilities and no
resident guru, who are you going to call, either at night or during the day? 

>>	Backups: They're a pain to do. Why not have a centralized authority
>>		 do backups for everybody automatically, rather than have
>>	everybody worry about their own backups?

>- With 2Gbyte DAT tapes backing up is just starting the backup and going home.
>  They're not even expensive, and an extra backup fits in your pocket
>  so you can store it at home. (Assuming your data is not very sensitive.)

Then every little group needs to have its own DAT/Exabyte/CD RAM/whatever 
backup unit. Why not buy only a couple of (large) backup units for a 
centralized facility and spend the rest of the money on something else?

>>	Downtime: Centralized computing authorities tend to do their
>>		  best to keep their machines running all the time. And 
>>	they generally do a good job at it, too. If a central machine goes
>>	down, lots of good, qualified people jump into the fray to get the
>>	machine up again. This doesn't usually happen in a distributed
>>	environment. 

>- A central machine going down stops all work in all departments. If my
>  workstation quits I alone have a problem.

It doesn't matter whether your personal machine goes down or if the
central machine does. YOU still can't get any work done. But central machines
will be down less than personal machines, because so many good people are trying
to keep them up. So total machine downtime for YOU will be less under a 
centralized system. The only difference between centralized and distributed
system downtime is that under a centralized system, downtime hits
everybody at the same time, while under a distributed system downtime hits
people at different times.

>>	Maintenance: More things go wrong with many little machines than
>>		     with few big ones, because there are so many more machines 
>>	around to fail. Repair bills? Repair time? For example, I'd be 
>>	surprised if the repair/maintenance cost for a set of 100 little SCSI 
>>	drives on 100 different workstations is less than for half-a-dozen 
>>	big SMD drives on one or two big machines, per year.

>- It is. You throw them away and take a new one.

Ok, if that's the case, throw 100 little SCSIs on a big machine. Or even 
make the 'machine' operated by the centralized authority a network of
interconnected workstations, all with little SCSIs. There's nothing
stopping a central site from doing that. The central site is free to take
the cheapest route. The distributed system is forced to go the SCSI route.
If SCSI is cheaper, then the centralized system is no worse than the
distributed system. If SMD is cheaper, then the centralized system is better
than the distributed system. Either way, you can't lose with the centralized
system.

>>	Compatibility: If group A gets machine type X and group B gets machine
>>		       type Y, and they subsequently decide to work together
>>	in some way, who is going to get A's Xs and B's Ys talking together? 

>- Know of one Killer Micro that does NOT run a brand of Unix with TCP/IP and
>  NFS on it? (even Macintoshes and PCs support these protocols nowadays)

That's often not good enough. How about source or binary compatibility? 
Ever try to port a C program with all sorts of null dereferences from a Vax to
a 68k machine? It's tough for Joe in Graphics to work with Mary in AI on a
joint project if she does everything on a Sun, but he does everything on a 
Personal IRIS.
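
For those who haven't been hit by it: on a VAX running 4.xBSD a null
pointer could usually be read without trapping (location 0 read as zero),
so sloppy code like the made-up fragment below "works" there and dies with
a bus or segmentation fault on a machine that leaves page 0 unmapped.
(This is only an illustration, not code from any particular program.)

    #include <stdio.h>
    #include <string.h>

    /* Made-up lookup that returns NULL when the name isn't found. */
    static char *lookup(const char *name)
    {
        (void) name;
        return NULL;            /* pretend the name wasn't found */
    }

    int main(void)
    {
        char *value = lookup("no-such-key");

        /* Common sloppiness in old VAX code: use the result without a
           NULL check.  On the VAX the read through the null pointer
           typically returned 0, so strlen() quietly answered 0; on most
           68k and RISC boxes this line takes a fault instead. */
        printf("length = %d\n", (int) strlen(value));

        /* The portable version, of course: */
        printf("length = %d\n", value ? (int) strlen(value) : 0);
        return 0;
    }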

>  If >20% of your clients know as much as you do it's a losing game.
Even if many users are knowledgeable, a centralized facility still can make
sense.

>  A central facility can't adapt as easily to the wishes of their clients.
Sure they can. If the users want some resource, then buy it for them. It's
as easy as that. A responsive central facility can be amazingly adaptive.

>Ge' Weijers                                    Internet/UUCP: ge@cs.kun.nl
>Faculty of Mathematics and Computer Science,   (uunet.uu.net!cs.kun.nl!ge)
>University of Nijmegen, Toernooiveld 1         
>6525 ED Nijmegen, the Netherlands              tel. +3180612483 (UTC-2)

peter@ficc.uu.net (Peter da Silva) (11/29/89)

Maybe you get your petty tyrants at Universities, but out here in the real
world the tyrants are the users. All they have to do to get the computer
center into hot water is say the three magic words "I'M NOT WORKING".
-- 
`-_-' Peter da Silva <peter@ficc.uu.net> <peter@sugar.lonestar.org>.
 'U`  --------------  +1 713 274 5180.
"The basic notion underlying USENET is the flame."
	-- Chuq Von Rospach, chuq@Apple.COM 

quiroz@cs.rochester.edu (Cesar Quiroz) (11/29/89)

Hmmm.  I have been thinking about the ongoing disagreement with John
DiMarco.  Two things.  First, one problem seems to be that he keeps
proposing technological reasons for centralization (worse, for
centralization of authority, not just of service) whereas the other
side keeps telling him in subtle ways that the problem isn't
technological, but political/sociological.  So, let's try again:

    THE PROBLEM IS NOT TECHNOLOGICAL.

The second difficulty is that the opposition central v. distributed
is a continuum.  Successful solutions depend on centralizing what
needs to be centralized (do you want to contract for your own power
lines? or LANs?), but nothing else.  (You wouldn't want to walk to
the warehouse every morning to check out a keyboard, to be returned
at 5:00, now would you?)

In this posting I will deal only with the first source of
difficulty, the nature of the problem (technical or political?).  I
claim that statements of the form 

    ``If <x does the right thing> Then <y wins>'' 

can't persuade people who know that x rarely ever does the right
thing.  For instance:

(jdd)
| Power isn't necessarily a bad thing. If those with power (i.e. control over
| the computing resources) use this power to serve the users' needs, then 
| everybody is happy. If a centralized computing authority does not serve
| the users' needs, then it is not fulfilling its intended role. ...

First, what is seen as the intended role of the centralized
_authority_ may surprise you.  Often it is to end the fiscal year
under budget, or with billings that exceed the cost of the center.
That they should, in an ideal world, do otherwise, doesn't help the
users whose work moves like cold molasses.  The ideal behavior of
such centrals is not a question here; the actual or perceived
performance of them is.  For instance, on the issue of maximum
single point use:

(jdd)
| If everyone is being served from the same big pot, what one group doesn't
| use is available to everyone else. 

If ...

In my experience (already > 6 years stale) the first thing a central
authority does is impose quotas.  [OK, show of hands, how bad is
this, out there in the real world?]  Once you are over quota, it
doesn't matter how many free resources there still are.  Remember,
once you centralize power there will be someone with nothing better
to do than to invent rules and invest disproportionate amounts of
effort to enforce them.

-- 
                                      Cesar Augusto Quiroz Gonzalez
                                      Department of Computer Science
                                      University of Rochester
                                      Rochester,  NY 14627

jonah@db.toronto.edu (Jeffrey Lee) (11/29/89)

quiroz@cs.rochester.edu (Cesar Quiroz) writes:

>    THE PROBLEM IS NOT TECHNOLOGICAL.

Part of it is, part of it isn't.

All we can say is that the centralized horsepower seems to work better
for us.  But then we have an *unusual* setup.  Let me try to describe
it for you.

Our department is rather large.  Our computing resources are broken
into mainly area/project related *pools* of equipment.  Most of the
pools are subsidiary to one of two camps.  (Nothing like a little
competition.)  Each of the main camps has a large computing facility
in addition to its support of the splinter groups.

One of the camps has a predominantly decentralized (fileserver plus
workstation) model.  General users are only allowed to login to
workstations leaving the file servers to manage NFS requests plus one
other task (mail, nameservice, news, YP, etc.)  Performance used to be
miserable in the middle of the day with constant
NFS-server-not-responding messages and *slow* response for swapping.
This has improved somewhat recently due to changes in the setup of our
news system.  [Thanks Geoff and Henry.]

The other camp has workstations and X-terminals.  It allows users to
login to one of two combination file/compute server machines.  The
workstations are mostly used to run an X server with remote xterms for
login sessions.  All CPU time is charged at the SAME hourly rate to
encourage users to run their jobs on the fastest machines.  The
reason:  swapping over NFS puts a higher load on the servers than
running the same jobs on the central machines.  Also, more gain can be
had from adding 32MB to each server than adding 4MB to each
workstation.

[We also have access to a more traditional centralized computing
service, which is largely ignored by most members of the department.]

Both camps are relatively open on quotas: they are imposed from below.
You (or your sponsor) are charged for your CPU, modem/workstation
connect time and disk usage.  When your disk partition fills up, you
can't create any more files.  [And everyone in your partition will gang
up on you if you are at the top of the disk usage list.]  CPU and
connect charges are regulated by your sponsor (who complains when your
bill gets too high).  [Terminal connections are not charged for connect
time and minimal-priority jobs get cut rate CPU.]  The rates are scaled
to cover costs and help expand the facilities--not to make money.  Both
camps provide centralized backup, system administration, and
expertise.

Both camps are expanding their facilities.  The centralized computing
camp is planning to add more memory to one of the central machines
which is starting to get bogged down with swapping.  They are also
planning to buy more X-terminals to improve accessibility.  The
distributed camp is waffling on adding memory to each of their many
workstations.  A new four-processor CENTRALIZED COMPUTE server is also
now coming on stream in EACH of the two camps.

The net result: our experience shows that (our brand of) centralized
computing is a win over diskless workstations.

j.

Disclaimer: mileage may vary...

quiroz@cs.rochester.edu (Cesar Quiroz) (11/29/89)

In <1989Nov28.204639.11237@jarvis.csri.toronto.edu>, jonah@db.toronto.edu (Jeffrey Lee) wrote:
| quiroz@cs.rochester.edu (Cesar Quiroz) writes:
| 
| >    THE PROBLEM IS NOT TECHNOLOGICAL.
| 
| Part of it is, part of it isn't.

Quite true.  I should have said the problem is not JUST
technological, but that it has a substantial political component.

Thanks for the description of your environment, I think it is indeed
quite unusual to have side by side the two approaches.  But
certainly both styles you describe have a blend of centralization
and distribution: what changes is what gets distributed.  I must
confess that the idea of centralization that I had in mind coincides
with the way the other service (the one your people mostly ignore)
most likely works.

Here we have (mainly) a bunch of SPARCstations with Sun3 and Sun4
servers.  I wouldn't call it an extremely distributed environment: we
are still vulnerable to double-point failures.  But we have the
convenience that all our personal/project files are accessible
(essentially) from any of the machines.  And everybody has an
account good for every machine.  No quotas, except for the social
pressure not to waste disks that you share (and people will complain
if you begin taking all the available cycles).  No funny-money
billing from a computer center--you use the resources you need, you
never run out of cpu or printing capacity just because you ran out
of funny money.  

We have a very competent (if slightly overworked and occasionally
abused) staff that keeps the place together and working; and the
most experienced users have traditionally taken it upon themselves to
help the newcomers or more casual users, so there is no visible
shortage of wizardry when that is needed.  I would say that it has
less centralized authority than the ones you described, including
the one you called decentralized (and, I must say, I find ours a
most productive environment precisely because of the absence of
interminable chains of permission and limitation).

Also, I think we are happier now with the SPARCs and their local
disks than with the never-lamented 3/50s (I always considered it insane
to page over the network, and doubly so to have machines that HAD to
page all the time).  The balance of work is much better these days:
the workstation I and my 2 officemates share is powerful enough to
do a lot of work by itself, without bothering the fileservers or the
network.  These tweaks are technology, the actual driving spirit is
political:  the will to share the resources without too much
bureaucracy.

Anyway, that was a long digression.  My point is this:  your
experience shows that, of the two degrees of centralization that you
have tried, the one you call `centralized' worked for you better
than the other.  From there to conclude with DiMarco et al. that
this experience is so generally valid that workstations are just a
fad, or that centralization wins always, there is a tremendous leap
of faith.

( Aren't we converging?  This could be a most unusual phenomenon :-)
-- 
                                      Cesar Augusto Quiroz Gonzalez
                                      Department of Computer Science
                                      University of Rochester
                                      Rochester,  NY 14627

quiroz@cs.rochester.edu (Cesar Quiroz) (11/29/89)

In <1989Nov29.031045.14612@cs.rochester.edu>, I went and said:
| My point is this:  your experience shows that, of the two degrees
| of centralization that you have tried, the one you call
| `centralized' worked for you better than the other.  From there to
| conclude with DiMarco et al. that this experience is so generally
| valid that workstations are just a fad, or that centralization
| wins always, there is a tremendous leap of faith.

Upon rereading the thread more carefully (it got contaminated in my
brain with the 'fad computing' one) I realized that DiMarco never
claimed quite any of the above, so I apologize for putting words in
his fingertips.  

To summarize:  If I understand him correctly, he believes in the
general superiority of centralization, given some conditions of
enlightened authority.  I believe the necessary conditions to be
quite infrequent, so I don't see any a priori advantage of
centralization, and my experience tends to support the claim that
centralizing *authority* (as opposed to service) tends to harm as
often as it helps.

-- 
                                      Cesar Augusto Quiroz Gonzalez
                                      Department of Computer Science
                                      University of Rochester
                                      Rochester,  NY 14627

schow@bcarh61.bnr.ca (Stanley T.H. Chow) (11/29/89)

jdd@db.toronto.edu (John DiMarco) wrote:
> A centralized authority -- if it is responsive to the needs of its users --
> has the capability to offer better facilities and support at a lower price.

quiroz@cs.rochester.edu (Cesar Quiroz) writes:
>Centralized resources are naturally seen as loci of power, not as
>sources of service.  If you can fix the human tendency to build
>empires, you can make centralized resources more palatable to those
>who would otherwise prefer not to be bothered with asking you for
>permissions.  

This gets to the crux of the matter. It is a preference as opposed to a 
technical reason. In that case, the discussions about Killer Micro vs central
resources should move to a soc group or some economic management group. In 
any case, I submit that there are very responsive "Computing Centers". You
may or may not have had the pleasure of working with one, but that does not
address the question of whether they are actually better.

jdd@db.toronto.edu (John DiMarco) wrote:
>Power isn't necessarily a bad thing. If those with power (i.e. control over
>the computing resources) use this power to serve the users' needs, then 
>everybody is happy. If a centralized computing authority does not serve
>the users' needs, then it is not fulfilling its intended role. Unless the
>members of this authority use their 'power' for the good of their users, THEY
>ARE NOT DOING THEIR JOBS. Judging from some of these postings, there seem
>to be quite a few centralized computing authorities who are not doing their
>jobs.

There is also the problem of many different users. Even in a small 
organization, there are many users of different sophistication working on
many different problems. Each will perceive the exact same service as good,
bad, or indifferent.

I suspect that people who want Killer Micros are the same people who want
Unix on their "own" box. They want to control the whole system and have the
knowledge (and inclination) to do so.

There are many other users who don't want to know. They don't even want to
know how many MIPS they are burning. They just want to know that the correct
"solution" comes up if they hit the right keys. I suspect this group do
not want Unix and couldn't care less about how the MIPS is packaged. 


jdd@db.toronto.edu (John DiMarco) wrote:
> 	Maximum single-point usage: If each group must purchase its own 
> 				    computing equipment, at no point in time
> 	can any group utilize more computing resources than that group owns.
> 	But in a centralized environment, the maximum amount of computing
> 	resources available to any one group increases to the total computing
> 	resources available to the centralized authority...
>
> 	Imagine if your group could use the idle cycles of the group
> 	down the hall.  Wouldn't that be nice?

quiroz@cs.rochester.edu (Cesar Quiroz) writes:
>Maybe true of highways, not true of computers.  It is not at all
>clear that just because you need extra cycles and they are free
>after sunset you will be graciously granted those.  You may have to
>beg to be permitted in the building after dark, etc...
>
>Sure.  Just cut the middleman and talk to the people down the hall.
>You wouldn't suggest imposing on them, right?  And they may want to
>have a chance at sharing your coffeemaker, or have a quick run on
>your laserprinter.  Why do you need to introduce a third,
>unproductive, party in this nice scenario?

Again, there are many different situations. 

If you know the people across the hall, and know their schedule and computing
requirements, you can certainly do some horse-trading.

What if there are 5,000 people in the company working on a couple of
hundred projects that you have never heard of? How do you handle the
bi-lateral trades? I would much rather have a group do all the trading 
for me. Sure, I lose some control, but I would not have spent time to
control it anyway!

Also, how do you go about justifying a new machine if you only need half
a machine and the guy across the hall also needs only half a machine? Who
gets to fight the budget battles to buy it? Who will control it? If you 
coordinate purchase and share control, you just end up with a bunch of
small "centers".

Stanley Chow        BitNet:  schow@BNR.CA
BNR		    UUCP:    ..!psuvax1!BNR.CA.bitnet!schow
(613) 763-2831		     ..!utgpu!bnr-vpa!bnr-rsc!schow%bcarh61
Me? Represent other people? Don't make them laugh so hard.

bartho@obs.unige.ch (PAUL BARTHOLDI) (11/29/89)

In article <1989Nov27.144016.23181@jarvis.csri.toronto.edu>, jdd@db.toronto.edu (John DiMarco) writes:

This discussion is quite interesting on a very practical point: What shall I
(or you) get next time I need to update my computer facilities?  I find
some of the arguments quite to the point, but disagree with others.  I will
assume a non-tyrannical central authority that is open to any solution that is
both best for users, easiest to support and cheapest ...  I will also assume
that all facilities, centralized or not, are interconnected through some 
network.
 
> 	Resource duplication: If every group must purchase its own resources,
> 			      resources which are used only occasionally will
> 	either be unavailable because no small group can afford the requisite
> 	cost (eg. Phototypesetters, supercomputers, Put_your_favourite_expen-
> 	sive_doodad_here), or be duplicated unnecessarily (eg. laser printers,
> 	scanners, ...

Completely true for supercomputers and phototypesetters (see also maintenance
below), but not for laser printers.  I find the small laser printers, for a
given throughput, cheaper, of better quality, of more modern technology, even
easier to use (downloaded fonts, ps etc.) than the larger ones.  It is also
very nice to have a printer near your office, and duplication means that
there is always another printer running if the first is down.

> 	Maximum single-point usage: If each group must purchase its own 
> 				    computing equipment, ...
>       ... If you have a distributed computing environment, imagine
> 	putting all the memory and CPUs of your workstations into one massive
> 	multiprocessing machine, ...

Two points:  - I just compared the prices of large disks (>600MB) and memory
boards.  Why are they up to 5 times cheaper, with faster access, for a PC
or WS than for a MicroVAX, for example?  5 independent units also mean much
better throughput.  - A massive multiprocessing machine means a lot of
overhead, which you have to pay for!
Again, there are a lot of situations where you need a lot of central memory,
a lot of cpu resources, or disk resources etc. which can be provided only
from a central system, but do not underestimate its cost.

> 	Expertise: Distributed sites tend to have too many people playing
> 		   at system administration in their own little fiefdoms, 
> 	few of whom know what they are doing. (But guess who goes screaming
> 	to whom when something goes wrong...) In a centralized environment,
> 	it is much easier to ensure that the people who are in charge of
> 	the computers are competent and capable.

Expertise is a very costly and scarce resource.  At the same time I can't
see anything but a tree-like distribution of expertise, with a central
root and local leaves.  This is true even if all other facilities are
centralized.  Remember that the leaves must be fed from the root!
 	
> 	Maintenance: More things go wrong with many little machines than
> 		     with few big ones, because there are so many more machines 
> 	around to fail. Repair bills? Repair time? For example, I'd be 
> 	surprised if the repair/maintenance cost for a set of 100 little SCSI 
> 	drives on 100 different workstations is less than for half-a-dozen 
> 	big SMD drives on one or two big machines, per year.

I can almost get a new SCSI drive every year for the price of the maintenance
of the larger ones (assuming the same total capacity).  I would guess that the
large mass production of 'small' (are 600MB small?) disks makes them more
reliable than those in the larger machines.  Many small SCSI drives (on
different machines) also tend to have a larger total throughput, etc.
The software maintenance seems to me more critical.  Keeping the OS up to
date in a compatible and coherent way for 100 WS is a lot more difficult and
time consuming than for a few larger machines.  How about multiple copies of
DBs, etc.?

> These are some very good reasons to favour a centralized computing authority.

From my experience, both centralized and local facilities (including 
maintenance) are not only a fact of life but also necessary.  The real problem
is to build up the hardware, administrative and human connections in such a way
as to minimize problems and cost while improving the user resources.

I essentially agree with the end of the discussion, and will not comment any
further on it.

              Regards,      Paul Bartholdi, Geneva Observatory, Switzerland

ge@kunivv1.sci.kun.nl (Ge' Weijers) (11/29/89)

jdd@db.toronto.edu (John DiMarco) writes:

>ge@kunivv1.sci.kun.nl (Ge' Weijers) writes:

>>jdd@db.toronto.edu (John DiMarco) writes:

[Long discussion deleted]

Our centralised facility did convince the board of our university that
standardisation to DECNET protocols on our Ethernet exclusively was the
right decision.
That DECNET is slowly on its way out (it is not even standard in Ultrix
I've heard) did not deter anyone.
The decision was reversed in the end. It meant: no Suns, Apollos, VAXes running
BSD, Macs interconnected and so on.

I just strongly object to centralised bureaucracies that think they know better
than the user, however knowledgeable the user is. I do not object to a local
group serving a relatively small user community with a minimum of paperwork.
These groups should be as small as possible though.
If I can pay for the hardware and can run it myself why should anyone
have to interfere with this? If not I should go out looking for cooperation
and pay for it.

Ge' Weijers
Ge' Weijers                                    Internet/UUCP: ge@cs.kun.nl
Faculty of Mathematics and Computer Science,   (uunet.uu.net!cs.kun.nl!ge)
University of Nijmegen, Toernooiveld 1         
6525 ED Nijmegen, the Netherlands              tel. +3180612483 (UTC-2)

david@torsqnt.UUCP (David Haynes) (11/29/89)

I guess my take on this is that both centralized and decentralized computing
have their places. For example, let's look at X terminals vs. X workstations.

I don't think it will come as much of a surprise if I state that X binaries
are relatively large. For a system without shared libraries, these programs
can be anywhere from 500 KB to 1 MB or more. Even with shared libraries these
puppies aren't tiny. An average user may have 3 to 10 windows/icons active
on their terminal at any one time. This argues that the computing resource
driving the X terminal needs to be large and powerful. This tends to say
that a centralized system will work more effectively.

However, a little while ago a program called 'dclock' was posted to the
X sources newsgroup and nearly everyone compiled and ran this program.
It had a neat digital display that slowly scrolled the numbers whenever
the hours or minutes changed. Now, whenever the hour changed the centralized
system supporting the users just *crawled* along. It turns out that the
network traffic generated by the smooth scrolling of the numbers was
horrific and that the network was getting saturated. This tends to argue 
that some applications are best left on a dedicated workstation.
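
The traffic is easy to picture: every step of a "smooth" scroll is a fresh
batch of clear-and-draw requests flushed down the wire to the display
server.  The fragment below is a made-up Xlib loop (not dclock's actual
code) that shows the pattern; thirty frames a second, times a few dozen
users sharing one host and one Ethernet, adds up fast.

    #include <stdio.h>
    #include <unistd.h>
    #include <X11/Xlib.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);  /* may be an X terminal far away */
        int      scr, frame;
        Window   win;
        GC       gc;

        if (dpy == NULL) {
            fprintf(stderr, "cannot open display\n");
            return 1;
        }
        scr = DefaultScreen(dpy);
        win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, 120, 60, 1,
                                  BlackPixel(dpy, scr), WhitePixel(dpy, scr));
        gc  = XCreateGC(dpy, win, 0, NULL);
        XMapWindow(dpy, win);
        XFlush(dpy);
        sleep(1);                           /* crude: give the map time */

        /* "Scroll" the new time in, one pixel per frame. */
        for (frame = 0; frame < 30; frame++) {
            XClearWindow(dpy, win);
            XDrawString(dpy, win, gc, 10, 50 - frame, "12:00", 5);
            XFlush(dpy);                    /* each frame goes over the net */
            usleep(33000);                  /* roughly 30 frames per second */
        }

        XFreeGC(dpy, gc);
        XCloseDisplay(dpy);
        return 0;
    }

Running it with DISPLAY pointed at a remote X terminal makes the point:
the animation itself is trivial, but the requests all cross the network.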

Naturally, intelligent workstations with a large centralized server would
be ideal, but other considerations (cost, data security, backup, software
maintenance, ...) also come into play. This leads nicely into the observations
that Rayan was making in his articles.

-david-
-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
David Haynes			Sequent Computer Systems (Canada) Ltd.
"...and this is true, which is unusual for marketing." -- RG May 1989
...!{utgpu, yunexus, utai}!torsqnt!david -or- david@torsqnt.UUCP

mash@mips.COM (John Mashey) (11/30/89)

In article <549@torsqnt.UUCP> david@torsqnt.UUCP (David Haynes) writes:

>However, a little while ago a program called 'dclock' was posted to the
>X sources newsgroup and nearly everyone compiled and ran this program.
>It had a neat digital display that slowly scrolled the numbers whenever
>the hours or minutes changed. Now, whenever the hour changed the centralized
>system supporting the users just *crawled* along. It turns out that the
>network traffic generated by the smooth scrolling of the numbers was
>horrific and that the network was getting saturated. This tends to argue 
>that some applications are best left on a dedicated workstation.
>
>Naturally, intelligent workstations with a large centralized server would
>be ideal, but other considerations (cost, data security, backup, software
>maintenance, ...) also come into play. This leads nicely into the observations
>that Rayan was making in his articles.

Well, actually, I think there is a much more cost-effective technology for
this: it's far more distributed than any other choice; it has zero impact
on the network or central site; it is reliable; some releases include small
calculator functions and useful alarms; fonts and such are trivially chosen
by the user; it has better portability than an X-terminal.




It's called a wristwatch.....

-----
More seriously, this does illustrate that sometimes, when one trades
a centralized environment [minicomputer plus ASCII terminals] for a more
distributed one [workstations or PCs], you're kidding yourself if you
think the network isn't a central resource, and that it is HARDER to
manage than a simpler big machine.  [This is not an argument for
central machines, just an observation that one must have a clear view of
reality.]  If work can be done on an isolated machine, that's fine,
but people in organizations want to share data, and now at least part
of the system is a shared resource that needs sensible management, not
anarchy.  There's nothing that brings this home like having an Ethernet
board go crazy and start trashing a net, and have to shut everything
down, one at a time, to find it, belying the "if my system goes down,
it doesn't bother anybody" model.

How about getting this off the sociology track [because most of it came
down to "I know good compcenters" vs "I don't], and consider some of
the interesting system architectural issues, like:

1) How much load do X-window terminals put on a host?
2) How much load do diskless nodes put on a host?
3) If the people are doing the same kind of work, does the host do more
or less disk I/O in cases 1) or 2)?  How does the Ethernet traffic differ?
4) Experiences with X-window terminals: how many can be served, doing what
kind of work, from what kind of host?
5) How do you characterize the kinds of work that people do in any of
these environments?

I'd be interested in data on any of these, as I think would many.
(MIPS is neutral, of course: since we actively sell multi-user systems
with ASCII terminals, servers, workstations, and X-window terminals,
we probably have fewer axes to grind than many people;
people here use a mixture of X-window terminals, ASCII terminals,
MIPS workstations, Macs, DECstations, Suns, Apollos, and PCs as makes
sense to them, NFSed with 400+ VUPS worth of servers.  Distributed systems are
wonderful: thank goodness the people who centrally administer this are great.)
-- 
-john mashey	DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP: 	{ames,decwrl,prls,pyramid}!mips!mash  OR  mash@mips.com
DDD:  	408-991-0253 or 408-720-1700, x253
USPS: 	MIPS Computer Systems, 930 E. Arques, Sunnyvale, CA 94086

peter@ficc.uu.net (Peter da Silva) (11/30/89)

In article <536@kunivv1.sci.kun.nl> ge@kunivv1.sci.kun.nl (Ge' Weijers) writes:
> jdd@db.toronto.edu (John DiMarco) writes:
> >A centralized authority -- if it is responsive to the needs of its users --
Note--------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> >has the capability to offer better facilities and support at a lower price.

> - I've once seen a bill for use of a VAX780 for one year with 10 people.
>   It was about $500,000.- (1987). You could buy a faster Sun-3 AND hire
>   an operator for less ($1,500,000.- in three years!)

If you can do it, they can do it. And they can fix or replace your
Sun a lot faster than you can when it breaks.

You just described a central authority *not* responsive to the needs of its
users. John's statement was "A and B implies C", whereas your response was
"A and ^B implies ^C". This may be true, but it's neither interesting nor
useful.

ALSO, the rest of your arguments make another assumption: that the central
authority has a small number of large machines. They can also have (as we
do at Ferranti) a large number of small machines. All the advantages of
distributed authority *plus* an economy of scale. And the few people who
need a big machine can get one.
-- 
`-_-' Peter da Silva <peter@ficc.uu.net> <peter@sugar.lonestar.org>.
 'U`  --------------  +1 713 274 5180.
"The basic notion underlying USENET is the flame."
	-- Chuq Von Rospach, chuq@Apple.COM 

casey@gauss.llnl.gov (Casey Leedom) (11/30/89)

  I'm sorry I just can't stay out of this.

| From: jdd@db.toronto.edu (John DiMarco)
| 
| Power isn't necessarily a bad thing. If those with power (i.e. control
| over the computing resources) use this power to serve the users' needs,
| then everybody is happy. If a centralized computing authority does not
| serve the users' needs, then it is not fulfilling its intended role.
| Unless the members of this authority use their 'power' for the good of
| their users, THEY ARE NOT DOING THEIR JOBS. Judging from some of these
| postings, there seems to be quite a few centralized computing authorities
| who are not doing their jobs.

  Listen to yourself.  Think carefully about the words.  Perform some
semantic analysis.  Figure it out.

  In answer to your implied question:  Yes.  There are a lot of badly run
Central Computing Services.  In fact, I have yet to run into one that is
run Well.  ("Well" == I can get a job done without having to spend a large
percentage of my time fighting a petty bureaucrat.)

  Your trite answer of ``Fix the bureaucracy'' is totally worthless and
naive.  It just doesn't happen.  Bureaucracies must be worked with and
around, but you don't try to change them in order to do your work.
You'll never get anything done if you wait till the bureaucracy is fixed.

  I really have very few arguments with your points about the advantages
of Centrally Provided Services.  It would have been a hell of a lot more
powerful argument if you'd painted in the other side of the picture.
Nothing is all roses.

  What you, and many others, seem to be missing is that Centralized
versus Anarchy is not a black and white issue.  There are a lot of grey
levels in between the [non-existent] extremes.  And there are levels of
bureaucracy at all levels.  Even the lowly desktop PC suffers the
bureaucracy of its single owner.

  Why should we search for a single uniform solution to a problem (user
needs) that is multifaceted and non-uniform?

  There are some users who just want to be able to send mail and do word
processing.  For those users a nice terminal and some good software on a
centrally provided service is fine (note that I'm not saying centrally
located).

  Other users want facilities which would be difficult to integrate or
justify for a centrally maintained resource, but don't want to or can't
administer the facilities themselves.  For those users buying the
hardware they need and hiring the time from a central pool of operators
is probably the answer.

  Finally, other users have very special needs (or just a bad case of
wanting their own ball, and the means to afford it).  These users will buy
their own hardware and support personnel.  The only centrally provided
services these users will need are access to the network and to the local
software archive.

  And this basically points out one of the biggest fallacies of the TOTAL
CENTRALIZED SERVICE argument and the implementations thereof: your hammer
makes you see every problem as a nail, and you're effectively asking why
the hell the stupid users can't see their problems as nails for your
hammer too.

  Any good computing environment will be a mixture of various kinds and
levels of services.  Any attempt to smash every user's needs into a
single mold will just create petty bureaucracies and unhappy users.

Casey

dmocsny@uceng.UC.EDU (daniel mocsny) (11/30/89)

A psychologist friend once told me that the main factor in a person's
perception of stress level is the degree to which that person feels
out of control of a situation. Epidemiological evidence also shows
that people who are successful and wield a lot of power over the
people around them are on average happier, longer-lived, and less
susceptible to stress-related disease.

Having to appeal to a bureaucrat to obtain resources you need to
accomplish work on a daily basis is degrading and stressful. This
is perhaps doubly true for many people of technical bent, who are
often less enthusiastic about playing the socio-political games
necessary for establishing power relationships with other people.
People with a job to do and a distaste for politics will be willing
to pay an extra premium to free themselves from having to beg
every day.

Consider the analogies between transportation and computation
technologies.  In large cities individuals pay a high premium to own
and operate individual automobiles. Parking is a nightmare, traffic
congestion is a disaster, the hardware is idle most of the time and
chewing up scarce space resources. Mass transit, on the other hand,
moves 5-30 times as many people per lane of right-of-way per hour,
does not require parking, gets much fuller use of the hardware, and
(when well-run) offers convenience unmatched by the personal
automobile (the user doesn't have to maintain anything, conflict with
the law, fight traffic, look for a parking space, worry about theft,
etc.). Mass transit systems are also 5-10 times safer than private
automobiles, even when they run on exactly the same infrastructure.

But how many people prefer to drive an automobile instead of taking
mass transit? Why? The same principle is at work here as in the
decline of centralized computing. I still ride my bikes and take the
occasional bus, and live without a car, but the instant I got access
to my first lousy MS-DOS PC I never again did anything on the
University Mainframe unless I had absolutely no alternative. I'm
sure that some corporations have excellent MIS outfits, but once
people get a taste of being in control, they don't give up that
feeling readily.

Dan Mocsny
dmocsny@uceng.uc.edu

pcg@aber-cs.UUCP (Piercarlo Grandi) (11/30/89)

In article <1989Nov28.204639.11237@jarvis.csri.toronto.edu> jonah@db.toronto.edu (Jeffrey Lee) writes:
    quiroz@cs.rochester.edu (Cesar Quiroz) writes:
    
    >    THE PROBLEM IS NOT TECHNOLOGICAL.

The problem is educational and sociological/political, agreed. Too bad that
too many CPU/system architecture guys assume that system administrators out
there have the same understanding of the issues as they do, can be expected
to second-guess them, and can carefully tune their systems, which is plainly
preposterous.

I remember reading Stonebraker reminiscing on Ingres, saying that he had
chosen static over dynamic indexes because careful analysis indicated they
would be slightly more efficient, and then discovering that Ingres DBAs
would not realize they needed to rebuild the static index after updates had
accumulated, even though this was clearly and insistently documented in the
manuals. This made him a firm believer in foolproof, automatic, self-tuning
technologies, even if they are less effective and imply higher overheads
than those that require handcrafting, because the handcrafting is simply
all too often not available.

And then not everything can be made foolproof:
    
    One of the camps has a predominantly decentralized (fileserver plus
    workstation) model.  General users are only allowed to login to
    workstations leaving the file servers to manage NFS requests plus one
    other task (mail, nameservice, news, YP, etc.)  Performance used to be
    miserable in the middle of the day with constant
    NFS-server-not-responding messages and *slow* response for swapping.

This is because your sysadmin does not understand diskless machines and
their performance implications. A 40 meg SCSI disc can be had for $450 mail
order; attached to a diskless workstation and used for swapping and /tmp (or
/private), it works incredible wonders. Swapping across the net, or doing
/tmp work across the net, is intolerably silly. When you edit a file on the
classic diskless machine you copy it over block by block to the workstation
from your user filesystem, and then back block by block to the /tmp
filesystem, across the net both ways, typically from/to the same server as
well (while one should at least put every workstation's /tmp and swap on a
different server than the one used for user files and root)...
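
As a rough sketch of the traffic argument (the file size and transfer size
below are illustrative assumptions, not measurements):

    /*
     * With /tmp on the server, an edit drags the file across the
     * Ethernet twice (read from the user filesystem, write to the
     * remote /tmp); with a local /tmp disc only the first copy
     * crosses the wire.  All figures are assumed, for illustration.
     */
    #include <stdio.h>

    int main(void)
    {
        double file_kb   = 200.0;  /* assumed size of the file being edited */
        double nfs_block = 8.0;    /* assumed transfer size, KB             */

        double remote_tmp_kb = 2.0 * file_kb;  /* read + write both remote */
        double local_tmp_kb  = 1.0 * file_kb;  /* only the read is remote  */

        printf("remote /tmp: ~%.0f KB on the wire (~%.0f transfers)\n",
               remote_tmp_kb, remote_tmp_kb / nfs_block);
        printf("local  /tmp: ~%.0f KB on the wire (~%.0f transfers)\n",
               local_tmp_kb, local_tmp_kb / nfs_block);
        return 0;
    }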

One of the greatest jokes I have seen recently is an NFS accelerator that
you put in the *server*, costing $6995 (reviewed in Unix Review or World,
latest issue). That is about the price of the 15 40 meg disks you could
instead add to the *workstations*, one each (and I would certainly not want
to put more than 15 workstations on a single server, as Ethernets are bad
for rooted patterns of communication), thus cutting down dramatically on
network traffic and enjoying reduced latency and higher bandwidth as well.
But of course sysadmins all over will rush to buy it, because of course it
is *servers* that need speeding up... Hard thinking, unfortunately, is more
expensive than kit.
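
The arithmetic behind that comparison, using only the prices quoted above:

    /* One $6995 server-side NFS accelerator vs. $450 local 40 meg
     * discs; the client count of 15 is the figure from the paragraph
     * above, everything else is simple division. */
    #include <stdio.h>

    int main(void)
    {
        double accelerator = 6995.0;  /* NFS accelerator for the server */
        double local_disc  =  450.0;  /* 40 meg SCSI disc, mail order   */
        int    clients     = 15;      /* workstations per server        */

        printf("one accelerator buys %.1f local discs\n",
               accelerator / local_disc);
        printf("discs for all %d clients: $%.0f\n",
               clients, clients * local_disc);
        return 0;
    }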

*Large* workstation buffer caches (e.g. 25% of memory) also help a lot; if
you have Sun workstations, they come with especially bad defaults for the
ratio of buffer headers to buffer slots, which is 8 and should be 1-2, a
crucial point that Sun has clearly misconfigured, making customers waste
memory.
    
    The other camp has workstations and X-terminals.  It allows users to
    login to one of two combination file/compute server machines.  The
    workstations are mostly used to run an X server with remote xterms for
    login sessions.

A wonderful waste of resources... Do you *really* want to have a couple of
network X transactions (and context switches, and interrupts, etc...) for
each typed/displayed character (actually, thanks to X batching, for every
bunch of them), e.g. under any screen editor? Why not use the workstation
for local editing and compiles?

    All CPU time is charged at the SAME hourly rate to encourage users to
    run their jobs on the fastest machines.  The reason:  swapping over NFS
							  ^^^^^^^^^^^^^^^^^

Which, as I said before, is as inane a thing as you can possibly do...

    puts a higher load on the servers than running the same jobs on the
    central machines.  Also, more gain can be had from adding 32MB to each
    server than adding 4MB to each workstation.

Not to mention that multiple workstations waste memory by having multiple
kernels and not sharing executable images, both of which have tended to grow
ever larger. On the other hand, memory is not the only bottleneck.
Multiple workstations have multiple net interfaces, disc controllers, CPUs,
whose work can overlap in real time.

    Both camps are relatively open on quotas: they are imposed from below.
    You (or your sponsor) are charged for your CPU, modem/workstation
    connect time and disk usage.  When your disk partition fills up, you
    can't create any more files.  [Any everyone in your partition will gang
    up on you if you are at the top of the disk usage list.]

Using partitions as a quota mechanism is incredibly gross. Partitions should
be as few as possible, used only to group files with similar backup
requirements. Charging for resource use is the best quota mechanism you have,
because it is self-adjusting (if a resource is too popular, you can use the
fees from its use to expand it). It also simplifies administration, in that
users can waste disc space as they like, as long as they pay for it (the
alternative is the sysadmin educating the users on how to economize on disc
consumption, which only happens if the sysadmin knows more about the issue
than the users do).
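
A minimal sketch of what such a charging scheme amounts to (the tariffs and
the usage figures below are placeholders, not anybody's real rates):

    /* Bill each user for what they actually consume, instead of
     * fencing them into a partition.  Rates and usage are hypothetical. */
    #include <stdio.h>

    struct usage {
        const char *user;
        double cpu_hours;
        double connect_hours;
        double disc_mb_months;
    };

    int main(void)
    {
        const double cpu_rate     = 5.00;  /* $ per CPU hour      */
        const double connect_rate = 0.50;  /* $ per connect hour  */
        const double disc_rate    = 0.10;  /* $ per MB per month  */

        struct usage u = { "someuser", 12.0, 160.0, 35.0 };

        double bill = u.cpu_hours      * cpu_rate
                    + u.connect_hours  * connect_rate
                    + u.disc_mb_months * disc_rate;

        printf("%s owes $%.2f this month\n", u.user, bill);
        return 0;
    }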

    Both camps provide centralized backup, system administration, and
    expertise.
    ^^^^^^^^^

As to this I have some doubts... :-(
    
    Both camps are expanding their facilities.  The centralized computing
    camp is planning to add more memory to one of the central machines
    which is starting to get bogged down with swapping.

This may not help. Adding faster swapping discs should be the answer, and
in particular having multiple swapping partitions on *distinct*, fast
controllers. If the working sets of all the active programs already fit into
memory, that is. Otherwise, adding memory to keep some inactive working sets
core-resident just to avoid swapping them out and in on slow
discs/controllers is less sound than improving swapping device bandwidth
altogether.
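
A back-of-the-envelope sketch of the bandwidth point (the working-set size
and transfer rates are round numbers assumed for illustration only):

    /* How long does it take to pull a swapped-out working set back in?
     * The two-spindle case assumes the swap traffic interleaves evenly
     * across distinct controllers. */
    #include <stdio.h>

    int main(void)
    {
        double working_set_kb = 2048.0;  /* ~2 MB process image        */
        double slow_kb_s      = 300.0;   /* tired old swap disc        */
        double fast_kb_s      = 1000.0;  /* faster disc and controller */
        int    spindles       = 2;       /* distinct swap partitions   */

        printf("slow disc:              %.1f s\n", working_set_kb / slow_kb_s);
        printf("fast disc:              %.1f s\n", working_set_kb / fast_kb_s);
        printf("fast discs, %d spindles: %.1f s\n",
               spindles, working_set_kb / (fast_kb_s * spindles));
        return 0;
    }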
    
    The net result: our experience show that (our brand of) centralized
    computing is a win over diskless-workstations.

The net result is that people like the many contributors to this newsgroup
work very hard to squeeze performance out of state-of-the-art technology by
devising clever architectures, but the single greatest performance problem
users have out there is that system administration is a difficult task, and
nearly nobody does it properly, whether the system is centralized or
distributed. The typical system administrator does not understand the
performance profiles of machines, much less of systems or networks, and does
not have a clue as to what to do to make the wonder toys run, except to ask
for more kit.


PS: if you want to know what I think is the most effective configuration, it
is roughly this: small workstations (say a 4-8 meg, 2-4 MIPS engine) with a
small swap and /tmp disc (say 20-40 megs, 25 msec.) and large buffer caches
(say 1-2 megs, with 1024-2048 buffer headers); large compute servers (say
8-16-32 megs, 8-16 MIPS), again with a large swap and /tmp disc (say 80-160
megs, 20 msec.); and small workstations with very fast disc controllers (say
E-SMD/fast SCSI/IPI boards, 2-4 megs/sec, 2 of them) as file servers holding
the user filesystems (say 4 300-600 meg discs), with the user filesystems
split across the fileservers (say 8-12 workstations per server) and the
shared parts of / and /usr replicated identically on the fileservers to
split the load.

Users would do editing and compiles on the local workstations, and run large
applications on the compute servers. I would also like some X terminals
around for users (rather than developers), or for small-time developers. A
centralized solution is second best, as it has less inherent parallelism and
resiliency, but may be somewhat easier to administer.
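
A quick tally of that configuration, taking midpoints of the ranges given in
the PS above (the choice of midpoints and the cluster size are mine, for
illustration only):

    #include <stdio.h>

    int main(void)
    {
        int    workstations     = 10;         /* 8-12 per file server     */
        double ws_mem_mb        = 6.0;        /* 4-8 meg each             */
        double ws_disc_mb       = 30.0;       /* 20-40 meg swap+/tmp each */
        double server_user_disc = 4 * 450.0;  /* 4 discs of 300-600 meg   */

        printf("memory in the workstations: %.0f meg\n",
               workstations * ws_mem_mb);
        printf("local swap/tmp disc:        %.0f meg\n",
               workstations * ws_disc_mb);
        printf("user filesystem space:      %.0f meg\n", server_user_disc);
        printf("user disc per seat:         %.0f meg\n",
               server_user_disc / workstations);
        return 0;
    }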

Setups with many diskless workstations (a misnomer -- so-called diskless
machines really have remote discs) and a single fileserver, usually with a
single large disc, as I have seen so often, have the worst performance
possible and terrible administrative problems, while setups with many
diskful workstations, each with its own independent set of discs (typically
just one), don't have great performance either, and are just as difficult
to administer.
-- 
Piercarlo "Peter" Grandi           | ARPA: pcg%cs.aber.ac.uk@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcvax!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

desnoyer@apple.com (Peter Desnoyers) (12/01/89)

In article <40009@lll-winken.LLNL.GOV> casey@gauss.llnl.gov (Casey Leedom) 
writes:
>   Your trite answer of ``Fix the bureaucracy'' is totally worthless and
> naive.  It just doesn't happen.  Bureaucracies must be worked with and
> around, but you don't try to change them in order to do your work.
> You'll never get anything done if you wait till the bureaucracy is fixed.

If a corporate organization is too inefficient and is hurting the company,
the problem will be resolved in one of three ways:

  1. the organizational problems will be fixed;
  2. its functions will be dispersed to the current users of the
     organization's services; or
  3. the company (or division) will lose market share, and in the end
     there will be no employees to be encumbered by the bureaucracy, or
     customers to pay for it.

For examples: (1) perhaps DEC's sales staff - haven't they been
extensively revamping it? (2) the shift from central computer centers to
departmental or desktop machines; (3) Sears. All of it.

Don't hold your breath, however. For some educational and government 
organizations (and some of their contractors) these rules may not apply. 
Also, they only hold if there is an advantage to actually having the 
organization in the first place. E.g. if 100% of your computer use is word 
processing, the best computer center in the world probably can't compete 
with desktop micros. Conversely, desktop micros probably aren't the 
appropriate vehicle for payroll processing, so only #1 and #3 apply.

                                      Peter Desnoyers
                                      Apple ATG
                                      (408) 974-4469

barmar@think.com (12/01/89)

In article <2992@uceng.UC.EDU> dmocsny@uceng.UC.EDU (daniel mocsny) writes:
>Consider the analogies between transportation and computation
>technologies.

Good analogy.  However....

>But how many people prefer to drive an automobile instead of taking
>mass transit?

What do you think the answer would be if the automobile owner had to put
the car together himself, had to maintain it himself, and had to maintain
his own roads?  That's the situation in an anarchic workstation environment
-- every user for himself.  With a centralized organization, there are
people responsible for configuring systems, maintaining them, and
maintaining the shared resources such as networks, modems, and printers.

At our company we have nearly 200 Suns (about a dozen of which are file
and/or compute servers) and about a half-dozen Vaxes (a 785, an 8800, and
some 83xx and 6xxx).  Back in the pre-Sun days, when we had one or two
Vaxes that everyone used we developed lots of local software.  As we've
moved into the workstation environment the users still expect to find that
same environment.

At first we were using the anarchy mechanism.  The first few people to get
workstations were the kinds of people who could administer themselves.  But
now that every developer has one they generally expect everything to "just
work", and when it doesn't they want someone to complain to.  In order for
us to respond to problems we need to maintain some measure of control over
how each department uses their resources.

We're trying a middle-of-the-road solution now.  Each department has a
couple of people whose part-time responsibility is to manage that
department's resources.  However, since the central organization still has
the ultimate responsibility, we have to be able to understand their
configuration, so they can't vary their configurations too far from the
company norm.

Barry Margolin, Thinking Machines Corp.

barmar@think.com
{uunet,harvard}!think!barmar

jdd@db.toronto.edu (John DiMarco) (12/01/89)

This discussion has moved away from systems architecture to systems politics.

Let me reiterate my basic point:

If a centralized computing authority is responsive to the needs of its users,
it can do a better job of providing computing resources than a distributed 
setup.

Many of you seem never to have seen a centralized computing authority
which is responsive to the needs of its users. This does not mean that
such authorities do not exist. We have one, as far as I can tell. (BTW, I
don't work for them, and praising them on USENET gives me no personal
benefit whatsoever.)

Consider this: centralized authorities need not be dictatorships imposed
by company/university/government administrations. They can be user 
co-operatives. 

In any case, I don't think further discussions about computing politics
should be carried on in this group, since they have little to do with
systems architecture. I'll be pleased to continue this discussion by mail.

John
--
John DiMarco                   jdd@db.toronto.edu or jdd@db.utoronto.ca
University of Toronto, CSRI    BITNET: jdd%db.toronto.edu@relay.cs.net
(416) 978-8609                 UUCP: {uunet!utai,decvax!utcsri}!db!jdd

peter@ficc.uu.net (Peter da Silva) (12/01/89)

In article <2992@uceng.UC.EDU> dmocsny@uceng.UC.EDU (daniel mocsny) writes:
> Having to appeal to a bureaucrat to obtain resources you need to
> accomplish work on a daily basis is degrading and stressful.

So ask your manager to do it for you. That's part of what your manager's
job is: to act as your gofer. Forcing technical people to be managers is
just wasting resources.

> People with a job to do and a distaste for politics will be willing
> to pay an extra premium to free themselves from having to beg
> every day.

That's one of the reasons technical jobs have lower salaries than managerial
jobs, and why "parallel path" career tracks top out sooner on the technical
side. People are willing to pay more (give up some salary) to stay out of
the business business.

> But how many people prefer to drive an automobile instead of taking
> mass transit?

Depends on whether the Mass Transit system is answerable to the passengers.
If they depend on fares then they *will* be responsive. If they depend on
taxes, they don't care. "We're Metro, we don't have to care". Who is your
central computing authority answerable to?

> sure that some corporations have excellent MIS outfits, but once
> people get a taste of being in control, they don't give up that
> feeling readily.

Even in a University it's possible to keep control in the users'
hands. Competing comp centers are one good way. Since you don't have
anyone in charge who cares about the users, you can't afford a monopoly.
-- 
`-_-' Peter da Silva <peter@ficc.uu.net> <peter@sugar.lonestar.org>.
 'U`  --------------  +1 713 274 5180.
"The basic notion underlying USENET is the flame."
	-- Chuq Von Rospach, chuq@Apple.COM 

yar@basser.oz (Ray Loyzaga) (12/04/89)

In article <32382@winchester.mips.COM> mash@mips.COM (John Mashey) writes:
> How about getting this off the sociology track [because most of it came
> down to "I know good compcenters" vs "I don't], and consider some of
> the interesting system architectural issues, like:
> 
> 1) How much load do X-window terminals put on a host?
> 2) How much load do diskless nodes put on a host?
> 3) If the people are doing the same kind of work, does the host do more
> or less disk I/O in cases 1) or 2)?  How does the Ethernet traffic differ?
> 4) Experiences with X-window terminals: how many can be served, doing what
> kind of work, from what kind of host?
> 5) How do you characterize the kinds of work that people do in any of
> these environments?
> -john mashey	DISCLAIMER: <generic disclaimer, I speak for me only, etc>
> UUCP: 	{ames,decwrl,prls,pyramid}!mips!mash  OR  mash@mips.com
> DDD:  	408-991-0253 or 408-720-1700, x253
> USPS: 	MIPS Computer Systems, 930 E. Arques, Sunnyvale, CA 94086

We have been doing quite a bit of evaluation of X-terminals as an
alternative to workstations for undergraduate computing laboratories.
The testing that we did was on a Mips M120-5, with 16Mb of memory, with
9 NCD16's, 1 NCD19 and a Visual X-19.
We placed 10 pgrads/hons/tutors/programmers in front of these
terminals (one ncd16 was locked in an office!), and performed
sufficient brain surgery to make them behave like second and
third year computer science students (you can guess whether
the surgery involved removal or addition of tissue!).
1. CPU load is hard to quantify: the machine stopped working
   (see the notes on memory below).
2. All our competent Sun users run their applications on the fastest
   machines; this means either our Sun servers or (more likely) our 4 M120s.
   Distributed computing has just meant that our users move from
   one cpu to another whenever a faster one comes along, which makes
   measuring the load on any one Sun difficult.
   When we didn't have the M120's the Sun servers were at their limit
   with the 10-12 diskless/dataless clients that they served.
3. On the basis of ethernet traffic, the analysis showed that a user
   with 5 xterms, running vi, and with an xclock running was doing about
   2-3k worth of I/O on the ethernet; this is our expected typical student
   load (we don't actually expect them to use vi!).
   A Sun 3/50 running suntools was generating ~20k of ethernet traffic for
   the same sort of task (fast editing of pascal source and running it).
   On the basis of this we concluded that the X-terms were no problem on
   the ethernet if they were just used as text-based windowing systems.
   Running a program like xmaze or other graphically oriented X applications
   caused ~30k of traffic on the ether.
4. The X-terms need ~3Mb to be as useful as Suns (~18 xterms, fast response,
   able to store overlaid windows in the backing store, etc).
   They work very well; when served by a lightly loaded, well configured
   machine they can be very fast (reminiscent of blit performance on a
   single-user vax), mainly from fast window creation/deletion. This is
   especially true of the NCD19 (68020 processor).
   I'd expect 15-20 can be served by a 48Mb M120, which should hold
   for our expected student loads. We were supporting 10 once we
   grabbed some boards out of our other machines and made the test
   machine a 32Mb beast.
5. X-terminals are a very good answer for people who want a windowed
   environment for mainly textual work, or program development.
   They will last longer than the server that they are connected to, and
   will therefore end up as a much cheaper solution than workstations
   that need to be replaced or seriously upgraded every few years because
   of the insatiable memory demands of commercial Unix systems.
   Most (90%) of our users fit into this category; the most popular
   "graphical" program run on our Suns is a home-grown troff proofing
   program, and this sort of activity should suit X terminals perfectly.
   People requiring "real graphics" should have their own
   dedicated workstation, but then again that machine should have at least
   24 bit planes, 32Mb ram, and some serious disks, and > 10 mips in the
   engine room. Right now I don't think we can afford to give this to each
   of our users, so we compromise!
6. The above 5 questions should have been asked in something that
   approximates reverse order.

Notes: Each "student" load was consuming ~2-3Mb of the CPU server's memory.
This is what caused the initial hiccup during testing. 48Mb should mean
no swapping during normal use (20 users), although 32Mb was showing a small
amount of swapping during the testing (10 users).
There seems to be a large amount of swap fragmentation, which caused some
of the initial slowdown; this could be an artifact of the way NTEXTs
are treated in RISCOS?
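
A rough headroom check against those figures.  I am assuming the traffic
numbers in point 3 are per second, that one would not want to load a
10 Mbit/s ethernet much past about a third of its raw capacity, and an 8Mb
allowance for kernel and daemons; all three are my assumptions, not part of
the measurements above:

    #include <stdio.h>

    int main(void)
    {
        double ether_kb_s   = 10000.0 / 8.0;  /* 10 Mbit/s in KB/s        */
        double usable       = 0.35;           /* assumed practical limit  */
        double text_user    = 3.0;            /* KB/s, X-term text work   */
        double graphics_use = 30.0;           /* KB/s, xmaze-style usage  */

        int    users        = 20;             /* target class size        */
        double mem_per_user = 2.0;            /* Mb, low end of 2-3 above */
        double base_mem     = 8.0;            /* assumed kernel + daemons */

        printf("%d text users:     %.0f KB/s (%.0f%% of usable ether)\n",
               users, users * text_user,
               100.0 * users * text_user / (ether_kb_s * usable));
        printf("%d graphics users: %.0f KB/s (%.0f%% of usable ether)\n",
               users, users * graphics_use,
               100.0 * users * graphics_use / (ether_kb_s * usable));
        printf("server memory for %d users: ~%.0f Mb\n",
               users, base_mem + users * mem_per_user);
        return 0;
    }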

It is expected that the students will be using a windowing editor
(Rob Pike's sam) and Killian's debugger (pi), which should be a great
win over vi and dbx. X terminals will give us the ability to give a
windowing environment to all our users while fitting into our
shrinking budgets (we are open to donations!!!), and will allow us to
administer the machines with the few technical staff that we have; it should
be almost like ascii terminals connected to some central machines.

Ray Loyzaga,
Basser Department of Computer Science
Sydney University
yar@cs.su.oz.au

cks@white.toronto.edu (Chris Siebenmann) (12/06/89)

pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:
...
| The net result is that people like the many contributors to this newsgroup
| work very hard to squeeze performance out of state-of-the-art technology by
| devising clever architectures, but the single greatest performance problem
| users have out there is that system administration is a difficult task, and
| nearly nobody does it properly, whether the system is centralized or
| distributed. The typical system administrator does not understand the
| performance profiles of machines, much less of systems or networks, and does
| not have a clue as to what to do to make the wonder toys run, except to ask
| for more kit.

 Does anyone? This isn't an idle question; I'm not aware of any papers
or studies on the disk access patterns of modern workstation-oriented
environments. All the disk access pattern studies I know of are
several years old, and done on old-style centralized machines that
everyone talked to via a dumb terminal. No one has really studied a
workstation plus file server environment, or an x-terminal and compute
server one. We're working on it here, and already have gotten some
surprising numbers (well, surprising to me!).

 If people are aware of such performance summaries, please send me
email with the details. In the absence of such studies, I think it's very
hard to come up with accurate speculation (it's also worth noting that
we currently have very bad tools for the average sysadmin to find and
cure performance bottlenecks; tuning a distributed system still
involves a lot of guesswork and experience).

-- 
	"I shall clasp my hands together and bow to the corners of the world."
			Number Ten Ox, "Bridge of Birds"
Chris Siebenmann		...!utgpu!{ncrcan,ontmoh!moore}!ziebmef!cks
cks@white.toronto.edu	     or ...!utgpu!{,csri!}cks

wayne@dsndata.uucp (Wayne Schlitt) (12/06/89)

[note 1:  i have added comp.windows.x to the newsgroups lines.  i
think both groups would be interested in this topic]

[note 2:  i realize that although the subject line says "X-term v.
PCs..." that most of the actual discussion has been on the pros and
cons of MIS departments.  for the most part, this is irrelevant
to me, the company i work at, and our customers. ]




i am very interested in the subject of X terminals vs. PC's running X vs.
diskless workstations vs. disk based workstations, and when to use each
of these systems.

over the next year or so we are going to be replacing most of the
computers on our (nonstandard and slow) lan with something that will
run X.  we are also going to have to start selling systems that run X
and our customers are very interested in low-cost systems that have
reasonably high performance.  (i.e. even if system XYZ has great bang
for the buck and super performance, they wont want it if it costs over
$20k per seat.  if it gives even "ok" performance for around $3-5k per
seat, they will jump at it.)

a little background might help.  we sell a cad package whose main
claim to fame is that we have large processing and picture creation
programs that let our customers put a little bit of information in and
get out finished cad drawings.  most of our customers use small lan's
(fewer than 5 seats), but networking is very important to them.  our
customers will typically spend equal amounts of time in the cpu & disk
bound programs and our graphics package, although any given person may
spend most of his/her time in only one of these two programs.

what i have been thinking about recommending is a "large" main
cpu/disk server that will run the cpu/disk bound programs.  i am not
really sure where the graphics package should run.  would X
terminals give good enough graphics performance, or would it require a
workstation of some sort?  the second question is which is better: x
terminals or pc's running X, and diskless or disk based workstations?


as i see it, each system has advantages and disadvantages. 

X terminals:

  Advantages:

    * "cheap", possibly the least expensive of all.

    * everything you need to run X is already put together for you.
      you dont have to add lan cards, or install operating systems

    * all of the advantages of using a central cpu server:

      * economy of scale. it is usually cheaper to buy one 4meg
        board than 4 1meg boards, and one 300 meg disc than 5 60meg
        discs.

      * one point system maintenance.  only one system to back up.
        only one system to upgrade with a new operating system.  if
        your central system is a multiprocessor, you can add a cpu and
        everyone benefits.  if you add more memory, everyone benefits

      * shared resources.  only one copy of the operating system,
        utilities, and application programs on disc.  only one copy of
        the operating system taking up memory (as opposed to having to
        buy memory for each workstation to run an operating system).
        application programs that are being used by more than one
        person are shared in memory rather than being duplicated on
        each workstation.  since X programs tend to be large, this is
        more important than if you are running different copies of "vi".

    * one of the advantages of decentralized computing:

      * you can add "graphics power" by adding additional X terminals


  Disadvantages:

    * can you upgrade the terminal to use X11R4 (or whatever)?  does
      this make a difference?

    * everything that goes to the screen must go over the lan.  highly
      graphically oriented programs may use up a lot of the lan's
      bandwidth.

    * all of the disadvantages of using a central cpu server:

      * economy of location.  a central computer may be faster and
        cheaper, but it costs time and resources to transmit
        information to and from it.  the central computer may be able
        to figure out what to draw in half the time, but it may take
        longer to get that information to the screen.

      * response time may be inconsistent.  if lots of people are
        trying to do things on the central server, it will take longer
        to get things done.

      * if the main system goes down, everyone stops working.
        personally, i dont really agree with this argument.  if the
        lan goes down, everyone stops working too.  and the bigger/more
        complicated you make each workstation, the more likely it is
        to go down.  you may end up with less _total_ downtime by
        using a central server.  i generally dont consider this to be
        a very important consideration.

      * you can't add cpu power without upgrading the "entire system".
        that is, if you use distributed computing, you can add cpu
        power by just adding another workstation, and you can give one
        person a much faster workstation and hand the old workstation
        to someone else.  of course, there is still the problem of
        what to do when you need to upgrade your lan, so in a sense
        you always have an "entire system" that may need upgrading...



PC's running X windows
  
  Advantages:

    * "cheap", possibly cheaper than X terminals...?

    * you can run other things besides X, such as dos programs.

    * people may already have PC's.  (this is not the case for us)

    * upgrades are definitely possible.  both with newer/faster
      hardware and with newer software.

    * for X programs that run on the central server, all of the
      advantages of centralized computing.

    * for programs that run under dos, all the advantages of
      decentralized computing.  assuming of course that you can use
      the lan for dos.  i am not sure how many dos networks will work
      on the same lan as unix.  pc-nfs may be a solution.

  Disadvantages:

    * you have to put things together.  making sure the X software
      works with the graphics card, lan card and the server.  this may
      be a _major_ problem, depending on how much you know about dos,
      dos related hardware and how many different combinations you
      have. 

    * like the X terminal, everything that goes to the screen must go
      over the lan.  highly graphically oriented programs may use up a
      lot of the lan's bandwidth.

    * for X programs that run on the central server, all of the
      disadvantages of centralized computing.

    * for programs that run under dos, all of the disadvantages of
      decentralized computing.



diskless workstations
  
  Advantages:

    * cheaper than disk based workstations. 

    * you can run other things besides X, such as unix programs.

    * upgrades are almost always possible.  both with newer/faster
      hardware and with newer software.

    * X programs that do a lot of screen i/o can be run locally.  but
      if you run X programs that use a lot of disc i/o locally, you
      can bog down the network.  if your program uses both disk i/o
      and graphics, it may be hard to tell where to run them.  (our
      programs usually do one or the other, but not both)

    * for X programs that run on the central server, all of the
      advantages of centralized computing.

    * for programs that run locally, some of the advantages of
      decentralized computing.  the main difference is that disc i/o
      must still go through the central server.

  Disadvantages:

    * swap space for each workstation must be reserved on the main
      server.

    * the absolute minimum amount of memory needed on a workstation is
      probably around 3 meg, for the operating system and x server.
      this doesnt include the memory for actually doing real work.
      you will probably need a minimum of 1 meg for each x program
      that you plan to run, including xterms and such.  our programs
      are probably going to require about 3-5meg.

      basically, you are looking at 8-16meg of memory for a diskless
      workstation, a lot of which could have been shared by multiple
      users if the programs (and operating system) were run on the
      main server.  (a rough tally is sketched below.)
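
a rough tally of those memory figures (the client count and the application
midpoint are just examples, not measurements):

    #include <stdio.h>

    int main(void)
    {
        double os_and_server = 3.0;  /* unix kernel + x server        */
        int    x_clients     = 4;    /* xterms, clock, etc. (example) */
        double per_client    = 1.0;  /* meg per small x client        */
        double application   = 4.0;  /* midpoint of the 3-5 meg guess */

        printf("per-seat memory: ~%.0f meg\n",
               os_and_server + x_clients * per_client + application);
        return 0;
    }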



disk based workstations
  
  Advantages:

    * this configuration will probably give you the most consistent
      response time.

    * you can run other things besides X, such as unix programs.

    * upgrades are almost always possible.  both with newer/faster
      hardware and with newer software.

    * X programs that do a lot of screen i/o can be run locally.

    * programs that need to do disc i/o that arent to shared files can
      do the disk i/o locally.  

    * for X programs that run on the central server, all of the
      advantages of centralized computing.

    * for programs that run locally, most of the advantages of
      decentralized computing.

  Disadvantages:

    * on the surface, this appears to be the most expensive option.

    * memory requirements are going to be about the same as a discless
      workstation, and again, a lot of this memory could have been
      shared if things were run on the main server.



please let me know of any advantages or disadvantages that i have
forgotten.  also please point out anything you disagree with.
i realize that many of the pros/cons are really two sides of the same
coin.  arguments in the form of "well, the pros outweigh the cons, so
the cons are irrelevant" probably wont help much.  i am much more
interested in _why_ you think the pros outweigh the cons.


anyway, my _real_ concerns with each option are:

X terminals:
  * are they really the cheapest option?
  * are they really fast enough to do useful work?
  * am i going to be stuck two years from now when X11R6 comes out and
    my X terminal is X11R3 based?

pc's running X:
   * compatibility between all of the parts is a real concern.  has
     anyone actually put together a good reliable configuration?  if
     you have, please let me know.
   * will pc-nfs really work well to network the dos programs?  does
     anyone know where i can get a copy of pc-nfs?

diskless workstations:
   * will they bog down the network so much as to make them useless?
   * are they really going to be any cheaper than disc based
     workstations?

disk based workstations:
   * could this really be the cheapest system when everything is
     considered? 
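
a sketch of the per-seat arithmetic behind these questions; every price
below is a placeholder to be replaced with real quotes, and the central
server's cost is simply split evenly across the seats:

    #include <stdio.h>

    struct option {
        const char *name;
        double seat_cost;    /* terminal/pc/workstation hardware, per seat */
        double server_cost;  /* share of the central cpu/disk server       */
    };

    int main(void)
    {
        int seats = 5;    /* typical customer lan size */
        struct option opts[] = {
            { "x terminal",           2500.0, 20000.0 },
            { "pc running x",         3500.0, 20000.0 },
            { "diskless workstation", 5000.0, 25000.0 },
            { "diskful workstation",  7000.0, 15000.0 },
        };
        int i;

        for (i = 0; i < 4; i++)
            printf("%-22s ~$%.0f per seat\n", opts[i].name,
                   opts[i].seat_cost + opts[i].server_cost / seats);
        return 0;
    }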


please mail or post your replies.  i will try to summarize the mail
responses that i get.  


thanks much for any help...


-wayne

jps@wucs1.wustl.edu (James Sterbenz) (12/09/89)

In article <89Dec5.151758est.27219@snow.white.toronto.edu> cks@white.toronto.edu (Chris Siebenmann) writes:
> Does anyone? This isn't an idle question; I'm not aware of any papers
>or studies on the disk access patterns of modern workstation-oriented
>environments. 

...

>We're working on it here, and already have gotten some
>surprising numbers (well, surprising to me!).


Sounds interesting...

Can you give us some idea of what your surprising numbers look like?

-- 
James Sterbenz  Computer and Communications Research Center
                Washington University in St. Louis   +1 314 726 4203
INTERNET:       jps@wucs1.wustl.edu                   128.252.123.12
UUCP:           wucs1!jps@uunet.uu.net

cks@white.toronto.edu (Chris Siebenmann) (12/19/89)

[My apologies for the delay in reply; I got snowed under by real work.
 All of this data is for moderately used Ultrix 2.2 and 3.1 Vaxes.]
jps@wucs1.wustl.edu (James Sterbenz) writes:
[I mention surprising numbers from workload studies.]
| Sounds interesting...
| Can you give us some idea of what your surprising numbers look like?

 The buffer cache hit rate is very high (80% during a parallel kernel
remake on a client machine, approaching 92% on a lightly used
file/compute server). Even under heavy file I/O load (said kernel
remake) name-to-inode resolution accounted for a good 45% of the
block-level activity (more lightly-loaded systems see this soar to
50-60%), with cache hit rates of 96-99%. Asynchronous buffer writes
range from 58% of writes (during the kernel remake) to 25% (average
file/compute server activity). On the other hand, delayed writes are
very worthwhile; about 75% of them are never actually written to disk
during the kernel remake (presumably due to a second write filling the
buffer up). Even during a kernel remake on an NFS-mounted partition,
less than 21% of the blocks read or written were remote ones (at the
filesystem level, about 80% of the reads and 44% of the writes were
remote).

 There's an appallingly high level of symlink reading (probably due to
the filesystem shuffling everyone is doing to accommodate multiple
architectures and diskless clients); 19% of the system-level reads
during the kernel remake were symlink reads (almost all from
name-to-inode resolution), and this soars to 45% on our lightly loaded
central machine.

-- 
	"I shall clasp my hands together and bow to the corners of the world."
			Number Ten Ox, "Bridge of Birds"
cks@white.toronto.edu		   ...!{utgpu,utzoo,watmath}!utcsri!white!cks