[comp.arch] Single user vs. shared

shj@ultra.com (Steve Jay) (03/18/90)

brooks@maddog.llnl.gov (Eugene Brooks) writes:

>The
>small size of the utilization factor completely negates the cost performance
>edge of the Killer Micro inside it.  This is not, however, an argument against
>the Killer Micros themselves.  It is an argument against single user workstations
>that spend almost ALL their time in the kernel idle loop, or the X screen lock
>display program as is often the case.

>Computers are best utilized as shared resources, your Killer Micros should
>be many to a box and sitting in the computer room where the fan noise does
>not drive you nuts.  This is where I keep MY Killer Micros.

If someone measured the time that I spend using the stapler, tape
dispenser, or pocket calculator that I have in my office, they'd
find that each sits idle 99.9...% of the time.  Does this mean that
I shouldn't have exclusive use of these items, and I should have to
go to some central facility whenever I want to staple, tape, or
calculate?

Obviously, single user workstations are not yet so cheap as to be in
the same category as staplers.  But, a $20,000 workstation dedicated
to a > $100,000/year engineer or scientist doesn't seem that outrageous.
The argument that an idle CPU is a wasted CPU becomes less and less
convincing as the cost comes down.  An idle CPU that I can use when-
ever I want, which is then 100% dedicated to me when I want it, could
be the way to optimize MY time.  Improving people productivity is the
name of the game, not improving computer utilization.

I'd be happy to have my CPU (with its maddening fan) in a remote location,
where it could share power supplies, cooling, and disk space.  But I still
want it to be mine. 

Steve Jay
shj@ultra.com  ...ames!ultra!shj
Ultra Network Technologies / 101 Dagget Drive / San Jose, CA 95134 / USA
(408) 922-0100 x130	"Home of the 1 Gigabit/Second network"

brooks@maddog.llnl.gov (Eugene Brooks) (03/19/90)

In article <1990Mar18.023523.4034@ultra.com> shj@ultra.com (Steve Jay) writes:
>If someone measured the time that I spend using the stapler, tape
>dispenser, or pocket calculator that I have in my office, they'd
>find that each sits idle 99.9...% of the time.  Does this mean that
>I shouldn't have exclusive use of these items, and I should have to
>go to some central facility whenever I want to staple, tape, or
>calculate?
The analogy people use here is comparing their car to their personal
computer.  The price tags are even comparable in this case.  The
argument does not hold water.  The car can't be switched between
users in milliseconds.  The computer is an entirely different animal.
You CAN have exclusive access to a CPU in a suitably parallel resource
composed of Killer Micros, yet efficiently share it with others.

By sharing your computer among a small group of people, large enough
to bring the utilization level up to perhaps 50%, you end up with
more computer, not less.  I do think that you should have your own
X display station; however, that cannot be switched between users in
a millisecond or two.
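The arithmetic behind "more computer, not less" can be made concrete with a back-of-the-envelope model (all numbers below are invented for illustration and do not come from the post):

```python
# Toy model: 10 users, each independently demanding CPU 5% of the time,
# pool 10 workstations' worth of capacity.  How much capacity does an
# active user see, on average, compared to one dedicated workstation?
from math import comb

N = 10          # users sharing the pool
P_ACTIVE = 0.05 # fraction of time each user demands CPU
POOL = 10.0     # total capacity, in single-workstation units

def expected_share_when_active(n_users, p, pool):
    """Expected capacity an active user sees, averaging over how many
    of the other users happen to be active at the same moment."""
    total = 0.0
    for k in range(n_users):  # k = number of *other* active users
        p_k = comb(n_users - 1, k) * p**k * (1 - p)**(n_users - 1 - k)
        total += p_k * pool / (k + 1)
    return total

share = expected_share_when_active(N, P_ACTIVE, POOL)
print("dedicated workstation: 1.0 unit")
print(f"shared pool, expected: {share:.2f} units")  # well above 1.0
```

In this toy model the typical active user sees roughly 8 workstations' worth of CPU, since the other users are rarely active at the same instant.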


>I'd be happy to have my CPU (with its maddening fan) in a remote location,
>where it could share power supplies, cooling, and disk space.  But I still
>want it to be mine. 
We are considering engraving users' names on the cpu boards of our massively
parallel Killer Micro powered machine when it arrives.  It will give them
that good feeling of ownership.  Last time I checked there were more processors
than users.  I think that we might also ask them to make the LTO payments for
the machine in return for this feeling of ownership...


brooks@maddog.llnl.gov, brooks@maddog.uucp

slackey@bbn.com (Stan Lackey) (03/19/90)

In article <52817@lll-winken.LLNL.GOV> brooks@maddog.llnl.gov (Eugene Brooks) writes:
>In article <1990Mar18.023523.4034@ultra.com> shj@ultra.com (Steve Jay) writes:
>>If someone measured the time that I spend using the stapler, tape
>>dispenser, or pocket calculator that I have in my office, they'd
>>find that each sits idle 99.9...% of the time.  Does this mean that
>users in milliseconds.  The computer is an entirely different animal.
>You CAN have exclusive access to a CPU in a suitably parallel resource
>composed of Killer Micros, yet efficiently share it with others.
>
>By sharing your computer among a small group of people, large enough
>to bring the utilization level up to perhaps 50%, you end up with
>more computer, not less.

Interesting discussion going on here.  I think though that the choice of
computing style has to be based on the workload.  The situation of a small
group (10?) running large batch style jobs vs. a large group (>25?) running
lots of small interactive jobs seems to inherently fit different models.
The model of, say, a publishing company with 25 writers all using desktop
publishing seems better suited to distributed workstations: highly
interactive, compute bound (constant reformatting, spellcheck, etc.).  If
this workload were centralized, it seems far more horsepower would be
necessary to deal with the overheads of sharing and interconnect management
to get the same response speed.
-Stan

bwong@cbnewsc.ATT.COM (bruce.f.wong) (03/19/90)

In article <1990Mar18.023523.4034@ultra.com> shj@ultra.com (Steve Jay) writes:
>brooks@maddog.llnl.gov (Eugene Brooks) writes:
...
>>Computers are best utilized as shared resources, your Killer Micros should
>>be many to a box and sitting in the computer room where the fan noise does
>>not drive you nuts.  This is where I keep MY Killer Micros.
...
>If someone measured the time that I spend using the stapler, tape
>dispenser, or pocket calculator that I have in my office, they'd
...
>Obviously, single user work stations are not yet so cheap as to be in
>the same category as staplers.  But, a $20,000 workstation dedicated
>to a > $100,000/year engineer or scientist doesn't seem that outrageous.
>The argument that an idle CPU is a wasted CPU becomes less and less
>convincing as the cost comes down.  An idle CPU that I can use when-
>ever I want, which is then 100% dedicated to me when I want it, could
>be the way to optimize MY time.  Improving people productivity is the
>name of the game, not improving computer utilization.
...
>Ultra Network Technologies / 101 Dagget Drive / San Jose, CA 95134 / USA

Your company, "home of the 1 gigabit network", is making computers easier
to share. Sharing computing resources on a network should not be equated
to the bad old days of timesharing.  The computing network can be
engineered to give 100% of the processing power a $100k scientist needs
to get the work done in the proper manner. When extra computing
resources are available (fellow $100k scientist stepped out to get a
coffee and jelly donut) they will get more than 100%.

Staplers and tape dispensers can't be pushed across a wire or fiber,
so sharing them is very inconvenient, but computing power can be.  In almost
all situations it doesn't matter that the application ran on a machine
or machines that are located 1 kilometer away instead of a machine
sitting close enough for you to kick.  The only pieces of equipment that
a computer user should be allowed to physically abuse are those that are
needed for interaction with the computing network: display, keyboard,
mouse;  essentially human I/O devices.  An X terminal fits the bill.
(Also, an X terminal will become obsolete at a slower rate than a
super-mini or killer micro; a business argument that I will not develop
here.)

I think this is more of a psychological issue than a technical
or business issue.  The attitudes that I encounter when I propose such
a sharing scheme can be summed up as:
	"You mean that someone will be using *MY* workstation!"
My reply:
	"Calm down, you'll be using our computing network."

(There's also a case for mips envy: mine is -----er than yours.)

Finally, the cost argument doesn't negate the advantages of sharing,
it just makes sharing cheaper.

Note that SUN trumpets: ``The Network is the Computer.''
Note also that SUN is not offering X terminals like DEC, DG, MIPSco...
-- 
Bruce F. Wong		ATT Bell Laboratories
att!iexist!bwong	200 Park Plaza, Rm 1B-232
708-713-5111		Naperville, Ill 60566-7050

davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (03/20/90)

In article <52817@lll-winken.LLNL.GOV> brooks@maddog.llnl.gov (Eugene Brooks) writes:

| By sharing your computer among a small group of people, large enough
| to bring the utilization level up to perhaps 50%, you end up with
| more computer, not less.  I do think that you should have your own
| X display station; however, this cannot be switched between users in
| a millisecond or two.

  The problem with sharing a computer is that someone gets to be
administrator. And that means making decisions about software and o/s
versions which will impact users. One of the nicest things about a system
of your own, even a small one, is that backups happen when you want,
upgrades happen when you want (and more importantly don't happen when
you don't want), and the configuration is dedicated without compromise
to the productivity of one user.

  Work which must be shared can be on shared machines, and should be.
But work which has a well defined interface can be done on a machine
set up to make its one user productive. For example: my boss doesn't
care what editor I use to write a report, what version of the o/s, etc.
Nor what spreadsheet or other tool I use to bash the numbers. One person
uses 1-2-3 and Word in DOS, I use MicroEMACS and an awk script, someone
else uses vi and sc.

  A shared machine is always a compromise. The administrator does not
want fifteen editors, ten spreadsheets, etc., to keep working. Under VMS
we frequently hit the problem that one user needs a new o/s version to
run one thing, while another user has no budget to upgrade another
package to the new o/s, or the upgrade just isn't available.

  Workstations and central computers both perform valuable functions in
terms of productivity, and I don't think that any central system or
network will replace the workstation, or vice versa.

-- 
bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
            "Stupidity, like virtue, is its own reward" -me

aglew@oberon.csg.uiuc.edu (Andy Glew) (03/20/90)

..> Single user workstations vs. centralized computing

I want to buy a killer micro single user workstation, but that's
because I want to *own* it myself, and not have it taken away from me
when I change jobs/universities etc.  As for my day to day work I
don't mind sharing cycles, as long as I never experience any slowdown.
Fair-share schedulers are a must for any centralized computing
facility that expects me to pay for my portion of the system.
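The fair-share idea can be sketched in a few lines (a toy illustration only; real fair-share schedulers also decay past usage over time, and the user names and numbers below are invented):

```python
# Hypothetical fair-share core: each user buys "shares" of the machine,
# and the scheduler favors whoever has consumed the least CPU relative
# to their shares.
def pick_next(usage, shares, runnable):
    """Return the runnable user with the lowest usage-to-share ratio."""
    return min(runnable, key=lambda u: usage[u] / shares[u])

usage  = {"andy": 40.0, "beth": 10.0, "carl": 90.0}  # CPU-seconds used
shares = {"andy": 2,    "beth": 1,    "carl": 3}     # shares purchased
print(pick_next(usage, shares, {"andy", "beth", "carl"}))  # -> beth
```

Here beth runs next: her 10 seconds against 1 share (ratio 10) beats andy's 40/2 (20) and carl's 90/3 (30), so paying for more shares buys a proportionally larger slice.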

I am much more interested in single user dismountable mass storage in
my office than I am in a single user workstation.  Give me a floptical
disk on my desktop, connected to that centralized compute server!
Plus a color laserprinter and scanner on my desktop.  Give me, right on
my desktop, anything I would otherwise have to get up and walk down the
hall to use (and a new generation of flabby, unexercised computer users
is born).

--
Andy Glew, aglew@uiuc.edu

shj@ultra.com (Steve Jay) (03/20/90)

brooks@maddog.llnl.gov (Eugene Brooks) writes:

>You CAN have exclusive access to a CPU in a suitably parallel resource
>composed of Killer Micros, yet efficiently share it with others.

Maybe you CAN do it (and I'm not sure you can, but that's a different
argument), but will your system administrator LET you do it?

-Steve
shj@ultra.com

bzs@world.std.com (Barry Shein) (03/20/90)

The argument is that because a personal computer/wkstn is idle 99% of
the time, it would be better shared.

The problem is that although this argument seems great in theory, in
practice it tends to have real problems.

When people share a computer things go wrong, the biggest thing that
goes wrong is that one cannot estimate, day to day, what to expect
from the shared computer.

One day it can look up 1000 queries in an hour, the next day you only
get 10 per hour (oops, someone out there is running a CPU hog.)

One day you have lots of disk space, the next day you can't save the
file you just edited AND YOU HAVE NO CONTROL over the situation (oh,
you might have political control, but instead of just cleaning up a
few files you now have to have a Computing Resources committee
meeting.)

One day someone "out there" tickles a bug that keeps crashing the damn
thing...shouldn't happen...that and 50c *might* get you a cuppa (hey,
ya know what happens, no one even *knows* for the first week who's
crashing the thing, certainly not the guilty party, it just keeps
going down.)

And we won't talk about the Animal Farm nature of shared computers,
all pigs are equal, some pigs are, however, more equal.

	Control and predictability, real important.

Ever share a bathroom with a few people? Works in theory, hey, no one
uses the bathroom more than 30 minutes/day so sharing among 10 people
should be fine! Uh-huh. Ever share one bathroom among four or five
people? Don't work too well...

Computers are similar, sure, they're idle 99% of the time, except
never when you need them (like from 3-5PM, typically.)

Why do you think so many frantic hackers became night-owls?

Anyhow, a simple resource sharing argument is just that,
oversimplified. There certainly are resources that can be shared, but
it takes more thought to make it work right than is being presented.
Most sites can hardly put up with sharing a printer among several
people (a printer that's idle 90% of the time, I may add, but never
when you need it.)
-- 
        -Barry Shein

Software Tool & Die    | {xylogics,uunet}!world!bzs | bzs@world.std.com
Purveyors to the Trade | Voice: 617-739-0202        | Login: 617-739-WRLD

sbf10@uts.amdahl.com (Samuel Fuller) (03/20/90)

Another consideration in this discussion of single user workstations
versus shared compute servers is memory.  Which is better: 16 megabytes
of memory on each of 100 workstations, or 1.6 gigabytes on one server,
allocated as needed among 100 X users?

-- 
---------------------------------------------------------------------------
Sam Fuller / Amdahl System Performance Architecture

I speak for myself, from the brown hills of San Jose.

UUCP: {ames,decwrl,uunet}!amdahl!sbf10 | USPS: 1250 E. Arques Ave (M/S 139)
INTERNET: sbf10@amdahl.com             |       P.O. Box 3470
PHONE: (408) 746-8927                  |       Sunnyvale, CA 94088-3470
---------------------------------------------------------------------------

shj@ultra.com (Steve Jay) (03/20/90)

bwong@cbnewsc.ATT.COM (bruce.f.wong) writes:

>Your company, "home of the 1 gigabit network", is making computers easier
>to share.

Gee, glad someone noticed.

Steve Jay
shj@ultra.com  ...ames!ultra!shj
Ultra Network Technologies / 101 Dagget Drive / San Jose, CA 95134 / USA
(408) 922-0100 x130	"Home of the 1 Gigabit/Second network"

P.S.  This is neither an offer to sell nor a solicitation of an offer
      to buy.....

yar@cs.su.oz (Ray Loyzaga) (03/20/90)

Try answering this question: "What will be the most cost effective
solution for a company (given limited system administration resources)
which employs 10 skilled computer application users: 10 killer micros
of the ~10 MIPS class costing $25k each, or 1 or 2 killer RISCs of the
50-100 MIPS class with X terminals?"
The upfront costs will be very close, the killer micros
being slightly cheaper.
E.g., 2 x RC6280 ~$300k plus 10 X terminals $30k (total $330k), versus
10 Sparc1 $250k.
Remember you need to have enough disk and memory (32Mb)!
It is no use comparing 10 X terminals on one 10 MIPS machine as opposed
to 10 10-MIPS workstations.
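Tallying the figures quoted above (the per-unit RC6280 and X terminal prices are inferred from the quoted totals, not stated in the post):

```python
# Cost figures from the post, in dollars.
rc6280_each = 150_000   # inferred from "2xRC6280 ~$300k"
xterm_each  = 3_000     # inferred from "10 Xterms $30k"
sparc1_each = 25_000    # "10 killer micros ... costing $25k each"

central = 2 * rc6280_each + 10 * xterm_each  # shared RISCs + X terminals
micros  = 10 * sparc1_each                   # one workstation per user
print(central, micros)  # 330000 250000
```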

I think I would rather be on the RC6280's, they would have the benefit
of centralized backups, large memories, easier admin, better ability
to share resources and conduct group work.
Upgrades to memory/disk benefit all users, and all the technical staff can
work on the few grunt boxes, using the same resources as the users.  If the
technical staff have really nice workstations of their own, you will find
they won't maintain everyone else's workstations to the same level.
The users (probably engineers or similar) can concentrate on the tasks
they are being paid for rather than learning how to administer a Unix
machine.
The same argument goes for a teaching environment where the user does not
need to control the console of a system to get their work done, all they
want is the limited graphics bandwidth that a windowing terminal provides.

When the RC6280's run out of steam, you do your shopping and buy next year's
version of somebody's super-scalar multi-cpu RISC box, and no-one
need know; the X terminals can stay.  (What is everyone going to do with
their Sun3's now that Sparcstations are all the rage?
They would make great X terminals.)

m1phm02@fed.frb.gov (Patrick H. McAllister) (03/20/90)

It seems to me that an important consideration missing from the discussion up
to now is display I/O bandwidth. My workstation has its display controller
sitting in its backplane and can transfer graphics information to the display
at bus speeds. I can't imagine that several users' graphical interfaces can
be run across an Ethernet at what a Mac/PC/single-user workstation user would
consider to be an acceptable speed. (Of course I don't know this for sure--I
only know that the systems people here are recommending X terminals for users
who don't do much graphics and single-user workstations for those of us who
do.)

It seems to me that the two main advantages of a single user workstation are
predictable turnaround and high display bandwidth, and that users who currently
have their own machines are not going to be happy with a shared one instead
until these considerations are addressed. I can imagine an operating system
for a multi-user machine that maximized responsiveness for interactive users
instead of overall throughput (anyone remember VM/CMS :-), and it seems to me
that providing acceptable turnaround need not require a single user workstation.
Can anybody in netland speak to the other objective: can an X terminal talking
to a remote host provide acceptable performance in running a graphical user
interface like Motif or XView and running (moderately) graphics-intensive
applications under it? (I think we can all agree that CAD requires a
dedicated workstation, but how about 3D plotting of statistical data, WYSIWYG
word processing with multiple fonts, and so forth?)

Pat

dricejb@drilex.UUCP (Craig Jackson drilex1) (03/20/90)

Many of the participants in discussion of shared computers vs single-user
computers seem to believe that there is an absolute answer.  However,
it's really a tradeoff involving the cost of redundant (possibly idle)
equipment, the cost of switching use, and the cost of being denied use
due to others using the equipment.  "Expensive" things will be shared,
whether they are computers or pieces of lab equipment.  Expensive can be
measured by comparing the cost of obtaining extra, possibly idle equipment
against the opportunity cost of waiting to use the shared equipment.
The ability of computers to switch from one task to another tends 
to reduce the opportunity cost of having to share it.

The amount of computing resources we have been willing to devote to a
single user has been rising for many years.  At one time, even the I/O
devices were shared-use: printers, card readers, keypunches.  Later,
it was found useful to devote a teletype to a user for extended periods
of time (time-sharing sessions).  Still later, it became common to
give an 8080-class computer (buried in a terminal) to each user.  Now,
we are discussing whether general-purpose workstations or X terminals
are the proper paradigm for users.  Yet most X terminals have a good deal
more computing power than the single-user workstations of 5 years ago,
and the shared-use computers of 15 years ago.  The difference is the cost.
-- 
Craig Jackson
dricejb@drilex.dri.mgh.com
{bbn,axiom,redsox,atexnet,ka3ovk}!drilex!{dricej,dricejb}

gillies@p.cs.uiuc.edu (03/21/90)

Eugene Brooks writes:
> The analogy people use here is comparing their car to their personal
> computer.  The price tags are even comparable in this case.  The
> argument does not hold water.  The car can't be switched between
> users in milliseconds.  The computer is an entirely different animal.
> You CAN have exclusive access to a CPU in a suitably parallel resource
> composed of Killer Micros, yet efficiently share it with others.

Let me point out that --
(1) The price of a killer micro CPU is not much more than a decent
    commercial electronic typewriter.  And most secretaries get their
    own typewriter... gee, I wonder why?

(2) X-windows is nowhere near the be-all and end-all of interactive
    supercomputing

I like this argument a lot:

> Written  8:35 pm  Mar 17, 1990 by shj@ultra.com in comp.arch
> If someone measured the time that I spend using the stapler, tape
> dispenser, or pocket calculator that I have in my office, they'd
> find that each sits idle 99.9...% of the time.  Does this mean that
> I shouldn't have exclusive use of these items, and I should have to
> go to some central facility whenever I want to staple, tape, or
> calculate?

The high cost of computing in the middle of this century has done
everyone a great psychological disservice.

Killer micros of today are a lot like fluorescent lights -- cheap
to operate, prevalent, and expensive to turn off.  To see a machine
standing idle, when you were raised as a child to "use cycles
efficiently" is a gut-wrenching experience.  Just remember Alan Kay's
prediction:  In the future, computers will come in cereal boxes and we
will throw them away.

Aluminum was once valued at ten times the price of gold.  Now we
use aluminum cans daily and discard (recycle) them without a second
thought.  It looks like computer CPUs, even uniprocessor
supercomputer CPUs, will go the way of aluminum cans.



Don W. Gillies, Dept. of Computer Science, University of Illinois
1304 W. Springfield, Urbana, Ill 61801      
ARPA: gillies@cs.uiuc.edu   UUCP: {uunet,harvard}!uiucdcs!gillies

henry@utzoo.uucp (Henry Spencer) (03/21/90)

In article <2165@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.com (bill davidsen) writes:
>  The problem with sharing a computer is that someone gets to be
>administrator. And that means making decisions about software and o/s
>versions which will impact users...

Yes, it's ever so much nicer to force every user to be a system administrator.
That way you get to see any particular mistake made over and over again,
instead of just once, which keeps life from getting dull.  It's particularly
exciting when networks are involved, which means that one person's mistake
can foul up everyone else, or when security is involved, which means
that one person's mistake can lose you a lot of money and work.

I really don't understand this persistent myth that several dozen amateur
system administrators are better than one professional.  If *only* the
user himself is affected, it doesn't make much difference, but that's
almost never the case in reality.

>... One of the nicest things about a system
>of your own, even a small one, is that backups happen when you want,
>upgrades happen when you want (and more importantly don't happen when
>you don't want)...

No, sorry, these things don't happen when you want.  They happen when
you have time -- which is usually long after you really want -- or when
external constraints force you into it -- which is usually just when you
don't want to be bothered.  For example, few people run backups half as
often as a centrally-administered system run by professionals does.
A good many of them live to regret it.
-- 
MSDOS, abbrev:  Maybe SomeDay |     Henry Spencer at U of Toronto Zoology
an Operating System.          | uunet!attcan!utzoo!henry henry@zoo.toronto.edu

jvm@hpfcda.HP.COM (Jack McClurg) (03/21/90)

>>You will be completely shocked to see how
>>low the processor utilization of single user work stations are.  The
>>small size of the utilization factor completely negates the cost performance
>>edge of the Killer Micro inside it.
>
>No question about it, you waste a lot of resources by keeping them
>isolated and idle.  The point here is that this isn't a technology
>decision, it's a policy decision.  The ability for the individual
>to have 100% of his local computational power available to him
>on demand is a policy widely favored by individuals.  The ability
>to get the most computation per dollar is a policy widely favored
>by central planners.
>
>No one argues that these policies are in any way compatible.  They both
>exist, and each drives a different kind of purchase decision.  Neither
>has anything to do with how you build technology.  Both have much to do
>with you how you buy it, and rather little to do with computer
>architecture, at this late date.
>
>-- Jon

I am about to break net protocol by mentioning a product, but I am sure that
there are other products from different vendors with similar functionality
which could be substituted for my company's product.

HP has a product called Task Broker which addresses the problem mentioned
above.  It can select an appropriate machine to run a task based on which
machine on the network makes the highest bid to run the task.  The mechanism
used to bid is very general and allows the owner of a workstation to have a
dedicated machine during working hours and make the machine available to others
at other times.
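The post doesn't describe Task Broker's actual bidding mechanism, so the following is purely an invented sketch of the general idea: each machine computes a bid from its idleness and its owner's policy, and the highest bidder gets the task.

```python
# Toy bidding scheme (hypothetical; not Task Broker's real protocol).
# A machine bids 0 during its owner's working hours, so the owner keeps
# a dedicated machine; off-hours it bids higher the more idle it is.
import datetime

def bid(load_avg, owner_hours=(9, 18), now=None):
    now = now or datetime.datetime.now()
    if owner_hours[0] <= now.hour < owner_hours[1]:
        return 0.0               # owner's working hours: don't share
    return max(0.0, 1.0 - load_avg)

machines = {"ws1": 0.1, "ws2": 0.8}   # invented load averages
night = datetime.datetime(1990, 3, 21, 23, 0)
bids = {m: bid(load, now=night) for m, load in machines.items()}
print(max(bids, key=bids.get))        # -> ws1 (the idler machine)
```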

I only mention this because of Jon's and Eugene's statements above.  I think
that you can have a very cost effective environment with distributed
workstations.

Jack McClurg

shj@ultra.com (Steve Jay) (03/21/90)

m1phm02@fed.frb.gov (Patrick H. McAllister) writes:

>I can't imagine that several users' graphical interfaces can
>be run across an Ethernet at what a Mac/PC/single-user workstation user would
>consider to be an acceptable speed.

For some applications, it will take more network bandwidth to move
the graphics images than to move the data & programs needed to generate
the images.  For other applications, the opposite will be true.  It won't
always be the case that it's less load on the network to compute the
images locally.
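A rough worked example of this point (the data set and screen dimensions are invented): for a modest plot, shipping the rendered image can already cost several times the bytes of shipping the raw data and rendering locally.

```python
# Hypothetical comparison: a 3-D plot of 10,000 data points, shipped
# either as raw data (render at the display) or as a rendered frame.
points = 10_000
bytes_per_point = 3 * 4            # x, y, z as 4-byte floats
raw_data = points * bytes_per_point

width, height, bits = 1024, 864, 8 # an 8-bit framebuffer of the period
image = width * height * bits // 8

print(raw_data, image)             # 120000 884736 bytes
```

For highly interactive redraws the image side must be paid per frame, while the data need cross the wire only once; for small images over huge data sets, the balance flips, which is Steve's point.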

Steve Jay
shj@ultra.com  ...ames!ultra!shj
Ultra Network Technologies / 101 Dagget Drive / San Jose, CA 95134 / USA
(408) 922-0100 x130	"Home of the 1 Gigabit/Second network"  

jkrueger@dgis.dtic.dla.mil (Jon) (03/21/90)

bwong@cbnewsc.ATT.COM (bruce.f.wong) writes:

>Sharing computing resources on a network should not be equated
>to the bad old days of timesharing.

It's been noted that the different goals implied by the two slogans
"mainframe on your desk" and "network at your service" represent more
differences of culture than technology.  The technology is perfectly
capable of giving you both.  The emphasis remains very different.
Both goals have merit, but the former gets more press.

Around the DC area I find the analogy compelling that a network of
roads that worked (e.g. could handle peak loads) would do more to
speed my trip from A to B than giving me a Maserati.  The analogy
may be extended that even if both were made more powerful it
wouldn't do much good if there weren't interesting and useful
places accessible by car.

-- Jon
-- 
Jonathan Krueger    jkrueger@dtic.dla.mil   uunet!dgis!jkrueger
The Philip Morris Companies, Inc: without question the strongest
and best argument for an anti-flag-waving amendment.

rogerk@mips.COM (Roger B.A. Klorese) (03/21/90)

In article <2165@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.com (bill davidsen) writes:
>  The problem with sharing a computer is that someone gets to be
>administrator. And that means making decisions about software and o/s
>versions which will impact users. One of the nicest things about a system
>of your own, even a small one, is that backups happen when you want,
>upgrades happen when you want (and more importantly don't happen when
>you don't want), and the configuration is dedicated without compromise
>to the productivity of one user.

...all of which, of course, presumes no connection to a network, which
requires at least as much administration as each standalone system would.
It assumes that economies of scale with regard to shared costly resources
such as peripherals are unimportant. It assumes that site licenses and
other schemes which may be dependent on running like software revisions
are unimportant.  Most important, it assumes that the productivity of one
user is more important than the total productivity of the organization.
-- 
ROGER B.A. KLORESE      MIPS Computer Systems, Inc.      phone: +1 408 720-2939
MS 4-02    928 E. Arques Ave.  Sunnyvale, CA  94086             rogerk@mips.COM
{ames,decwrl,pyramid}!mips!rogerk                                 "I'm the NLA"
"Two guys, one cart, fresh pasta... *you* figure it out." -- Suzanne Sugarbaker

sysmgr@KING.ENG.UMD.EDU (Doug Mohney) (03/21/90)

>The high cost of computing in the middle of this century has done
>everyone a great psychological disservice.

I like this!

>Aluminum was once valued at ten times the price of gold.  Now we
>use aluminum cans daily and discard (recycle) them without a second
>thought.  It looks like computer CPUs, even uniprocessor
>supercomputer CPUs, will go the way of aluminum cans.

Yes, but can we recycle the chips to put into brilliant toasters?

lamaster@ames.arc.nasa.gov (Hugh LaMaster) (03/21/90)

In article <M1PHM02.90Mar20113232@mfsws6.fed.frb.gov> m1phm02@fed.frb.gov (Patrick H. McAllister) writes:
>to now is display I/O bandwidth. My workstation has its display controller

This is quite correct.  This is an important consideration in what is
optimal.

>only know that the systems people here are recommending X terminals for users
>who don't do much graphics and single-user workstations for those of us who
>do.)

This is the basic rule of thumb, absent a closer look at your requirements.
I would state, though, that "Mac-like" can mean several things.  The X Window
System can do most Mac-like things just fine, although particular X
terminals may not be fast enough for your particular application.  But some
are fast enough for most such applications.  When people say "graphics",
they usually mean image processing, rendering of 3-D objects in full color,
etc.  If you need "graphics" in this sense, the X-Terminal approach doesn't
add up.

>Can anybody in netland speak to the other objective: can an X terminal talking
>to a remote host provide acceptable performance in running a graphical user
>interface like Motif or XView and running (moderately) graphics-intensive
>applications under it? (I think we can all agree that CAD requires a

In a word, yes.  I run that way every day, and at this minute.  The limiting
factor when I run applications like Framemaker is the speed of the server
I am using and the number of people I share it with.   But, X is *not*
the problem.  

**********************************

Comment:  I think Eugene Brooks was trying to make a point, which got lost:
"Killer Micro" meant the CPU, not whether the CPU was packaged for a desktop.

Perhaps we need a new term for 15-100 VUPS desktop systems, killer micro based.

"Killer Desktops" anyone?


  Hugh LaMaster, M/S 233-9,  UUCP ames!lamaster
  NASA Ames Research Center  ARPA lamaster@ames.arc.nasa.gov
  Moffett Field, CA 94035     
  Phone:  (415)604-6117       

pjg@acsu.Buffalo.EDU (Paul Graham) (03/21/90)

gillies@p.cs.uiuc.edu writes:


|Let me point out that --
|(1) The price of a killer micro CPU is not much more than a decent
|    commercial electronic typewriter.  And most secretaries get their
|    own typewriter... gee, I wonder why?

the same reason people who do data entry all day get their own terminal.

|(2) X-windows is nowhere near the be-all and end-all of interactive
|    supercomputing

perhaps, but that doesn't mean that a nice terminal with a nice channel to
a room full of mips isn't a good way to go (rob pike makes this argument
better than i do).

|I like this argument a lot:

|> Written  8:35 pm  Mar 17, 1990 by shj@ultra.com in comp.arch
|> If someone measured the time that I spend using the stapler, tape
|> dispenser, or pocket calculator that I have in my office, they'd
|> find that each sits idle 99.9...% of the time.  Does this mean that
|> I shouldn't have exclusive use of these items, and I should have to
|> go to some central facility whenever I want to staple, tape, or
|> calculate?

|Killer micros of today are a lot like fluorescent lights -- cheap
|to operate, prevalent, and expensive to turn off.  To see a machine
|standing idle, when you were raised as a child to "use cycles
|efficiently" is a gut-wrenching experience.  Just remember Alan Kay's
|prediction:  In the future, computers will come in cereal boxes and we
|will throw them away.

nice "systems", as opposed to the killer micro that drives them, are not
"cheap" just yet (but getting better every day).  what i'd like to see is
a nice mechanism that lets the x terminal find an idle workstation and
attach to it, while notifying users who are selecting from
a pool of workstations that that workstation is no longer idle.  we have
"labs" with workstations and x terminals.  people have (or soon will have)
better access to x terminals (for various reasons) but all the xterminal
users jump on the same backend while workstations stand idle.

it may be the case that the big step needs to be in the communication
channel.  i can buy a 10 MIP cpu for my multi for 2K.  an x terminal for
1.5k (a bit more for a NeWs terminal).  my multi should soon have 128-256MB
of memory and in excess of 600MB of swap.  i'm loath to build a facility
full of workstations so configured.  of course i work at a university, so
maybe it's just a matter of budgets. (i've recently made a similar case but
including software expense in comp.unix.questions)

sorry this doesn't have much to do with computer architecture.

ian@sibyl.eleceng.ua.OZ (Ian Dall) (03/21/90)

In article <1990Mar19.220617.26370@world.std.com> bzs@world.std.com (Barry Shein) writes:
>
>The argument is that because a personal computer/wkstn is idle 99% of
>the time therefore it would be better shared.
>
>The problem is that although this argument seems great in theory, in
>practice it tends to have real problems.
>
>When people share a computer things go wrong, the biggest thing that
>goes wrong is that one cannot estimate, day to day, what to expect
>from the shared computer.
>
>One day it can look up 1000 queries in an hour, the next day you only
>get 10 per hour (oops, someone out there is running a CPU hog.)

It is not just the mips which are being shared, it is also the code.
With a central machine you only have to find memory for your kernel,
emacs, etc. once.  With N machines you have to find it N times.  There are
significant technical advantages to a shared machine, as has already been
pointed out by others.

Some people complained about not being able to run the software they
want on a shared machine, but I don't buy that.  So long as you have a
big enough disk quota you can run what you like.  If you don't have
the disk space, buy more disks.  I fail to see how attaching the disk
that would have gone with your workstation to the central machine could
be a less effective way of getting the disk space for your
favorite application (at least you then only need space for the extra
utilities, not the entire system).  Instead of buying a workstation,
try offering to buy the central system a disk on the condition that
you are the only one with a quota on that disk.

The other problem seems to be that, sure people would like to be able
to use spare capacity, but they like to be guaranteed a certain
minimum number of cycles. Well, let me propose the guaranteed share
scheduler! I doubt if this is new but I'll propose it anyway!  Suppose
the total number of cycles per unit time is T, the maximum number of
users is M and the number of active users is A. Every user should get
max(T/A, T/M) cycles per unit time. The T/M is guaranteed, the T/A -
T/M is the bonus for being shared.  Of course, for the convenience to
approach that of your own workstation, you need T/M to be reasonably
large.  One killer micro's worth, maybe?
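
A minimal sketch of the proposed guaranteed-share rule, using the T, M,
and A symbols from the paragraph above (the Python is my illustration,
not anything from the post):

```python
def guaranteed_share(T, M, A):
    """Cycles per unit time for each active user under the
    guaranteed-share scheme: T total cycles per unit time,
    M maximum users, A currently active users (1 <= A <= M)."""
    return max(T / A, T / M)

# A lone night owl gets the whole machine (T/A dominates); at full
# load (A == M) everyone still gets the guaranteed floor of T/M.
```

If T is sized so that T/M is about one killer micro's worth, the floor
matches a private workstation and the bonus is everything the idle
users aren't burning.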

The final problem is the fascist system manager.  I don't know how many
of these there really are; I suspect that most are only trying to make
the best use of too-limited resources, and that buying more resources is
the best solution.  If you really do have a fascist system manager,
then sack them.

>Why do you think so many frantic hackers became night-owls?

Maybe because they want *more than* 1 workstation worth of cpu? On a
single user machine they don't have that option.

-- 
Ian Dall     life (n). A sexually transmitted disease which afflicts
                       some people more severely than others.       

shj@ultra.com (Steve Jay) (03/21/90)

henry@utzoo.uucp (Henry Spencer) writes:

> >  The problem with sharing a computer is that someone gets to be
> >administrator. And that means making decisions about software and o/s
> >versions which will impact users...
 
> Yes, it's ever so much nicer to force every user to be a system administrator.

> I really don't understand this persistent myth that several dozen amateur
> system administrators are better than one professional.

I think it's possible to have the best of both worlds...single user
workstations with the benefits of central administration, including
backups, network fiddling, etc.  Not easy, but possible, to take
most of the burden of system administration off of most users, but
still leave each user with the warm and fuzzy feeling of having
his/her own machine.

Steve Jay
shj@ultra.com  ...ames!ultra!shj
Ultra Network Technologies / 101 Dagget Drive / San Jose, CA 95134 / USA
(408) 922-0100 x130	"Home of the 1 Gigabit/Second network"

barmar@ (Barry Margolin) (03/21/90)

In <1990Mar21.010346.6552@ultra.com>, Steve Jay (shj@ultra.com) writes:
>henry@utzoo.uucp (Henry Spencer) writes:
>> >  The problem with sharing a computer is that someone gets to be
>> >administrator. And that means making decisions about software and o/s
>> >versions which will impact users...
>> I really don't understand this persistent myth that several dozen amateur
>> system administrators are better than one professional.
>I think it's possible to have the best of both worlds...single user
>workstations with the benefits of central administration, including
>backups, network fiddling, etc.

The original posting claimed that the benefit of single-user systems was
that system administrators don't bother the users by forcing upgrades at
inconvenient times, etc.  How do you claim they can be administered
centrally without the users noticing?  For instance, suppose there's an
automated network backup system (something we're planning on using for the
100 or so Macs on our network), but perhaps it requires a particular system
version.  How do you ensure that backups are performed without forcing
every user to go through the hassle of upgrading their systems?  What do
you do about the users who don't feel like upgrading just yet (perhaps
they haven't gotten around to getting the upgraded version of some
application so that it will work with the new system)?
--
Barry Margolin, Thinking Machines Corp.

barmar@think.com
{uunet,harvard}!think!barmar

usenet@nlm-mcs.arpa (usenet news poster) (03/21/90)

Let's view the net as an architecture for a moment.  What is the most
cost-effective way to provide computing/network access to a large number
of people?

More than a decade ago it was hardwired terminals and central minis. 
For the past decade or so, it has been local processors (micros) with 
loose network interconnections, or perhaps moderately coupled local 
processors (diskless workstations).  Now it looks like X-terminals.  
They give a reasonably good quality text and 2D graphics interface, 
don't flog the net like a diskless WS, and avoid the cost of duplicating 
disk drives etc. for each individual processor/desktop.

Question: Is this a temporary aberration or the shape of the future?

What will happen when the nets are 10x faster, disks 10x cheaper, etc.?

				David States, National Library of Medicine
				(usual disclaimer, views my own only)

jhallen@wpi.wpi.edu (Joseph H Allen) (03/21/90)

Sharing users is definitely more efficient.  Four processors can easily share
one user.  However, when you get up to eight processors, significant thrashing
begins to occur.

:)

-- 
            "Come on Duke, lets do those crimes" - Debbie
"Yeah... Yeah, lets go get sushi... and not pay" - Duke

shj@ultra.com (Steve Jay) (03/21/90)

barmar@ (Barry Margolin) writes:

>How do you ensure that backups are performed without forcing
>every user to go through the hassle of upgrading their systems?  What do
>you do about the users who don't feel like upgrading just yet (perhaps
>they haven't gotten around to getting the upgraded version of some
>application so that it will work with the new system)?

The next words in my previous article were "Not easy, but possible".

I don't claim to be able to answer these questions in all cases, but
I think "central administration, single user workstations" can be
handled most of the time.  For the specific example, you can do network
backups without requiring the same OS version on all machines.  In fact,
you can do network backups of machines from different vendors. The key
is probably mutual trust & cooperation between the user & administrator. 
Oops, I just shot my original argument, which was that users want
single user systems because they can't get what they want from their
administrator.

The bottom line is that both central servers & single user workstations
are likely to be around for a long time.  Tastes great, less filling.

Anyway, we've kind of drifted off the subject of comp.arch.  It's
an interesting issue.  Is there a more appropriate newsgroup for it?

Steve Jay
shj@ultra.com  ...ames!ultra!shj
Ultra Network Technologies / 101 Dagget Drive / San Jose, CA 95134 / USA
(408) 922-0100 x130	"Home of the 1 Gigabit/Second network"
 

hrich@emdeng.Dayton.NCR.COM (George.H.Harry.Rich) (03/21/90)

Speaking as a user, I find that there are applications where it's really
important for me to be in step with everyone else, and others where I
feel I can do my best job if I'm allowed to time upgrades, select
my own software, etc., etc.  My own feeling is that the best approach is
networked individual systems, where the software and data that must be
synchronously updated are maintained on a network server, and the software that
does not need to be in synchronization sits on my workstation.

This has the advantage that the professional administrator can stick to
dealing with the general needs of the organization without messing with the
special requirements of individuals, and at the same time it saves me the
problem of taking forced changes and upgrades when the organization doesn't
need them.

Of course this is expensive.  But have you looked at the cost of the people
using these systems lately?

I don't think that the argument that most desktops have terribly low
average utilization levels amounts to a hill of beans.  Expensive people
have been kept waiting for inexpensive computers for a decade by
others who have been trying to optimize computer utilization rather than
the overall operational cost and effectiveness of organizations.

Regards,

	Harry Rich

Disclaimer: The ideas expressed here are my own and not necessarily those
	of my employer (who would be glad to sell you either kind of system).

sysmgr@KING.ENG.UMD.EDU (Doug Mohney) (03/21/90)

In article <34853@news.Think.COM>, barmar@ (Barry Margolin) writes:
>?  What do
>you do about the users who don't feel like upgrading just yet (perhaps
>they haven't gotten around to getting the upgraded version of some
>application so that it will work with the new system)?

Or when the upgrades break the existing applications, and the systems
manager doesn't have time to fix that user's problem?

"But it's only one person out of the company..." The greatest good for
the greatest number...? Euhhhhhh. Computing technology should be liberating,
not enslaving.

				Doug

henry@utzoo.uucp (Henry Spencer) (03/22/90)

In article <76700181@p.cs.uiuc.edu> gillies@p.cs.uiuc.edu writes:
>(1) The price of a killer micro CPU is not much more than a decent
>    commercial electronic typewriter.  And most secretaries get their
>    own typewriter... gee, I wonder why?

Probably because typewriters don't need sysadmins, and therefore it is
cheap to buy one for each heavy user.  Computers are different.
-- 
Never recompute what you      |     Henry Spencer at U of Toronto Zoology
can precompute.               | uunet!attcan!utzoo!henry henry@zoo.toronto.edu

rwa@cs.AthabascaU.CA (Ross Alexander) (03/22/90)

In article <1990Mar20.174931.2202@utzoo.uucp>, henry@utzoo.uucp (Henry Spencer) writes:
> Yes, it's ever so much nicer to force every user to be a system administrator.
> That way you get to see any particular mistake made over and over again,
	[ much totally correct observation edited for brevity ]
> don't want to be bothered.  For example, few people run backups half as
> often as a centrally-administered system run by professionals does.
> A good many of them live to regret it.

Yes, yes, and yes.  Amateurs, even extremely well-meaning and erudite
ones, are paid not to administer their workstations but to *get their
primary jobs done*, whatever those may be.  Backups are not their primary
job, and in the nature of things get pushed down the queue until they
fall off the bottom.  Then cometh the day of reckoning, and Lo!  there
is no backup, folks.  Guess it's time to redo it.  Sure glad we
remember everything we did :-(.

I might add operator time $ is < rocket scientist time $ by an
appreciable margin.  And the central site gets the backups done.  ( We
do a full backup of everything every working day. )

Either way you cut it (central server or distributed workstations),
you *must have* a professional administrator whose primary job is
administration, or the necessary work just doesn't get done.  Ad hoc
administration by uncoordinated part-timers is a recipe for chaos.

-- 
Ross Alexander    (403) 675 6311    rwa@aungbad.AthabascaU.CA    VE6PDQ

sl@van-bc.UUCP (Stuart Lynne) (03/22/90)

In article <500@sibyl.eleceng.ua.OZ> ian@sibyl.OZ (Ian Dall) writes:
>In article <1990Mar19.220617.26370@world.std.com> bzs@world.std.com (Barry Shein) writes:
>>

>The other problem seems to be that, sure people would like to be able
>to use spare capacity, but they like to be guaranteed a certain
>minimum number of cycles. Well, let me propose the guaranteed share
>scheduler! I doubt if this is new but I'll propose it anyway!  Suppose
>the total number of cycles per unit time is T, the maximum number of
>users is M and the number of active users is A. Every user should get
>max(T/A, T/M) cycles per unit time. The T/M is guaranteed, the T/A -
>T/M is the bonus for being shared. Of course, for the convenience to
>approach that of your own workstation you need T/M to be reasonably
>large. One killer micro worth maybe?

You have to have a scheduler that is aware of the number of users currently
requesting CPU cycles.  When the number of cycles available is less than
requested, divide them up via a formula where every possible user is allocated
a fixed percentage of CPU cycles (such that all the users'
allocations add up to 100 percent).

When cycles are scarce, you get at least your allocation.  When cycles are 
available because there is no one else around (at 3:00 AM, for example), you 
can get access to a MUCH larger number of cycles.

For example, if there are 50 people using a 50 MIPS Killer Micro Mini
Mainframe (TM), each would be allocated 2%.  During the day, when *all* 50
people are in and pounding on the keyboard, they would each get about 1 MIPS
worth of CPU if they need it.  At night, two late-night programmers doing big 
makes could each get 50%, or 25 MIPS.

The scheduler will have to factor in system overheads as well of course.

Personally I'd much rather get a guaranteed 2% of a KMMM(TM) with the
potential of using it *all* when no one else is around than to get 100% of a
much smaller machine.
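
The fixed-percentage variant above amounts to a proportional-share rule:
shares sum to 100%, and cycles left idle are split among whoever is
active in proportion to their shares.  The function and numbers below
are my illustration of the 50-user KMMM example, not code from the post:

```python
def effective_mips(total_mips, shares, active):
    """shares: per-user fixed fractions summing to 1.0.
    Active users split the machine in proportion to their shares,
    so each always gets at least total_mips * share (the guarantee)."""
    denom = sum(shares[u] for u in active)
    return {u: total_mips * shares[u] / denom for u in active}

# 50 users, each allocated 2% of a 50 MIPS machine:
shares = {u: 0.02 for u in range(50)}
daytime = effective_mips(50.0, shares, range(50))  # about 1 MIPS each
night = effective_mips(50.0, shares, [0, 1])       # 25 MIPS each
```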

-- 
Stuart.Lynne@wimsey.bc.ca ubc-cs!van-bc!sl 604-937-7532(voice) 604-939-4768(fax)

anders@penguin (Anders Wallgren) (03/22/90)

In article <268@van-bc.UUCP>, sl@van-bc (Stuart Lynne) writes:
>
>You have to have a scheduler that is aware of the number of users currently
>requesting CPU cycles. When the number of cycles available is less than
>requested divide it up via a formula where all possible users are allocated
>a fixed percentage of CPU cycles (such that the total of all users
>allocations add's up to 100 per cent).
>
>When cycles are scarce you get at least your allocation. When cycles are 
>available because there is no one else around (at 3:00 AM for example) you 
>can get access to a MUCH larger amount of cycles.
>
>For example if there are 50 people using a 50MIPS Killer Micro Mini
>Mainframe (TM), each would be allocated 2%. During the day when *all* 50
>people are in and pounding on the keyboard they would each get about 1MIPS
>worth of CPU if they need it. At night two late night programmers doing big 
>make's could each get 50% or 25MIPS.
>

Karl Marx would be proud...

ed@lupine.UUCP (Ed Basart) (03/22/90)

Obviously I am biased (due to my employment situation), but I believe
X terminals have a semi-permanent advantage over workstations.  
The real difference between a workstation and an X terminal (or in our 
argot, network display station) is the fact that one can build a 
point product that purely runs X, avoids all the trash, and is 
consequently highly integrated and "cheaper" and sometimes "better".  
Workstations are low-cost (read: microprocessor-based) computers with 
a display plopped on top.  Network display stations go one step further by
starting with the display and wrapping "just enough" hardware
around it to get it to work effectively.

In the case of diskless workstations versus network display stations,
I think many observers would agree that battles rage on, but that
network display stations have won the war.  Diskless workstations are
an abomination and aberration that were contrived to reduce cost.  The
result is an amputated system sliced through some rather important
arteries.  Just put your sniffer on a network of diskless nodes and
watch the flow of blood that is paging traffic.

So, when one views a workstation as a general-purpose platform that has
to keep getting faster and faster, with ever more memory, disc, and
floating point (remember, many resources must be added to feed the
ravenous appetite of Un*x), it will cost more than a 
"display-only" network display station with its relatively simple
operating environment.  If workstations evolve to become more like
network display stations, then we will be back to the CISC-versus-RISC 
arguments that have given us all such entertaining reading here in this
forum.

As long as the workstation is a pile of boxes and add-in boards, it 
will always cost more than the corresponding X terminal.  And suppose
the soothsayers are right, costs plummet, and a diskful workstation 
comes to cost $499 and an X terminal $475 -- who cares?  As long as the 
X terminal remains dedicated to a single, simple function, it will work 
better and be the desktop device of choice, because it has a million 
fewer lines of code to worry about.

Borrowing a line from Eugene Brooks:

	THERE IS NO ESCAPE FROM THE ATTACK OF THE KILLER X TERMINALS

(I had to put that in caps so that folks can hear me over the roar 
of the fans in their workstations.)

-- 

Ed Basart,  350 N. Bernardo Ave., Mountain View, CA 94043, (415)694-0650
uunet!lupine!ed

linimon@nominil.lonestar.org (Mark Linimon) (03/22/90)

In article <1990Mar20.174931.2202@utzoo.uucp>, henry@utzoo.uucp (Henry Spencer) writes:
> I really don't understand this persistent myth that several dozen amateur
> system administrators are better than one professional.  If *only* the
> user himself is affected, it doesn't make much difference, but that's
> almost never the case in reality.
 
I'll have to disagree with one of Henry's implicit assumptions here, which
is that most organizations will supply such a "professional."  In my
experience with small and medium-size [engineering] companies, management
does not feel that system administration is an undertaking that requires
either time or personnel.  Given that one has some knowledge of system
administration, one will be 'volunteered' to do it.  With a centralized
system, one gets to do a whole group's worth of system administration.
With a decentralized system, one gets to do one system's worth.

Assuming that management feels it's a zero-effort activity, one is not
going to get brownie points, extra credit, overtime, or even a thank-you
for either; in fact, one may be criticized for "wasting time".  So how
much free time would one like to spend on it?

I'm not saying this is right, just common, and I speak from repeated
experience.  Make mine decentralized.

Mark
-- 
Mark Linimon / Lonesome Dove Computing Services / Southlake, Texas
  linimon@nominil.lonestar.org      ||  "I'm getting too old for this..."
    {mic, texbell}!nominil!linimon  ||    -- Guy Clark (ain't we all, Guy...)

hrich@emdeng.Dayton.NCR.COM (George.H.Harry.Rich) (03/22/90)

In article <268@van-bc.UUCP> sl@van-bc.UUCP (Stuart Lynne) writes:
>In article <500@sibyl.eleceng.ua.OZ> ian@sibyl.OZ (Ian Dall) writes:
>>In article <1990Mar19.220617.26370@world.std.com> bzs@world.std.com (Barry Shein) writes:
>>>
>
...
>Personally I'd much rather get a guaranteed 2% of a KMMM(TM) with the
>potential of using it *all* when no one else is around than to get 100% of a
>much smaller machine.
>
>-- 
>Stuart.Lynne@wimsey.bc.ca ubc-cs!van-bc!sl 604-937-7532(voice) 604-939-4768(fax)

My experience in shared environments is that I can't get that guaranteed
2%, no matter how good the scheduler is.  There is always maintenance, system
failure, etc., etc.

My relatively slow desktop workstation takes care of the small job I have
to get done in the next 10 minutes much more reliably than any shared system;
in the event of a system failure, or maintenance, both of which occur on
the desktop, I have redundancy -- i.e. borrow the desktop on the next desk.

I'll have to admit that with a different kind of work pattern, I might prefer
the really fast shared system, but for most environments availability
rather than compute power is the issue.

Regards,

	Harry Rich

Disclaimer:  Again, my ideas on this subject are my own, and not necessarily
	those of my employer.

davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (03/22/90)

In article <1771@aurora.AthabascaU.CA> rwa@cs.AthabascaU.CA (Ross Alexander) writes:

| Yes, yes, and yes.  Amateurs, even extremely well-meaning and erudite
| ones, are paid not to adminstrate their workstations but to *get their
| primary jobs done* be that what may.  Backups are not their primary
| job, and in the nature of things get pushed down the queue until they
| fall off the bottom.  Then cometh the day of reckoning, and Lo!  there
| is no backup, folks.  

  This is true of PCs but not of workstations. The workstation may very
well have most of its filesystems NFS mounted on a large machine anyway,
keeping only the system files and temp local, and in any case can be
backed up by a script run from cron on a regular basis.

  We do that for 400 workstations here, and it just works. Daily
incrementals, weekly full dumps (staggered to spread load), one operator
mounting tapes on the drives. If a sysmgr sets up a good crontab once,
the system will take care of itself for the most part, and detect most
problems and send mail to the professional manager. This leaves the user
to make the choice of when (if) the o/s gets upgraded, etc. Other stuff
like updating the alias and sendmail files gets done by cron, too.
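
The staggered full/incremental policy described above can be sketched as
the decision a nightly cron job would make.  The hostnames, the hashing
trick, and the dump levels below are my own illustration, not Bill's
actual setup:

```python
def dump_level(hostname, weekday):
    """0 = full dump, 1 = daily incremental (dump(8)-style levels).
    Full dumps are staggered across the five weekdays by hashing the
    hostname, so only about 1/5 of the machines do a full each night."""
    full_day = sum(map(ord, hostname)) % 5   # 0 = Monday .. 4 = Friday
    return 0 if weekday == full_day else 1

# A cron job on the backup host would loop over the workstation list,
# run the chosen dump over the net to the tape drives, and mail the
# professional manager about any failure it detects.
```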

  By giving the user a modest system of his/her own, things like
mail/news/editing run at constant and predictable speed, while file,
compute, print, and {n,t,e}roff servers provide cheap shared power to
keep the cost of computing down.
-- 
bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
            "Stupidity, like virtue, is its own reward" -me

wallwey@boulder.Colorado.EDU (WALLWEY DEAN WILLIAM) (03/23/90)

As far as the attack of the killer X-terminals, there really is only one
reason why there even are such things---

X-Windows SUCKS---

	It's so SLOW and such a computer HOG, in terms of both memory
and CPU cycles.  The Unix community has found that in order to
get performance greater than a PC-XT out of most "workstations" running 
X, they have to off-load much of the work to an X terminal -- which
easily increases the cost per user by another $1500 or so!

What you are actually going to see in the 90's is the attack of the
KILLER PCs running OS/2.  PCs running OS/2 have all the benefits that
people have been talking about in single-user systems, but beat the
performance/$$$ ratio of any other computing system out there!
Where else can I spend $5000 and get a machine that compares in pure
performance to the top-quality Suns and other workstations that sell for
up to $30,000?  For that price it even includes a LARGE hard drive that
doesn't have to be shared, a co-processor-based graphics system that
compares to the best workstations, and the largest base of
programs in the world.

A side note:
   Motif, probably the X windowing system that will become the true 
standard for UNIX, was actually a copy of the MicroSoft Windows look and
feel with an added 3-D effect.  It was proposed by MicroSoft and HP (I think?)
to the OSF!  My own wimpy 10 MHz 286 running MSWindows is "snappier" than
the $25,000 workstations running Motif that I use here at the 
University.  I can't wait until I get OS/2.  OS/2 is supposed to be
faster than even MSWindows.  You can just imagine how fast it would be
on a 386 machine compared to running UNIX and Motif on the same 386!  
Another feature of OS/2 coming out next year is a built-in "Page
Description Language" that compares with PostScript (why do you think
Apple dropped its Adobe stock and formed a co-development agreement
with MicroSoft----they saw the "writing on the wall").
This feature is completely device independent and is also used in displaying
graphics to get the highest quality "WYSIWYG", like NeXT's Display
PostScript.  This means an OS/2 shop can use $1000 laser
printers rather than the $6000 ones a Unix shop would be required to
use to get the same quality of output!

Dean Wallwey

mikeb@ee.ubc.ca (Mike Bolotski) (03/23/90)

In article <18713@boulder.Colorado.EDU>, wallwey@boulder.Colorado.EDU
(WALLWEY DEAN WILLIAM) writes:

A whole bunch of stuff about the imminent death of UNIX workstations. 

Superb satire.  Thank you.

But just in case that message was real..
 
|> Postscript.  This means that also an OS/2 shop can use $1000 laser
|> printers rather than $6000 ones that a Unix shop would be required to
|> use to get the same quality of output!

Our Sun cluster uses an Apple LaserWriter, an HP LaserJet, and a TI printer.
Identical to those used on PCs.  A serial port is a serial port.

Now can we get back to architecture discussions?

------
Mike Bolotski, Department of Electrical Engineering,
               University of British Columbia, Vancouver, Canada 
mikeb@salmon.ee.ubc.ca             | mikeb%salmon.ee.ubc.ca@relay.ubc.ca
salmon.ee.ubc.ca!mikeb@uunet.uu.net| uunet!ubc-cs!salmon.ee.ubc.ca!mikeb

peter@ficc.uu.net (Peter da Silva) (03/23/90)

> Where else can I spend $5000, and get a machine that compares in pure
> performance to the top quality Suns and other workstations that sell for
> up to $30,000.

Well, you can spend half that and get an Amiga... and given the incredible
thawball [1] of OS/2 applications and the fact that you can run DOS apps
on the Bridge card at least as well as you can under OS/2, it's probably
got it beat on applications as well.

> OS/2 is supposed to be faster than even MSWindows.

I've seen OS/2. It's no faster than MS Windows, and Windows is a dog.

[1] Thawball (n): Opposite of a Snowball, indicates a shortage. First used
in Shockwave Rider by John Brunner.
-- 
 _--_|\  `-_-' Peter da Silva. +1 713 274 5180. <peter@ficc.uu.net>.
/      \  'U`
\_.--._/
      v

wallwey@boulder.Colorado.EDU (WALLWEY DEAN WILLIAM) (03/23/90)

In article <1199@fs1.ee.ubc.ca> mikeb@salmon.ee.ubc.ca writes:
>
>In article <18713@boulder.Colorado.EDU>, wallwey@boulder.Colorado.EDU
>(WALLWEY DEAN WILLIAM) writes:
>
>A whole bunch of stuff about the imminent death of UNIX workstations. 
>
I don't think there will be a death of UNIX workstations, but I think
that in the 90's you will see people buying PCs for slots that UNIX
workstations commonly fill.  UNIX workstations, I hope, will move up to
a higher plane!  X is slow and clunky, to say the least!  You can get 
decent performance on X terminals, or on an expensive workstation pretty
much dedicated to a single user, but that is expensive!  How much does it
cost to set up a Sun lab with 20 workstations and good printing
facilities?

By the way, MicroSoft has recently been reporting that they are selling
more copies of little ol' MSWindows than Apple is selling Macintoshes!
I can guarantee Apple is selling more Macintoshes than Sun
is selling Sun workstations!!  Granted, MSWindows is not a real
operating system, but most of the people running MSWindows will be
capable of running OS/2 (a real operating system by most people's
standards) with more memory!


>Superb satire.  Thank you.

If you really think that is satire, prove me wrong point by point----
Also, when I say Attack of the Killer PCs, I'm talking about just pure
numbers.

>
>But just in case that message was real..
> 
>|> Postscript.  This means that also an OS/2 shop can use $1000 laser
>|> printers rather than $6000 ones that a Unix shop would be required to
>|> use to get the same quality of output!
>
>Our Sun cluster uses an Apple LaserWriter, an HP LaserJet, and a TI printer.
>Identical those used on PC's.   A serial port is a serial port.

Do they all produce the same quality output from all the programs you
can run on your Sun cluster?  Here at CU, all of our workstations and
our VAX cluster use PostScript to get good-quality output.  If you can
really get as good output on the HP LaserJet doing graphics and
scalable fonts as you can on a PostScript printer, I (and I am sure
others) would like to know how.

>
>Now can we get back to architecture discussions?

I agree this is not the place for discussions of operating systems, or
X or even shared vs single user systems except in the context of arch.
Let's move this discussion to mail or another News Group.

>
>------
>Mike Bolotski, Department of Electrical Engineering,
>               University of British Columbia, Vancouver, Canada 
>mikeb@salmon.ee.ubc.ca             | mikeb%salmon.ee.ubc.ca@relay.ubc.ca
>salmon.ee.ubc.ca!mikeb@uunet.uu.net| uunet!ubc-cs!salmon.ee.ubc.ca!mikeb

The above is not a flame.  No solution is perfect or acceptable in
all situations, but I think that, via PCs, "workstation" environments
are going to be seen a lot more!

Dean Wallwey

wallwey@boulder.Colorado.EDU (WALLWEY DEAN WILLIAM) (03/23/90)

In article <18713@boulder.Colorado.EDU> wallwey@boulder.Colorado.EDU
(WALLWEY DEAN WILLIAM), I wrote:
>Where else can I spend $5000, and get a machine that compares in pure
>performance to the top quality Suns and other workstations that sell for
>up to $30,000.

I should have said:
Where else can I spend $6500, and get a machine at street prices that
compares about the same in performance benchmarks to the medium-quality
stand-alone Suns and other workstations you are likely to find on a desktop.

(Look at Byte's latest Unix benchmarks, primarily the Everex system.
You can get clone systems that give 98% of the speed of the
Everex and cost $4000.  All that need be added are the coprocessor
graphics system, the hard drive, and the operating system, which can
easily be done for under $2500.)

I admit I did get a little carried away in my original posting, but I do
still stand by my general view of X running at least Motif.... I was
actually really looking forward to seeing Motif on our machines until
I saw its performance.  It was only when I saw Motif run on a DecStation
3100, configured almost identically to the ~$39,000 model reviewed in Byte
a couple of months ago, that I saw what I consider an acceptable
(in this case much better than acceptable: blinding) implementation.
Yet the point remains: X running Motif is expensive for the performance
that it actually yields!

Dean Wallwey

sl@van-bc.UUCP (Stuart Lynne) (03/23/90)

In article <295@emdeng.Dayton.NCR.COM> hrich@emdeng.UUCP (George.H.Harry.Rich) writes:
}In article <268@van-bc.UUCP> sl@van-bc.UUCP (Stuart Lynne) writes:
}>In article <500@sibyl.eleceng.ua.OZ> ian@sibyl.OZ (Ian Dall) writes:
}>>In article <1990Mar19.220617.26370@world.std.com> bzs@world.std.com (Barry Shein) writes:
}>>>
}>
}...
}>Personally I'd much rather get a guaranteed 2% of a KMMM(TM) with the
}>potential of using it *all* when no one else is around than to get 100% of a
}>much smaller machine.
}>
}>-- 
}>Stuart.Lynne@wimsey.bc.ca ubc-cs!van-bc!sl 604-937-7532(voice) 604-939-4768(fax)
}My experience in shared environments is that I can't get that guaranteed
}2%, no matter how good the scheduler is.  There is always maintenance, system
}failure, etc., etc.
}
}My relatively slow desktop workstation takes care of the small job I have
}to get done in the next 10 minutes much more reliably than any shared system;
}in the event of a system failure, or maintenance, both of which occur on
}the desktop, I have redundancy -- i.e. borrow the desktop on the next desk.
}
}I'll have to admit that with a different kind of work pattern, I might prefer
}the really fast shared system, but for most environments availability
}rather than compute power is the issue.

I have two different patterns of use. The first which is also the most usual
is pretty typical: reading news :-), reading mail, editing, running small
jobs, doing miscellaneous odd jobs. For this I want my guaranteed response,
but am not worried if some of them take a couple of minutes or so. 

The second pattern is to consume a great deal of CPU/IO resources. For
example checking out a very large source tree, doing a complete make,
generating a release, running test suites, etc. 

While the first type of use can be handled in virtually any environment
(>80286), the second can't unless I'm willing to wait several hours. I don't
mind scheduling it for times when I know there are not too many users around.
But I'd much rather it be done in minutes than hours.

So I stand by my statement. For my use, 2% is great for daily use. When I 
really need to get a lot of work done I'll come in evenings when I can get
greater than 50% of the KMMM's resources for my own use.

Anyway, it will be interesting to see how well this all works. We're
getting a MIPS R3000 based machine in the next month or so. It's a tad
faster than Unix on a 25MHz 386 box. Maybe I'll even try X Windows finally.

-- 
Stuart.Lynne@wimsey.bc.ca ubc-cs!van-bc!sl 604-937-7532(voice) 604-939-4768(fax)

peter@ficc.uu.net (Peter da Silva) (04/08/90)

In article <009349B3.1DCCC880@KING.ENG.UMD.EDU> sysmgr@KING.ENG.UMD.EDU (Doug Mohney) writes:
> Well bud, if you can do that, howcome Commodore Sales and Marketing haven't
> gotten off their tushies to advocate these wonderful little gems?

Commodore Sales and Marketing couldn't sell a $30 Sparc-I clone. If they were
selling Sushi, they'd call it "cold, dead fish". Commodore Sales and Marketing
has taken the amazing (and they are amazing) things that have come out of
Commodore Engineering and convinced everyone it's a bigger Commodore-64. That's
howcome.

This has little to do with comp.arch anymore. Please followup elsewhere.
-- 
 _--_|\  `-_-' Peter da Silva. +1 713 274 5180. <peter@ficc.uu.net>.
/      \  'U`
\_.--._/
      v

guy@auspex.auspex.com (Guy Harris) (04/08/90)

>Also, NFS on the server uses the normal caching mechanism; unfortunately
>swap blocks received from clients should not be cached, because they have
>been paged out by the client's kernel because it believes that they are
>not going to be needed again soon.

Exactly, which is why files used for NFS swapping in SunOS 4.x have the
"sticky bit" set; said bit in SunOS 4.x causes data blocks *not* to be
cached on the server....  (Said bit does not, BTW, cause any copy of
text pages to be saved on the swap area, since SunOS 4.x pages shared
stuff like text directly from the file.)

henry@utzoo.uucp (Henry Spencer) (04/08/90)

In article <3126@auspex.auspex.com> guy@auspex.auspex.com (Guy Harris) writes:
>That's *not* what I remember hearing at Rob's talk!  In fact, in a later
>hallway discussion, I said "so you'd run the editors on the 'terminal' and
>compilers on the compute server, right?" and Dennis Ritchie noted that
>if the 'terminal' used one CPU chip and the compute server used another,
>and you didn't have a cross compiler for the former running on the
>latter, you'd run the compiler on the 'terminal'....

I was probably misremembering; it has been a while.
-- 
Apollo @ 8yrs: one small step.|     Henry Spencer at U of Toronto Zoology
Space station @ 8yrs:        .| uunet!attcan!utzoo!henry henry@zoo.toronto.edu

bzs@world.std.com (Barry Shein) (04/11/90)

From: pcg@aber-cs.UUCP (Piercarlo Grandi)
>In article <8840010@hpfcso.HP.COM> dgr@hpfcso.HP.COM (Dave Roberts) writes:
>  In school we had a lab full of Sun 3/50s which were all diskless (via NFS)
>  to a server.  There were about 50 machines on an ethernet which worked
>
>Note that 50 machines to a single server is *crazy*. I would not go over a
>dozen; and even with multiple servers I think that 50+ hosts doing heavy
>traffic on a single Ethernet requires some careful analysis.

Gee, Piercarlo, do you ever work from facts forward rather than the
other way around? He said the 50 workstations worked fine except
during peak load (finals), what else is new? Every utility on earth is
set up this way. So you say the set-up is crazy?  Why? Because it
worked? Because it offends your intuitive sensibilities?

>  ...Now a bunch of CS majors do a lot of
>  compiling when they're trying to get those projects done (very disk
>  intensive) and during this time it took forever to get anything done.
>
>NFS was designed to exchange data easily across heterogenous machines; a kind
>of automatic 'ftp'. It is being used as a file service system. Too bad that it
>offers *horrible* performance. You suffered the consequences.

I thought compiling hasn't been disk intensive for years, it's CPU
intensive. Does anyone have measurements? That doesn't stop you from
running with this lead and drawing conclusions based on it.

The truth of the matter is that you can't take upper-limit numbers
based on burst activities and derive all sorts of conclusions about
the entire system.

To be frank, I don't trust your intuitions. I'd rather see some data.

Perhaps that's rude.
-- 
        -Barry Shein

Software Tool & Die    | {xylogics,uunet}!world!bzs | bzs@world.std.com
Purveyors to the Trade | Voice: 617-739-0202        | Login: 617-739-WRLD

richard@aiai.ed.ac.uk (Richard Tobin) (04/11/90)

In article <1990Apr10.225542.13662@world.std.com> bzs@world.std.com (Barry Shein) writes:
>I thought compiling hasn't been disk intensive for years, it's CPU
>intensive. Does anyone have measurements? 

That depends on what you're compiling.  If you're compiling hundreds of small
programs on a Unix system with fairly little real memory, most of the time
will be spent paging cpp, ccom, c2 and ld (or equivalent).

I recently built a system which generated about a thousand C files of
about 100 lines each, then compiled them.  During compilation on a
fairly lightly-loaded multi-user Sun with 8 Mb (which took *hours*)
user cpu usage was less than 10% on average.

It's easy to produce the opposite behaviour too.  I have a C function
(the interpreter for a virtual machine) which is 2000 lines long
(including comments).  It takes about 45 minutes to compile with -O4 on
a Sun 4/260 with 32Mb.  This is almost all user cpu time.

(Aside: it takes 10 seconds to compile with gcc, which produces code
that is only 10% slower and has the advantage of being correct.)

-- Richard
-- 
Richard Tobin,                       JANET: R.Tobin@uk.ac.ed             
AI Applications Institute,           ARPA:  R.Tobin%uk.ac.ed@nsfnet-relay.ac.uk
Edinburgh University.                UUCP:  ...!ukc!ed.ac.uk!R.Tobin

fouts@bozeman.ingr.com (Martin Fouts) (04/11/90)

In article <1990Apr10.225542.13662@world.std.com> bzs@world.std.com (Barry Shein) writes:

   [Header omitted]
   From: bzs@world.std.com (Barry Shein)

   I thought compiling hasn't been disk intensive for years, it's CPU
   intensive. Does anyone have measurements? That doesn't stop you from
   running with this lead and drawing conclusions based on it.

	   -Barry Shein

   Software Tool & Die    | {xylogics,uunet}!world!bzs | bzs@world.std.com
   Purveyors to the Trade | Voice: 617-739-0202        | Login: 617-739-WRLD

Measurements.  You want measurements?  I've got *tons* of
measurements.  Reaching into my bag of amusing statistics and
selecting one which supports the contrary point of view....

I randomly selected a program I am currently working on and tried

time cc -g -c frags.c

real 0m12.66s
user 0m5.10s
sys  0m0.93s

time cc -g -o frags frags.o -lbsd -lg

real 0m3.08s
user 0m0.48s
sys  0m0.48s

Note that (user+sys)/real = (6.03/12.66) = 47% for the compile and
(0.96/3.08) = 31% for the link....

Turning the optimizer on to increase the work the CPU is doing

time cc -O -c frags.c

real 0m9.90s
user 0m6.01s
sys  0m0.50s  (user+sys)/real = 6.51/9.9 = 65%

Times are on a fairly unfragmented file system on a 435 line program
on an otherwise idle workstation.

Given what I've seen from the accounting package for my average usage
over the month I would suspect that these numbers are in the ballpark.
Without further analysis I wouldn't go so far as to say that compiles
are I/O dominated but I would go so far as to say that they are *not*
cpu dominated.  (Using a ram disk might cause your numbers to vary.)

Marty
--
Martin Fouts

 UUCP:  ...!pyramid!garth!fouts  ARPA:  apd!fouts@ingr.com
PHONE:  (415) 852-2310            FAX:  (415) 856-9224
 MAIL:  2400 Geng Road, Palo Alto, CA, 94303

If you can find an opinion in my posting, please let me know.
I don't have opinions, only misconceptions.

pcg@aber-cs.UUCP (Piercarlo Grandi) (04/12/90)

In article <1990Apr10.225542.13662@world.std.com> bzs@world.std.com (Barry Shein) writes:
  
  From: pcg@aber-cs.UUCP (Piercarlo Grandi)
  >In article <8840010@hpfcso.HP.COM> dgr@hpfcso.HP.COM (Dave Roberts) writes:
  >  In school we had a lab full of Sun 3/50s which were all diskless (via NFS)
  >  to a server.  There were about 50 machines on an ethernet which worked
  >
  >Note that 50 machines to a single server is *crazy*. I would not go over a
  >dozen; and even with multiple servers I think that 50+ hosts doing heavy
  >traffic on a single Ethernet requires some careful analysis.
  
  Gee, Piercarlo, do you ever work from facts forward rather than the
  other way around? He said the 50 workstations worked fine except
  during peak load (finals), what else is new? Every utility on earth is
  set up this way. So you say the set-up is crazy?  Why? Because it
  worked? Because it offends your intuitive sensibilities?

The setup is crazy because it collapses ungracefully under load. Almost
anything works well if it is used at a fraction of nominal capacity; the
system engineer is the guy who makes things work even under load.

The problem with a 50-workstation Ethernet is that its knee is reached very
quickly as more workstations become significantly active. There are three
possible alternatives that do not guarantee a meltdown:

1) A single large 50-user machine with local fast discs, as it would have
neither wire contention nor network overheads.

2) 5 segments, each with 10 diskless workstations and a small server, would
not have wire contention, because we expect cross-segment transactions to be
very rare.

3) A wire with 50 diskful workstations would experience neither network
contention nor network overheads.

It is a damn interesting research problem to find a performance profile of
each of these solutions for various loads, and a cost profile, and compare
them. It is not an interesting research problem to discuss configuration
with in-built narrow bottlenecks.
  
  I thought compiling hasn't been disk intensive for years, it's CPU
  intensive.

Tell that to Borland! Their compilers are neither... :-). Or maybe :-(.

It depends on how inefficiently and stupidly the compiler is built. Based on
my impressions, I'd say that pcc-derived or -inspired compilers tend to be
disk-traffic intensive, while those with global optimizers tend to be memory
intensive, and thus again usually disc-traffic (paging!) intensive.

If you have infinite memory, either for caching disc blocks, or for avoiding
paging, then both types of compilers obviously tend to become CPU *bound*,
rather than intensive. Of course, if you have infinite resources, any
solution will do.

Yet compile times are often fairly "short", with lots of IO instead,
especially in development environments where you don't optimize but generate
large symbol tables.

  Does anyone have measurements?

Precious few, for the distributed case. For the local case, from which some
inferences, however haphazard, can be extrapolated, we have more data: the
landmark paper on disc caching by J. Smith, and a few others on the
performance characterization of Unix disc access. We also have some
interesting timings for network communications (the CACM paper on efficient
RPC over Ethernet, even if old; the one on the galloping-bits syndrome; the
Amoeba ones; etc.). All these papers are well known, I assume.

  That doesn't stop you from running with this lead and drawing conclusions
  based on it.  To be frank, I don't trust your intuitions. I'd rather see
  some data. Perhaps that's rude.

I'd like to see it as well. I know people are working on that. On the other
hand I think good arguments can be built out of known facts:

1) Ethernet has a well known problem (understatement of the decade) as soon
as average utilization gets over 30-50%.

2) The total conceivable bandwidth of an Ethernet is just over 1MB/sec, but
only when just two stations are using it, and only if the receiving one can
accept full-size back-to-back packets without overruns.

3) Each network transaction takes about 3-5ms. on your typical UNIX machine
(from kernel buffer to kernel buffer); it may take much more, depending on
various misdesigns, and on whether you are instead measuring program to
program times.

4) A diskless workstation being actively used generates about 10-20KB/sec of
network traffic, and about 10-20 packets/second.

5) Many Ethernet boards and their interface software cannot sustain *input*
rates anywhere near the theoretical maximum. In particular there is a limit
to the number of packets/sec. that can be read by many machines.

6) On average, if users are doing mostly editing, one user in 10 has an
active process (but then, why ever give them a workstation each?). If they
are mostly compiling, this ratio worsens substantially.


I will let the interested readers draw their own conclusions based on back of
the envelope arithmetic everybody can do.
-- 
Piercarlo "Peter" Grandi           | ARPA: pcg%cs.aber.ac.uk@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcvax!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk