[comp.arch] fad computing

rayan@cs.toronto.edu (Rayan Zachariassen) (11/25/89)

[ This article mentions no chips, no brands, no benchmarks.  It is
  about fuzzy issues as viewed from a systems management perspective.
  It is also 190 lines...  ]


Like Barry says, hardware technology is NOT the issue at this stage.

The issues (that I can remember now :-) are:

Education.

	Vendors have a tendency to ``forget'' to tell customers about
	the other half of the cost of the machine they're selling: support.
	The way it should be is to buy a slave (aka sysprog) along with
	any machine you buy, but several things conspire against awareness
	of this:

	- Many machines are sold to people that run one application, which
	  could be the O.S. of the machine for all they care.  This means
	  salescritters talk to a lot of people who really just want a
	  turnkey black box.  This means they don't keep the support issue
	  in mind when speaking to customers in other kinds of shops, and
	  certainly never bring it up... "But Mr. Jones, you realize that
	  with 10000$ to spend you really should use a few thousand on
	  proper system support, but unfortunately we don't have anything
	  you could buy for less than 9000$.  For your own good, Goodbye".
	  No way.

	- PCs (more "turnkey" machines) have given people the impression that
	  they can run their own box and that keeping a system running is
	  something that can be relegated to (literally, at times) slave labour.

	- Infrastructure is invisible.  People have to be reminded that it
	  exists, that it is necessary, and to cooperate with it.  Think
	  about your relationship with your Government and its Tax authority,
	  then scale that down to your local shop.  Does the thought give
	  you a warm fuzzy feeling that your money is being well spent on
	  necessities?  Probably not.
	
Psychology.

	People don't like other people spending their money.  In particular
	when they feel they have no authority over the process, or if they,
	personally, don't get special attention for their needs when they
	want it.  It feels like paying something for nothing, if all you're
	paying for is infrastructure.  They start viewing the infrastructure
	provider as a "them" in an arms-length relationship and look for
	ways of improving the service they get.  This typically means
	channelling $$ from large infrastructure to local equipment/labour,
	which undermines the infrastructure support and worsens the situation
	for everyone dependent on it.  Vicious circle.

	Most of such money is of course spent on equipment.  This was the
	pattern behind the vax-class-minis to sun-class-workstations movement
	of a few years ago; people were moving from 1mips shared computing
	to 1.5mips on their desk, preciously guarded (ITS MINE! ALL MINE!
	GET YOUR MITTS OFF MY CPU!!!).  The effect was to go from largish
	empires that wouldn't or couldn't move quickly (largely due to
	lack of mini-class products at the time, later also due to the size
	of capital investment required), to small fiefdoms.  But people
	thought they were happy, and it *is* their money after all.

	So, they bought workstations.  They didn't need workstations.
	(I'm making sweeping statements here to make a point, I'm quite aware
	there are exceptions).  And still, today, most people don't need a
	"high-performance micro" on their desk; what they need is to get a
	job done in a productive (computer) work environment.  Nowadays that
	reads "bitmapped screen, window system, good response time".  If your
	job requires high-bandwidth communication to where you physically
	sit to do your work, then maybe having your own CPU is a reasonable
	way of fulfilling that requirement, but for NO OTHER REASON.

	Nevertheless, workstations were bought in droves for all the wrong
	reasons: status symbol, use-it-or-lose-it funding, and "wow! a
	computer on my desk; neato, now I can ignore you!".

	The end result is a LOT of compute cycles being spent in idle loops.

Economy.

	What happens if you try to fulfill the basic requirements using the
	Display Station approach (the sole purpose of the computer on your
	desk is to manage the interface to you and compute-cheap tasks),
	instead of the Work Station approach (your desktop computer does
	everything)?

	The first given then is that you save a lot of money by providing
	fancy terminals instead of computers on people's desks.  To
	maintain the productivity requirement invariant, you pour this
	money into centralized resources: CPU, I/O, Printers, etc.

	Try a rough calculation with today's prices: a display station
	is about half the price of a workstation, say.  For a 5k$ workstation
	you get 2.5k$ to beef up the central facilities if you buy a terminal
	instead.  Multiply this by the number of workstations at a largish
	site.  Now add this amount onto the existing infrastructure funds,
	and as a result you get a significant increase in shared resources.
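
	A minimal sketch of that arithmetic (the 5k$ figure and the
	half-price assumption are from the paragraph above; the seat count
	is an invented example):

	#include <stdio.h>

	int main(void)
	{
	    double workstation_cost = 5000.0;                  /* 5k$ per seat, from above       */
	    double display_cost     = workstation_cost / 2.0;  /* "about half the price"         */
	    int    seats            = 200;                     /* assumed size of a largish site */

	    double freed_per_seat = workstation_cost - display_cost;
	    double pooled_funds   = freed_per_seat * seats;

	    printf("freed per seat:             $%.0f\n", freed_per_seat);
	    printf("added to shared facilities: $%.0f\n", pooled_funds);
	    return 0;
	}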

	The advantages include:

		- there are no private resources to waste.
		- each user has the potential to use 100% of the shared
		  resources when they need to.
	
	The disadvantage that is always brought up as a counter argument
	is ROBUSTNESS.  Well, surprise surprise, a distributed environment
	is just as fragile as a centralized one with the same functionality
	(that's the consensus around here after years of observation), but
	it is MUCH more complex.  Centralized environments can be made
	very robust if well thought out.
	
	In addition to the economy of resource scheduling, there are the
	usual economies of scale inherent in an un-fragmented community.
	One might actually be able to afford decent I/O subsystems, which
	cost the same order of magnitude as the main CPU.  These are all the more
	important in the context of this kind of timesharing environment,
	because of the timesharing load AND because there will always
	be people that would be I/O bound on the workstation they could
	afford for their desk (e.g. when it starts thrashing due to a large,
	theoretically compute-bound, job).
	
	As long as the shared CPU isn't idle waiting for I/O, it is effective
	and efficient computing as far as system management is concerned.
	
	If people have special needs, they buy the capability (a disk,
	another cpu, whatever), and hand it to the shared facility to
	make it available to themselves.  "If you want to use 600MB online,
	then either pay for file storage or give us a disk and you'll get
	that space".  The important part is that the disk sits with all
	the other disks, and therefore receives better attention than if
	it was on your desk -- A/C machine room, ease of service, bits are
	bits so other file space can be found if your disk dies and the data
	is important enough, competent management and monitoring, etc.

	If I were in that kind of environment (and I am :-), I wouldn't
	care about which resources were being used as long as they are
	1) paid for, and 2) easy to manage.  Often 2) translates into a
	variation of "how close is the problem to where I am now?".
	System staff hates running around all over the place, both physically
	and online.

Quality of Service.

	Done right, central facilities can give users a nice warm fuzzy good
	feeling.  I think the trick is to give them on-site personal attention,
	which is where centralization usually falls short.  There are solutions
	to that; the one we use here is to tell people to hire their own
	support people (aka slave labour in the University parlance).  They
	then act as liaisons between the shared facility and the specific
	users they interact with and perhaps as application specialists for the
	entire facility.  They do no infrastructure-type support.  The result
	is no duplication of work, and good service.

	But, as Barry says, "who cares" [which scheme people choose].
	Certainly I wouldn't, except for collateral effects on the shared
	facility and hence the other users.  In this situation, people really
	do get what they deserve.

	Note that none of my comments said anything about central authority,
	only central resources and resource-management.  There is a large
	psychological difference between the two concepts.

Immature Software.

	To come full circle: this whole problem is caused by the nature
	of today's computers, their software to be precise.  Each box is
	self-contained and thinks it owns the world.  This is the wrong
	premise and I think we'll need a second UNIX-like (r)evolution to
	make people realize this.

	I'd like to see a situation where people buy

		1) the interface hardware they need (capital cost)
		2) a chunk of computing resources (capital cost)
		3) system support (software/environment maintenance)

	The hardware is tangible; it would sit on their desk and connect to
	the communications infrastructure.  Most people need a fancy terminal,
	and should not have to change their equipment unless the interface
	technology changes and they NEED the new capabilities (e.g. bw->colour,
	Audio I/O, 3D digitizer, Holographic displays).  Depreciated Sun3/50s
	will work fine if you just want a screen, mouse, keyboard combination.
	(Actually, the way OS's are going, running the interface is about all
	they can do anyway :-).

	The chunk of resources is a capital cost contribution to a large
	shared "system", to be allocated within that system according to
	global needs.  That system in turn is tiered appropriately, using
	older slower technology (which is pretty fast these days) to
	control non-compute resources.  These days the depreciation period
	for hardware is 3 years, which means there are a lot of cheap, capable
	computers out there, or there will be 3 years from now ;-).
	In a good system, they would be a reusable resource.


In large environments, the workstation concept is a fad.


					     rayan

		Artificial Intelligence/Numerical Analysis/Computational Theory
				    Shared Computing Facility
				      University of Toronto

jdarcy@pinocchio.encore.com (Jeff d'Arcy) (11/25/89)

rayan@cs.toronto.edu (Rayan Zachariassen):
> 	The disadvantage that is always brought up as a counter argument
> 	is ROBUSTNESS.  Well, surprise surprise, a distributed environment
> 	is just as fragile as a centralized one with the same functionality
> 	(that's the consensus around here after years of observation), but
> 	it is MUCH more complex.  Centralized environments can be made
> 	very robust if well thought out.

I've heard this one time and time again, usually in the form "if my workstation
goes down one person is unable to work; if the central computer goes down then
nobody can work".  If each workstation is down 2% of the time and the central
computer is down 1% (unreasonably large figures, I know), this argument falls
flat on its face.  Given that large systems are more likely than small ones to
be administered by people who know what they're doing, and also that they live
in a better environment (UPS, A/C, etc.), workstations are probably down *more*
than twice as much as larger hosts.
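
A rough sketch of that arithmetic, using the 2%/1% figures above (the site
size and working hours are made-up illustration values): expected lost work is
just headcount x hours x unavailability, so the per-desk scheme loses twice as
many user-hours even before counting the environmental factors mentioned.

#include <stdio.h>

int main(void)
{
    int    users           = 50;      /* assumed site size                */
    double hours_per_month = 160.0;   /* assumed working hours per user   */
    double ws_down         = 0.02;    /* workstation unavailability (2%)  */
    double central_down    = 0.01;    /* central host unavailability (1%) */

    /* Expected user-hours of lost work per month under each scheme. */
    double lost_ws      = users * hours_per_month * ws_down;
    double lost_central = users * hours_per_month * central_down;

    printf("workstation per desk: %6.1f user-hours lost/month\n", lost_ws);
    printf("one central host:     %6.1f user-hours lost/month\n", lost_central);
    return 0;
}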

There's also the issue of backups, and the possibility of a misconfigured
workstation spitting up on the network or otherwise acting antisocially.  In
general I think that a bunch of medium-sized hosts (large Vaxen, Multimaxes,
Symmetries) and a plethora of X-terms is the best way to go for *most*
environments.

Jeff d'Arcy     OS/Network Software Engineer     jdarcy@encore.com
  Encore has provided the medium, but the message remains my own

bzs@world.std.com (Barry Shein) (11/27/89)

The real robustness people are concerned about is not how much their
system goes down (these days things don't go down a whole lot, unlike
a few years ago), it's "policy" robustness that's the concern.

If I compute on a main facility then I live with *your* rules on
things like disk storage or computing resources and even whether or
not I can use the computer at all.

If I get my own system then I can make up my own rules.

Non-critical use of centralized systems is of course always safe; I
don't mind reading news or grabbing PD software from *your* system,
among other things, but lines are drawn.

C'mon, what are the major obsessions at every centralized facility?

Who gets disk space, who gets an account, who loses their account
because they were naughty, who shall live and who shall die. The
next-level obsession is implementing software to enforce the first set
of obsessions.

Who cares if it's up all the time if I'm limited to a few MB of disk,
get nagged or threatened if I ever actually use some CPU to the
system's discomfort or waste some printer paper, etc?

Medium-sized processors (like you mention, Vaxes, Multimax, Symmetry,
MIPS/SUN/SGI servers etc) are wonderful things because they can
provide the services mentioned within workgroups who have common
interests. That's not really what's being discussed. I guess I'm
mostly referring to single systems over $500K.

What's being discussed are the *central* services that cross workgroup
lines and, hence, are usually administered by a third-party entity
whose only justification for existence is to manage the resources.

That's the real problem, the first thing you notice about central
services is that the people involved actually have no real use for
computers other than to beat others over the head with them. Hence
there's typically no sympathy for someone trying to actually get some
work done with the computers, the *rules* are what's important, and
some incredibly unremarkable people end up as sheriffs.

Why is this relevant to comp.arch?

Because I am arguing that the decline of the large, centralized
machine is real and being driven much more by organizational politics
than technological considerations. But the trend has technological
requirements.

To put it into another framework, sure a centralized photocopy service
center is more efficient and has faster, fancier, larger copiers which
are better maintained than anything you're likely to find in your
office.

But you still want a copier in your office because who wants to deal
with the bureaucracy, the policies or the inability to get your hands
on the machine when you're not quite sure how you want to do a job?

As the smaller machines get "good enough" (good enough is not what a
techie thinks of as "good enough" any more than a race car driver
understands how anyone could consider a 10-year old Volvo "good
enough") the centralized, shared facilities will wither away (large
machines will tend to be huge "personal computers" politically, 3090's
etc which are used by a very few people directly to get some specific,
large jobs done, like payroll or billing.)

That's why it's important to tear down the last few architectural
constraints of smaller systems (smaller probably means under $500K in
relatively smaller numbers of units, under $100K in large numbers and
under $10K in huge numbers.)

I think it's all organizational and political trends crying out for
architectural solutions, not the other way around.
-- 
        -Barry Shein

Software Tool & Die, Purveyors to the Trade         | bzs@world.std.com
1330 Beacon St, Brookline, MA 02146, (617) 739-0202 | {xylogics,uunet}world!bzs

desnoyer@apple.com (Peter Desnoyers) (11/28/89)

In article <1989Nov26.204924.24209@world.std.com> bzs@world.std.com (Barry 
Shein) writes:
> What's being discussed are the *central* services that cross workgroup
> lines and, hence, are usually administered by a third-party entity
> whose only justification for existence is to manage the resources.
> 
The biggest problems I have seen with central computing facilities have 
been when one group (computer operations) looks good (i.e. comes in under 
budget) at the expense of another group (whose schedule slips due to lack 
of computing resources).

This is just one of many cases where the wrong rules cause inefficiency or 
worse. It can happen anywhere, not just in computer operations. Office 
space and layout is another area that is often handled by people with 
objectives orthogonal to those of the actual engineers using the place.

Personally, I think that there will come a time when mainframe-type 
computers will come into vogue again. Why? Because if you concentrate N 
queues, each with server rate 1, onto a single queue with server rate N, 
your expected wait goes down by a factor of N (a rough sketch of this 
follows the list below). Some of the main reasons why workstations are 
currently popular are (IMHO):

 (1) the current $/compute horsepower curve increases faster than O(N). If 
your rate N server costs $N^2, then you are no better off than if you had 
bought a bunch of workstations. Future technologies (e.g. parallelism) may 
flatten this curve enough to give a decided advantage to large compute 
servers.

 (2) political reasons within organizations. Computer centers look out for 
their own interests, rather than the groups they provide services for. 
Budgets may be done in a way that hides the recurring costs of workstation 
maintenance, while emphasising this cost for computer center use. Etc., 
etc. If these problems are worth fixing, rather than doing away with the 
computer center altogether - well, that's what management consultants are for.
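
A minimal sketch of the factor-of-N claim, assuming simple M/M/1 queues
(mean time in system W = 1/(mu - lambda)); the utilization and user count
below are made-up illustration values:

#include <stdio.h>

int main(void)
{
    double rho = 0.5;   /* assumed per-user utilization (arrival rate,  */
                        /* since each private server has rate 1)        */
    int    n   = 10;    /* assumed number of users / workstations       */

    /* M/M/1 mean time in system: W = 1 / (mu - lambda).                */
    double w_private = 1.0 / (1.0 - rho);            /* n servers of rate 1, one each */
    double w_pooled  = 1.0 / ((double)n - n * rho);  /* one server of rate n,         */
                                                     /* total arrival rate n*rho      */

    printf("private workstation wait: %.3f\n", w_private);
    printf("pooled fast server wait:  %.3f\n", w_pooled);
    printf("improvement factor:       %.1f  (= n)\n", w_private / w_pooled);
    return 0;
}

With the cost curve in (1), that factor-of-N wait reduction can be cancelled
by a roughly factor-of-N higher price per seat - the "no better off" case.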

                                      Peter Desnoyers
                                      Apple ATG
                                      (408) 974-4469

news@haddock.ima.isc.com (overhead) (11/29/89)

In article <89Nov25.051946est.2233@neat.cs.toronto.edu> rayan@cs.toronto.edu (Rayan Zachariassen) writes:
>[ This article mentions no chips, no brands, no benchmarks.  It is
>  about fuzzy issues as viewed from a systems management perspective.
>
>Like Barry says, hardware technology is NOT the issue at this stage.
>The issues (that I can remember now :-) are:
>Education.
>Psychology.
>Economy.
>Quality of Service.
>Immature Software.
	All very good points.

>In large environments, the workstation concept is a fad.

I don't agree, and in only 62 more lines.  Workstations currently
have plenty of resources, and tend to deliver them cheaper than
fewer, central systems.  Expandability is cheap and
non-disruptive too.

While working for a well known University, I suggested (and we
implemented) replacing large, high-maintenance 780s with uVAXen.
We treated the workstations as "mainframes": centrally
administered, accessed by terminals (which we already had &
wired).  Total maintenance costs were about a fifth of the previous
costs, system downtime decreased, and the upgrade costs were
defrayed by the sale of the old systems (but not a lot).  System
response time for individual users was faster because there were
more systems, more total cycles, fewer people per machine.  The
"workstations" had more RAM than the old dinosaurs, etc.  It
worked pretty well - the system software was the same, backups
were made daily, users could request files retrieved from backup
with one or two day turnaround.  Disk quotas were quadrupled and
large spaces were made available for special projects.  The new
disk drives were larger capacity and cheaper by an order of magnitude.

While there, I was asked to spend a few hours helping a professor
in another department.  He had a machine donated to his
department.  It was a minimal machine, but fast, and good enough
for a user with a largish project.  100 MB of usable disk, good
CPU power,...  The only removable media was 8 inch floppy disk.

After giving him a hand with whatever it was, I asked if he made
frequent backups of the system.  He said, "no, it would take too
long".  So, I asked if he made backups of the files he was
working with.  He grumbled, but the answer was still no.  I
advised that backups of some sort should be made.

Needless to say, he never made any backups.  Four months later, I
get a call - it seems his hard disk has crashed, and needs to be
replaced.  He wants to know if anything can be done to retrieve
his data.  Sigh.  Education is expensive.

I have two computers at home.  I do backups in proportion to how
much data I feel I can afford to lose.  The only people I know
who do regular backups of their home machines are those who work
exclusively from floppy disk.  In one case, the person has a hard
disk, but only for system software.  All data files are written
to floppies.  Only those floppies that are at all important get
copied.  In this case, it is the only discipline that works.

The workstation concept works well when the maintenance is done.
At work here, we have a LAN with a bunch of workstations.  We
have someone who is responsible for backups, etc.  We have gone
the route of hiring cheap (inexperienced and unknowledgeable)
help, and found it to be unreliable, and therefore worse than
useless.  Education is expensive; fortunately, we didn't lose
much.  The next step is to learn to make sure that the office
space doesn't get hotter than 80 F on summer weekends...

Our workstations are fast enough so that a central mainframe is
not required.  The workstations are connected well enough that I
can do what is required for my current project, wherever that is
taking place, without getting out of my chair.  I have CPU cycles
and disk throughput to spare, and it has been a blessing.  The
local CPU drives the graphics of my screen without being
completely drained.

Stephen.

news@haddock.ima.isc.com (overhead) (11/29/89)

In article <1989Nov26.204924.24209@world.std.com> bzs@world.std.com (Barry Shein) writes:
>The real robustness people are concerned about is not how much their
>system goes down (these days things don't go down a whole lot, unlike
>a few years ago), it's "policy" robustness that's the concern.

Systems do go down.  I'm satisfied with
o At most one day of data lost irrecoverably, per year.
o At most one day of system unavailability, per month.
  I retain the right to be upset when it happens.

>If I compute on a main facility then I live with *your* rules on
>things like disk storage or computing resources and even whether or
>not I can use the computer at all.

The systems administration and architecture we have in the office is:
o A workstation per desk.
o Each workstation has good CPU, disk throughput, tape system, networking,
  and whatever else it needs in particular.
o The central administrator sees to it that the network works.
o Individuals generally keep most of their data on their home machine,
  but projects also get assigned to random machines.  This is not
  centralized.
o A central administrator sees to it that backups are done.
o Some workstations provide central services, such as man page servers
  or source code disk space.
o There is a central mail/news hub, centrally administered.
o Network numbers are centrally administered.
o People generally have accounts on all machines, or can get them
  at will as required.

This system largely gives good incentives for everyone.
Cooperation is encouraged.  It is clear that providing access to
your machine is a small price to pay (and it is) for getting
access to some service elsewhere that you don't have to support.
It should be pointed out that this system is neither completely
centralized, nor completely decentralized.  It is tailored to the local
situation.  For example, it should be noted that there aren't any
novices on the network.

>...a centralized photocopy service... has faster, fancier, larger
>copiers which are better maintained than anything you're likely to
>find in your office.

Yes.  I still use the local copier for two pages - the turnaround
is better (a couple minutes).  However, the local copier is
getting better, and the high-quality, double-sided, collating
copier has arrived here.  This office is not big enough to
warrant the traditional copy center.  Besides, it's 200 yards to
the nearest print shop.  Still, I was willing to do 80 pages
locally if I didn't have to do it often.

The technology in both cases is providing more capability more
cheaply.  The administrator for the local copier will probably
end up being the secretary.  (S)he'll be one of those people
who uses it often enough to know how.

>That's why it's important to tear down the last few architectural
>constraints of smaller systems (smaller probably means under $500K in
>relatively smaller numbers of units, under $100K in large numbers and
>under $10K in huge numbers.)

Tear down what?  It's "just" a matter of software that provides
ease of use and is largely auto-administered.  This is expensive
to develop, but should probably be bundled with systems.  Education
is even more expensive.  It will be hard to get people to provide
themselves with backup.  Off site backup?  Unlikely.

Stephen.
suitti@haddock.ima.isc.com

sjs@spectral.ctt.bellcore.com (Stan Switzer) (12/06/89)

Is desktop, network, bitmapped-display UNIX computing a fad?

Probably.  As long as you are just doing terminal-emulation in several
windows, you might just as well have two or three vt100 terminals and
job control.

I seriously doubt that networked Macintosh users are too worried about
whether they should go back to traditional timesharing,
but at the same time if I am typing troff commands into a UNIX
workstation, then I might as well be using GCOS.

The problem is that workstations are not enough like Macintoshes and
vice versa.

But UNIX on a desk?  No.  Obviously this is crazy.  The only reason
UNIX is the operating system of choice for workstations is because it
was at the right place at the right time.  But that's all water under
the bridge now.

What next?

Stan Switzer  sjs@bellcore.com

dgr@hpfcso.HP.COM (Dave Roberts) (12/16/89)

Stan, Stan, Stan...

>Is desktop, network, bitmapped-display UNIX computing a fad?

>Probably.  As long as you are just doing terminal-emulation in several
>windows, you might just as well have two or three vt100 terminals and
>job control.

Yea, except that now I can just flick my wrist and be in another system
instead of having to swivel my chair.  Not to mention all the desk space
that all those vt100s would take.  I can also easily copy things like
text from one window (system) to another when I need to type something
that I just typed.

>I seriously doubt that networked Macintosh users are too worried about
>whether they should go back to traditional timesharing,
>but at the same time if I am typing troff commands into a UNIX
>workstation, then I might as well be using GCOS.

I try to stay as far away from troff as I can (LaTeX is so much nicer :-).
The fact is, though, there are some really nice WYSIWYG editors and page
layout programs coming out for workstations.  Software seems to be the
problem.  It just isn't being written (software people are screaming that
it can't be) fast enough to take advantage of the hardware.

>The problem is that workstations are not enough like Macintoshes and
>vice versa.

Well, I have a Mac at home (albeit an old 512K that's downright pitiful)
and I've got to tell you that my workstation is so much nicer.  If I had
a choice of either (even a new hot [yea, right... an '030 running at <20MHz]
Mac) I'd take the workstation in a minute.  In a sense a fully loaded
Mac is a workstation (but that <20MHz thing really kills me).  If I could
get the software for it that I could for my workstation then there might
not be much to differentiate between the two.

>But UNIX on a desk?  No.  Obviously this is crazy.  The only reason
>UNIX is the operating system of choice for workstations is because it
>was at the right place at the right time.  But that's all water under
>the bridge now.

No way. I disagree completely.  My throughput with some half-assed operating
system like MS/DOS or the Mac OS would be a lot less.  I'm not arguing that
UNIX is the end all OS, but the fact is, it's still the "right time".  There
isn't currently anything that would improve my throughput over UNIX.
I'll admit that a novice user can do much more with something like the Mac
OS faster than a novice UNIX user can.  But in a compute intensive, large
data size, engineering environment, there's just no way that the Mac OS
would hold up.  There were some things that the Mac popularized (note that
I didn't say originate :-) like mice and icons and stuff that certainly
make things nice, but that's why X Windows was created.  It's nice to be
able to grep files when I want to just find a line of data without
having to wait 2 minutes to start the WYSIWYG editor and
scroll through the thing looking for what I want.  It's so much easier
being able to set up an awk or sed filter to send some data through to
make it compatible with the next program that I'm going to use on it,
rather than just not being able to use the next program on it at all.
For a user who just wants to use Pagemaker all day long, a PC (of whatever
sort) is fine, but for someone who needs to do a lot of things all at
once, UNIX is it.

>What next?

Don't know.

>Stan Switzer  sjs@bellcore.com
>----------

The way that I see this discussion is that people are arguing that workstations
or PCs or mainframes are supposed to be good for everybody for some unknown
reason.  The fact is that they all serve their purpose in some way and
we have a need for all of them.  A secretary who is just doing some word
processing doesn't need all the power or data storage of a workstation.
But she certainly needs the interactivity of a PC, as opposed to the skippy
jumpy response afforded by a large mainframe that's also working on an
accounting run.  For me, I need a workstation.  I need a large color bitmapped
display with a fair amount of local disk space and a compute engine that
really hums.  The reason: I need the interactivity and data storage in order
to manipulate large data structures containing things like schematic capture
data.  If I want to do a spice model, I'll farm it out to a crunch machine.

Dave Roberts
HP Fort Collins
dgr@hpfcla.hp.com

No disclaimer... I don't make mistakes.

gillies@p.cs.uiuc.edu (12/20/89)

> No way. I disagree completely.  My throughput with some half-assed operating
> system like MS/DOS or the Mac OS would be a lot less.  I'm not arguing that

Why not try prototyping on a Mac in C sometime?  You'd be surprised how
much time UNIX pisses away with its N-pass memory-thrashing
compilers and ASCII include files.

I'd like to see ANY 16Mhz 68020 UNIX box compile 45,000 lines per
minute, then bind & launch 5 seconds later (I'm talking about
THINK/LightspeedC for the mac).  8-).

Don Gillies, Dept. of Computer Science, University of Illinois
1304 W. Springfield, Urbana, Ill 61801      
ARPA: gillies@cs.uiuc.edu   UUCP: {uunet,harvard}!uiucdcs!gillies

mrc@Tomobiki-Cho.CAC.Washington.EDU (Mark Crispin) (12/20/89)

In article <76700088@p.cs.uiuc.edu> gillies@p.cs.uiuc.edu writes:
>
>> No way. I disagree completely.  My throughput with some half-assed operating
>> system like MS/DOS or the Mac OS would be a lot less.  I'm not arguing that
>
>Why not try prototyping on a Mac in C sometime?  You'd be surprised how
>much time UNIX pisses away with its N-pass memory-thrashing
>compilers and ASCII include files.
>
>I'd like to see ANY 16Mhz 68020 UNIX box compile 45,000 lines per
>minute, then bind & launch 5 seconds later (I'm talking about
>THINK/LightspeedC for the mac).  8-).

This is a silly argument.  One way to improve programmer productivity
is a fast compiler.  Another way to improve programmer productivity is
a pleasant operating system environment.  Still another way is a good
debugging environment so your compiles are to add functionality or fix
bugs rather than "let's try it this way and see if it works."

In my (admittedly biased) opinion, MS-DOS's development environment is
mediocre and the Mac's is dreadful.  At its best, MS-DOS provides sort
of a low-grade Unix.  The only redeeming feature of the Mac
environment is the availability of fast C compilers, since you're
going to be doing a lot of recompiles.  Needless to say, you lose even
that if you're forced (as I was in my previous job) to use MPW.

The best operating system and debugging environments I ever used were
the DEC-20 and Xerox Lisp, respectively.  Although Unix fans have
(rightfully) pointed out the powerlessness of the DEC-20's command
decoder compared to the shell, the operating system hidden beneath it
was quite a bit more powerful than even Mach today.  Xerox Lisp's
development/debugging environment was seductive to the point that you
rarely bothered to compile your programs.

The NeXT, which I currently use, is sort of a cross between a Mac and
a low-grade Xerox Lisp/SmallTalk environment, in some ways combining
all the disadvantages of each.  Fortunately, it has Mach/Unix
underneath, so you can always get what you want done done.  In fact,
perhaps the best feature of the NeXT is that it isn't particularly
technologically advanced.

Mark Crispin / 6158 Lariat Loop NE / Bainbridge Island, WA 98110-2098
mrc@CAC.Washington.EDU -- MRC@PANDA.PANDA.COM -- (206) 842-2385
Atheist & Proud -- R90/6 pilot -- Lum-chan ga suki ja!!!
tabesaserarenakerebanaranakattarashii...kisha no kisha ga kisha de kisha-shita
sumomo mo momo, momo mo momo, momo ni mo iroiro aru
uraniwa ni wa niwa, niwa ni wa niwa niwatori ga iru

pcg@aber-cs.UUCP (Piercarlo Grandi) (12/21/89)

In article <76700088@p.cs.uiuc.edu> gillies@p.cs.uiuc.edu writes:
    
    Why not try prototyping on a Mac in C sometime?  You'd be surprised how
    much time UNIX pisses away with its N-pass memory-thrashing
    compilers and ASCII include files.
    
    I'd like to see ANY 16Mhz 68020 UNIX box compile 45,000 lines per
    minute, then bind & launch 5 seconds later (I'm talking about
    THINK/LightspeedC for the mac).  8-).

Well, there are fast compilers for Unix. It is not a matter of CPU
architecture, or even system architecture, or even OS architecture.  For
example, there is an interesting paper in the November SIGPLAN...  They are
not as fast as Think C, but they could probably get there.
Unfortunately, many Unix programmers on single user workstations with 10
MIPS CPUs and as many megabytes tend to have a cavalier attitude to resource
consumption, even more about memory than speed.
-- 
Piercarlo "Peter" Grandi           | ARPA: pcg%cs.aber.ac.uk@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcvax!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

dgr@hpfcso.HP.COM (Dave Roberts) (12/22/89)

In article <whoever goes and reads this anyway> Peter Grandi writes:

> Unfortunately, many Unix programmers on single user workstations with 10
> MIPS CPUs and as many megabytes tend to have a cavalier attitude to resource
> consumption, even more about memory than speed.

Thanks for coming to my aid about unix compilers, and the above is certainly
true.  Then again, I really don't care.  I get paid for doing a job on
my (God, I wish) 10 MIPS workstation.  I'm not paid to conserve my memory
or my MIPS or whatever.  If it's there and it helps me get my job done any
faster/better, I'm going to use it.  Admittedly, it would be nice if some
of those applications programs that I run were a little smaller so that I
could fit more of them in memory at once without having to have a huge swap
disk, etc.  But hey, if I need it, as long as I can run it, I'm going
to use it.

Dave Roberts
Hewlett-Packard Co.
Ft. Collins, CO
dgr@hpfcla.hp.com
dgr%hpfcla@hplabs.hp.com

gillies@p.cs.uiuc.edu (12/23/89)

> Well, there are fast compilers for Unix. It is not a matter of CPU
> architecture, or even system architecture, or even OS architecture.  

Someone said "Macs have terrible [coding] throughput compared to
workstations".  Now you're on the defensive?  I didn't expect my
arguments to work *that* well.

PC's have some advantages.  You can dedicate 100% of the CPU for
compiles.  You can run code benchmarks that are absolutely 100%
reproducible.  I don't believe this is possible under preemptive
UNIX.  You must run a much longer benchmark on UNIX to wipe out the
timing noise due to context switches.
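
A small sketch of that reproducibility point (nothing here is from the
original posts; the loop and run count are arbitrary): timing the same job
repeatedly on a timeshared machine shows a spread in wall-clock time that a
dedicated single-user machine largely avoids, which is why short benchmarks
need many repetitions under UNIX.

#include <stdio.h>
#include <sys/time.h>

static double wallclock(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

static long busywork(long n)              /* stand-in for "the benchmark" */
{
    long i, acc = 0;
    for (i = 0; i < n; i++)
        acc += i % 7;
    return acc;
}

int main(void)
{
    double lo = 1e30, hi = 0.0;
    long sink = 0;
    int r;

    for (r = 0; r < 10; r++) {            /* run the identical job 10 times */
        double t0, dt;
        t0 = wallclock();
        sink += busywork(20000000L);      /* keep the work from being optimized away */
        dt = wallclock() - t0;
        if (dt < lo) lo = dt;
        if (dt > hi) hi = dt;
        printf("run %2d: %.3f s\n", r, dt);
    }
    printf("spread: %.3f s  (checksum %ld)\n", hi - lo, sink);
    return 0;
}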

Most PC development systems integrate compile, bind, load, & launch
without touching the disk -- not to page, not to write a binary, not
touching the disk AT ALL.  I believe that this is physically
impossible under [some types of] UNIX.  This shortens the debug loop.
The quality of PC debuggers is catching up to workstations, and may
soon surpass workstation debuggers.  I suspect there are more PC
programmers today than workstation programmers, so the development
tools are improving.

In summary, sometimes PC's can be superior to workstations.  Sometimes
they are not.  That is all.

beyer@cbnewsh.ATT.COM (jean-david.beyer) (12/26/89)

In article <76700096@p.cs.uiuc.edu>, gillies@p.cs.uiuc.edu writes:
> reproduceable.  I don't believe this is possible under preemptive
> UNIX.  You must run a much longer benchmark on UNIX to wipe out the
> timing noise due to context switches.

... but I would imagine that if you had enough IO buffer space, you
could do all that without the IO costing you anything in your timing.
The IO would happen eventually (say, by the next internal sync point),
but that could be after your work was done.
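
A tiny sketch of the same effect at the stdio level (the buffer size and
record count are arbitrary; the kernel's buffer cache does the analogous
thing for the raw write() path): the timed loop returns as soon as the data
lands in the buffer, and the physical I/O is paid for at flush/close time,
outside the measured region.

#include <stdio.h>
#include <sys/time.h>

static double wallclock(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    FILE *f = fopen("out.dat", "w");
    double t0;
    int i;

    if (f == NULL) {
        perror("fopen");
        return 1;
    }

    /* Large stdio buffer: the loop below fits entirely in memory. */
    setvbuf(f, NULL, _IOFBF, 1 << 20);

    t0 = wallclock();
    for (i = 0; i < 50000; i++)
        fprintf(f, "record %d\n", i);
    printf("timed region: %.4f s (data still buffered)\n", wallclock() - t0);

    t0 = wallclock();
    fclose(f);                         /* the real I/O happens here */
    printf("flush/close:  %.4f s\n", wallclock() - t0);
    return 0;
}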


-- 
Jean-David Beyer
AT&T Bell Laboratories
Holmdel, New Jersey, 07733
attunix!beyer