[net.micro] IBM vs VAX/unix

PEARSON@SUMEX-AIM.ARPA (02/27/84)

From:  William Pearson <PEARSON@SUMEX-AIM.ARPA>


	Since I was surprised by these results, I thought I would share
them.  Those of you who compare benchmarks may find them interesting.

	I have been working on a program that compares the amino
acid sequence of a protein (a character string over a 20-letter
alphabet, usually 100-500 characters long) against a data bank of
all known protein sequences, in order to identify unknown sequences
and look for evolutionary relationships.  The data bank includes
about 2500 sequences totalling 500,000 characters.  The problem is
a difficult one, since related proteins will have both substitutions
and insertions or deletions.  The most rigorous algorithms require
time and space proportional to n*n; we have been using a much faster
algorithm based on a hashing technique, which needs far less of both.
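
	For the curious, here is a rough illustrative sketch in 'C' of the
k-tuple hashing idea (it is not the actual program, and the constants and
toy sequences are made up for the example): every k-tuple of the query is
recorded in a lookup table, the library sequence is scanned once, and word
matches are tallied per diagonal; the diagonal with the most hits marks
the most similar ungapped region.

/*
 * Rough sketch of a k-tuple (word) lookup for sequence similarity, in
 * the spirit of the hashing approach described above -- NOT the actual
 * program.  Every k-tuple of the query is recorded in a small hash
 * table; the library sequence is then scanned once and word matches
 * are tallied per diagonal (query position minus library position).
 * The diagonal with the most hits marks the best ungapped region.
 */
#include <stdio.h>
#include <string.h>

#define K      2                       /* word (k-tuple) length         */
#define ALPHA  20                      /* amino acid alphabet size      */
#define NWORDS (ALPHA * ALPHA)         /* number of distinct 2-tuples   */
#define MAXQ   500                     /* longest query handled         */

static const char aa[] = "ACDEFGHIKLMNPQRSTVWY";

static int code(char c)                /* residue -> 0..19, or -1       */
{
    const char *p = strchr(aa, c);
    return p ? (int)(p - aa) : -1;
}

int main(void)
{
    const char *query  = "MKVLAACDEFGHIKLMNPQRST";    /* toy sequences  */
    const char *libseq = "GGACDEFGHIKLMNPQRSTVWYMK";

    int qlen = (int)strlen(query), llen = (int)strlen(libseq);
    int first[NWORDS], next[MAXQ];     /* chained hash of query words   */
    int diag[2 * MAXQ];                /* hit count per diagonal        */
    int i, j, w, best = 0, bestd = 0;

    for (w = 0; w < NWORDS; w++) first[w] = -1;
    memset(diag, 0, sizeof diag);

    /* index every k-tuple of the query (query must be <= MAXQ long) */
    for (i = 0; i + K <= qlen; i++) {
        int a = code(query[i]), b = code(query[i + 1]);
        if (a < 0 || b < 0) continue;
        w = a * ALPHA + b;
        next[i] = first[w];
        first[w] = i;
    }

    /* scan the library sequence, crediting each matching diagonal */
    for (j = 0; j + K <= llen; j++) {
        int a = code(libseq[j]), b = code(libseq[j + 1]);
        if (a < 0 || b < 0) continue;
        w = a * ALPHA + b;
        for (i = first[w]; i != -1; i = next[i]) {
            int d = (i - j) + MAXQ;    /* shift so the index is >= 0    */
            if (d < 0 || d >= 2 * MAXQ) continue;
            if (++diag[d] > best) { best = diag[d]; bestd = d - MAXQ; }
        }
    }

    printf("best diagonal %d with %d word matches\n", bestd, best);
    return 0;
}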

	The program was written in 'C' so that it could be ported to
many small machines; it was developed initially on a VAX 11/780 unix
system.  It is now also running on an IBM-PC with a 320K RAM disk,
where it was compiled with the Lattice 'C' compiler.

Execution times for a 146-character sequence vs. a 190K databank subset
(it has to fit on a floppy!):

VAX 11/780	1:17.4
IBM-PC		6:58		the databank was stored on the RAM disk,
				using the floppy takes about 2 min more.

I am now tempted to think of "1/2 of a VAX" as simply 3 IBM-PC's
(the PC took about 5.4 times as long, so three PC's come to a bit more
than half a 780).

	On another subject, a salesman from a high performance work-
station manufacturer informs me that the Motorola 68020 will be available
"off-the-shelf" in 3 months.  This seems to conflict with discussions
on the relative merits of the 16032 vs 68020 which have suggested
the 68020 was not off the drawing board (screen?).  Since we are in
the market for a machine using this technology, and have not been able
to find a manufacturer who is shipping 16032 systems yet (one
hears a lot about 1Q 84), perhaps the 68000 series is not dead yet.

Bill Pearson

-------

ab3@pucc-h (Darth Wombat) (03/02/84)

	William Pearson <PEARSON@SUMEX-AIM.ARPA> has compared the performance
of an IBM-PC with a Vax running unix for an application that (as far as I can
tell) involves large amounts of string pattern matching and correlation...
and says:

	"I am now tempted to think of "1/2 of a VAX" as simply 3 IBM-PC's."

	For your task, it certainly looks like the IBM-PC is a cost-effective
way to do your computing...and we could go round and round arguing about 
relative compiler efficiencies and use of native instruction sets and so on,
but consider the point given.

	The problem that this has touched in my mind is that your experience
is the sort of thing that lots of university computer honchos get stuck on
about the time they get Mac-attacked or PC-ized or whatever...and the
<Insert generic micro name> is not comparable to a Vax running Unix for
a very large class of tasks commonly done around places like this; for example,
I (and about 40 other people in a graduate class) are doing massive amounts
of image processing/analysis in Lisp -- and I just can't see this stuff running
on a personal computer.   Not to mention one colleague working on simulating
Lisp running on a parallel machine, or another doing diffraction tomography
computations -- or the hordes of undergraduates burning up the floating point
units running Spice (a circuit simulator/modeler).

	Why the near-flame?  I (often) question the wisdom of those who are
so enthralled with these cute little micros which are fine for teaching Pascal,
or acclimatizing people to computers, or even running smaller applications,
that they forget that a large number of us *need* megaflops...

	We have something like 2 dozen vaxes at Purdue, 3 CDC 6000-series
mainframes, a Cyber 205 supercomputer, and numerous other species of machines
ranging from 11/23's to 11/750's, and nearly all of them run flat out from
8 a.m. to well past midnight six days a week, and (later in the semesters)
sometimes more than that.  What are we going to do with a dozen IBM-PC's
or twenty Mac's that will *really* help this situation?

-- 

"Oh dear...I believe you'll find that reality is on the blink again."
Darth Wombat
UUCP: { allegra, decvax, ihnp4, harpo, seismo, teklabs, ucbvax } !pur-ee!rsk

rej@cornell.UUCP (Ralph Johnson) (03/02/84)

Microcomputers CAN replace most uses of VAXen.  Not all, of course, and
I wouldn't want to replace a VAX with any number of PCs, but there are
much more cost effective ways to provide VAX equivalent power than with
a VAX.  68000 Unix systems are surprisingly powerful.  I judge the Callan
Unistar-100 to be about one fourth the speed of a VAX11/780 for long
compiles.  It is about 60% of a VAX for non-floating point computation
intensive tasks.  A 68000 with 2 or 4 Meg of memory is probably the cheapest
way to run Lisp.

The major problem with microcomputers is slow, small disks.  The speed ratio
of applications on the Callan to those on the VAX depends primarily on the
number of disk accesses they require, since the Callan uses slow 5"
winchesters.  Given a fast network with large file servers, 68000s could
compete with VAXen in every application that does not use floating point.

The major advantage that microcomputers have is that each user can have his
own machine.  Every VAX user knows the "it's two in the afternoon and the
load factor is too high" blues.  A VAX may beat a Callan at 7 AM, but a
Callan will get my job done faster at 3 PM.  While one nroff job saturates
a VAX, it takes 10 nroff jobs to saturate 10 microcomputers. (And the jobs
will probably finish two to four times as quickly.)

Two dozen VAXen with disks and other peripherals cost on the order of 2 to 5
million dollars.  For two million dollars you can get 200 Callans at single
quantity prices - the quantity discount will buy an ethernet and some file
servers.  A VAX can support any number of people if they are not doing
anything, but even two or three nroff jobs will slow it markedly.  Thus, if
people are doing heavy computation, ten 68000 systems will provide more
throughput and better response time than a VAX, for less money.

Notice that I do not necessarily think that 68000s are the best micro or that
Callan makes the best 68000 Unix computer.  However, I do think that these
alternatives provide more Unix power for the dollar than a VAX; thus any
argument that some other micro is even more cost efficient is further
argument for my case.

One reason to get a VAX is to use some special piece of equipment or
software.  Eventually, array processors and graphics equipment will be
interfaced to microcomputers, and the software will get ported to 68000s.
For example, I said that 68000s were a more cost effective way to run Lisp
than VAXen, but the various 68000 Lisp systems seem to be in various stages
of being ported from the VAX right now; I don't know of any which is supposed
to be ready for production use.

A major problem with connecting lots of microcomputers together is that it
requires lots of unwritten software.  SUN is supposed to have a complete
system soon, and there are a few systems like Apollo which are finished, but
quite expensive.  However, everything that I have said is true in theory, if
not in practice.

Ralph Johnson  rej@cornell    cornell!rej

tbm@hocda.UUCP (T.MERRICK) (03/05/84)

Seems to me,

Simply stated: "Computers do not scale in performance/dollar"

That is, you can buy a C64 for <$200, but for $2000 you cannot buy
a single machine that will support 10 users.

I'm not sure why this is; then again, you cannot pick up a full-sized
aircraft by one of its appendages either, though you can do so with a model.

Any experts out there who can comment on this one?

Tom Merrick ATT Bell Labs

rej@cornell.UUCP (Ralph Johnson) (03/05/84)

It used to be said "The power of a computer goes up as the square of
its price."  This was probably due to marketing decisions and not
technology, but it is certainly no longer true.  VLSI mass production
makes anything that can be put on a chip cheap: a 68000 is cheap
(compared to a VAX) while a winchester is not.  Processing power costs
essentially nothing if it fits on a single chip, and is expensive if it
does not.  Unfortunately, it is hard to replace disks
with processors.  :-)

Multiuser systems have the added expense of supporting all the security
and resource allocation overhead that single user systems do without.
Besides, I only want to communicate with others, not share my computer
with them.  A collection of communicating single user systems is better
than a single multiuser system.

Ralph Johnson    rej@cornell      cornell!rej

phil@unisoft.UUCP (Phil Ronzone) (03/06/84)

>> Microcomputers CAN replace most uses of VAXen.  Not all, of course, and
>> I wouldn't want to replace a VAX with any number of PCs, but there are
>> much more cost effective ways to provide VAX equivalent power than with
>> a VAX.  68000 Unix systems are surprisingly powerful.  I judge the Callan
>> Unistar-100 to be about one fourth the speed of a VAX11/780 for long
>> compiles.  It is about 60% of a VAX for non-floating point computation
>> intensive tasks.  A 68000 with 2 or 4 Meg of memory is probably the cheapest
>> way to run Lisp.
>> ...
>> The major advantage that microcomputers have is that each user can have his
>> own machine.  Every VAX user knows the "it's two in the afternoon and the
>> load factor is too high" blues.  A VAX may beat a Callan at 7 AM, but a
>> Callan will get my job done faster at 3 PM.  While one nroff job saturates
>> a VAX, it takes 10 nroff jobs to saturate 10 microcomputers. (And the jobs
>> will probably finish two to four times as quickly.)
>> ...
>> A major problem with connecting lots of microcomputers together is that it
>> requires lots of unwritten software.....

Just to let y'all know -- here at UniSoft (makers of UniPlus+, our port of
V7, SIII, and SV, to the Callan, among many others) we have BNET running on
UniPlus+. To offload our VAX'en from their ``2:00 blues'', we ship
everything from C compiles to troffs over the Ethernet to one of many 68000
boxes. Rather than add VAX'es, we add 68000 boxes. It is a really economical
way to go.

ron%brl-vgr@sri-unix.UUCP (03/12/84)

From:      Ron Natalie <ron@brl-vgr>

Oh well, time to start the big computer/little computer flames.
I agree with all of your first paragraph that anything that can
be put on one chip is cheap, etc.  One of the big problems is that
industry has this "one chip" CPU brain-set.  Companies can't seem
to succeed by releasing a "two chip" CPU even if it costs only
50% more but provides 300% of the capacity.  This dates back even
to the 6800.  Original plans called for an even nicer implementation
of this chip, but it had to be scaled back because it wouldn't fit
into one IC and marketing decided no one would buy a two chip CPU
regardless of how good it was.

You seem to limit yourself to the idea that only microprocessor CPU's
are cheap.  *WRONG*  The CPU these days is almost always the cheapest
part.  You can get an 11/780 CPU (just the boards and backplane) for
about $50,000.  However, the minimal 780 system costs $140,000 (with
the console terminals as the only peripheral).  The minimal practical
configuration is about $270,000.

I totally disagree with your last paragraph.  An IBM PC is a wonderful
computer for home, and I really like toting a Grid Compass along when I
travel, but I wouldn't want to have to do real work on one.  A central
computer has
some of the following advantages:

1.  Since the cost is spread out over several users, you can purchase
things like more expensive peripherals, large amounts of memory, etc...
that are used less frequently by any one user but are still accessible
when needed.

2.  There is more CPU headroom for the reason specified in #1.

3.  Someone else usually backs it up.

4.  System support is shared by all the users, which means one copy
of system software.  (On our systems, this is put in RAM disk.)

5.  Communications between users on a single machine is almost always
easier than networking.

-Ron

eb@ecn-ee.UUCP (03/17/84)


/***** ee:net.micro / cornell!rej /  6:56 pm  Mar  5, 1984 */
	Multiuser systems have the added expense of supporting all the security
	and resource allocation overhead that single user systems do without.
     Why does everybody associate "single process" with "single
user"?  A "single user" computer has one user.  If that user
wants to create zero or more background processes, that capability
should be there.  However, just because a process is executing in
the background doesn't mean it has no bugs.  If I have several
background processes running, I certainly want them protected
from each other, and the system had better be able to allocate resources.
     I think rej@cornell is a little bit behind the times.  "Single
user" systems can be very powerful.  The hardware is available
today to support these systems.  It does not take a Vax, either; a
programmable machine is about all it takes.  I agree a Vax
will outperform an 8086.  So send the big things to the Vax.

Ed Blackmond
pur-ee!eb
eb@purdue

/* ---------- */

rconn%brl@sri-unix.UUCP (03/18/84)

From:      Rick Conn <rconn@brl>

	I'm not sure what a mere dozen IBM-PC's or twenty Mac's will do
for you, but how about 6,000 IBM-PC's?  I think word of this will start
getting around quickly, and my information is only sketchy (but reliable).
MIT, with millions of dollars in support from both IBM and DEC, is setting
up a rather massive net of IBM-PC's and MicroVAXen (the numbers I heard were
6,000 PC's and 160+ MicroVAXen).  All of the PCs will be running a variant
of UNIX from a 68000 card, and the MicroVAXen will act as local area network
controllers to provide communication with the larger mainframes et al.  The
PC's will
be scattered about the campus, in dorms, classrooms, labs, etc.

	This was just an interesting tidbit.  You certainly have a point,
however, about your applications.  I agree that there are a large number
of applications which don't fit the PC world.  I've seen a CYBER 175
become bogged down under heavy LISP applications, and I've also seen
a VAX 11/780 die (in terms of response time) when just two or three users
are running certain compilations.  In these circumstances, in which
response time and performance from a human-interface point of view
degrade so fantastically, the application of PCs (IBM or otherwise) in
conjunction with the mainframes makes a lot of sense.  From the point of
view of the human, most spend the majority of the time in an editor or
similar tool.  The PC can be used to offload this type of processing quite
easily, and the user realizes fantastic response as a matter of routine.
When the user has finished his edit and then needs the compilation to be
performed, linking with the mainframe, transferring the file, starting
the compilation (perhaps as a batch job to start at 1 or 2 AM), and
then returning to the PC and continuing on with other work is a reasonable
scenario.  Additionally, while the PC can't be expected to do the compilation
or application by itself, it may act as a preprocessor, performing a
preliminary syntax check and looking at minor details before shipping
the file off for the main processing.  This can save a lot of time and
effort.
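
	As a concrete (and purely hypothetical) illustration of such a
preprocessor, the little 'C' program below does nothing more than verify
that parentheses, brackets, and braces balance in a source file before it
is shipped off for the real compile; the program name and messages are
invented for the example.

/*
 * Purely hypothetical "precheck" run on the PC before a source file is
 * shipped to the mainframe for the real compile.  It only verifies that
 * (), [], and {} balance, skipping comments and string/char constants.
 */
#include <stdio.h>

int main(int argc, char **argv)
{
    FILE *fp;
    int c, prev = 0, line = 1;
    int depth[3] = {0, 0, 0};                 /* (), [], {}            */
    int in_str = 0, in_chr = 0, in_com = 0;

    if (argc != 2 || (fp = fopen(argv[1], "r")) == NULL) {
        fprintf(stderr, "usage: precheck file.c\n");
        return 2;
    }
    while ((c = getc(fp)) != EOF) {
        if (c == '\n') line++;

        if (in_com) {                         /* inside a comment      */
            if (prev == '*' && c == '/') in_com = 0;
        } else if (in_str) {                  /* inside "..."          */
            if (c == '"' && prev != '\\') in_str = 0;
        } else if (in_chr) {                  /* inside '...'          */
            if (c == '\'' && prev != '\\') in_chr = 0;
        } else if (prev == '/' && c == '*') {
            in_com = 1;
            c = 0;                            /* keep the opening star
                                                 from also closing it  */
        } else if (c == '"')  in_str = 1;
        else if (c == '\'') in_chr = 1;
        else if (c == '(') depth[0]++;
        else if (c == ')') depth[0]--;
        else if (c == '[') depth[1]++;
        else if (c == ']') depth[1]--;
        else if (c == '{') depth[2]++;
        else if (c == '}') depth[2]--;

        if (depth[0] < 0 || depth[1] < 0 || depth[2] < 0) {
            printf("unbalanced bracket near line %d\n", line);
            return 1;
        }
        prev = (prev == '\\' && c == '\\') ? 0 : c;
    }
    fclose(fp);
    if (depth[0] || depth[1] || depth[2]) {
        printf("file ends with brackets still open\n");
        return 1;
    }
    printf("brackets balance; ready to send to the mainframe\n");
    return 0;
}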

	A real-life example of this concept which has been around for some
time is PLATO.  Under the old PLATO IV, the CDC 6000 mainframe did all
the processing.  The terminals had some intelligence in them in that they
could receive a "DRAW CIRCLE at x,y with RADIUS r" command from the system
and do the graphics themselves.  Many high-order functions like this were
supported in the terminal itself, and PLATO IV claimed it could support
1,000 interactive terminals from the one mainframe.  Now we have PLATO V,
where the terminals contain micros.  Local editing of lessons, compilations,
execution, and other functions can be performed in the terminal without
accessing the mainframe.  The mainframe serves mainly as a data repository.
I have heard claims that PLATO V can support 10,000 terminals from the
ONE mainframe!  Most impressive.
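
	To make the division of labor concrete, here is a small, purely
illustrative 'C' sketch of the terminal's side of such an exchange: the
host sends a short text command and the terminal's own processor does the
drawing.  The command format and the character-cell "display" are invented
for the example; PLATO's actual protocol is certainly different.

/*
 * Hypothetical sketch of the PLATO-style split described above: the host
 * sends a short text command and the terminal's own processor does the
 * drawing.  The "display" here is just a character grid, and the command
 * format is invented for illustration.
 */
#include <stdio.h>
#include <string.h>

#define W 40
#define H 20

static char screen[H][W];

static void plot(int x, int y)
{
    if (x >= 0 && x < W && y >= 0 && y < H)
        screen[y][x] = '*';
}

/* midpoint circle algorithm: all of the work happens terminal-side */
static void draw_circle(int cx, int cy, int r)
{
    int x = r, y = 0, err = 1 - r;
    while (x >= y) {
        plot(cx + x, cy + y); plot(cx - x, cy + y);
        plot(cx + x, cy - y); plot(cx - x, cy - y);
        plot(cx + y, cy + x); plot(cx - y, cy + x);
        plot(cx + y, cy - x); plot(cx - y, cy - x);
        y++;
        if (err < 0) err += 2 * y + 1;
        else { x--; err += 2 * (y - x) + 1; }
    }
}

int main(void)
{
    const char *cmd = "DRAW CIRCLE 20 10 8";   /* as if sent by the host */
    int cx, cy, r, i, j;

    memset(screen, ' ', sizeof screen);
    if (sscanf(cmd, "DRAW CIRCLE %d %d %d", &cx, &cy, &r) == 3)
        draw_circle(cx, cy, r);

    for (i = 0; i < H; i++) {
        for (j = 0; j < W; j++)
            putchar(screen[i][j]);
        putchar('\n');
    }
    return 0;
}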

	So, I feel that you certainly have a good point in emphasizing
that there are applications which micros can't meet at this time.  Leave those
to the mainframe.  But there is also the point that there are many
applications which micros CAN meet, and, combined in a distributed
sense with a mainframe, the flexibility, responsiveness, and utility
of the pair can help you to realize a much better working environment.
Software is the key to all of this, and in many cases, you would have to
write it yourself to meet your applications.  Precompilers on the micros,
communications software, and other utilities which support the distributed
environment are necessary before you can begin to make effective use
of the system for your original application.  MIT, as I understand it,
plans to write most, if not all, of their support software in-house.
The Univ of Illinois did indeed write PLATO and microPLATO themselves also.

		Rick

LCAMPBELL%dec-marlboro@sri-unix.UUCP (03/19/84)

From:  Larry Campbell <LCAMPBELL@dec-marlboro>

When you refer to "6,000 IBM PC's (running Un*x on 68000 boards)",
it's probably a bit more accurate to say "6,000 68000-based Un*x
systems with power supplies and keyboards by IBM"...
   --------

mats@dual.UUCP (Mats Wichmann) (03/20/84)

In the midst of this topic I want to toss in another tidbit. UC Berkeley
was just given a rather large grant from IBM, which will take the form
of a s**t load of PC's. Nobody quite knows what the disposition will be
yet; everything is still in the formative stages. *BUT*, the Physics
Department has been using the University's computing facilities the past few years
because they have never gotten the money to replace their old IBM 1620 (yes, 
you read that right!!!); now it looks like they will never get their own 
machine because people are starting to say "why should we buy a large 
expensive machine, when we can get everybody a PC for their desk". The PC 
clearly does NOT meet the needs of large number crunching applications that
physicists and astronomers, to name two groups, are likely to perform. But 
because of the easy availability of the PC's, it looks like they may not be
able to get funding for a machine to do these sorts of things. The problem
here seems to be that the sort of hype surrounding the PC, and other
similar machines, is creating a blindness amongst the decision-makers
(who are typically NOT computer users to any great degree) to the total
needs of their computing environment. By all means, use a PC if it is
appropriate for your needs, but let's not completely ignore the need
for larger computational machines as well.


Another perspective is my own. When I was a student, competition for
the computing facilities at UC was fierce (although not quite as bad as
it is today). All undergraduate classes were confined to a bunch of
PDP-11/70's, only one of which belonged to the CS dept. outright. The purpose
was to provide a programming environment for testing out certain concepts
being taught in the classes. This environment could be provided by a micro
of the ilk of an IBM PC. Basically, what I needed for COURSE work was
the pascal interpreter/compiler, a c compiler, editor, and a couple
of other utilities (program beautifiers, printer spooler). Most of what
I learned about UNIX, and what made me employable as I came out of college,
was done on my own time, by browsing through the system and trying various
things out. Trying to put each student on an individual machine almost
eliminates this aspect - which is there for anyone who cares to use it
right now. With a small disk on the system, the amount of stuff that
can be stored permanently is quite limited. I had the opportunity to
browse through those portions of several huge disks which were not
protected - and managed to learn quite a bit from that. I would not
want to give this up. I don't really object to some of the load
being offloaded onto smaller machines, but not the entire curriculum.
And the lower-level the machine, the less the value to the student. Think
of it this way - most of you are involved with the UNIX system in some
way or another. If it were your decision to hire, say four years from
now when the current batch of undergrads have finished college, a 
programmer who had done all of his course work on one machine only, which
would look more attractive:

1) Apple II
2) IBM PC Running MS/DOS
3) IBM PC running PC/UX
4) 68000-based machine with UNIX System V or BSD 4.2
5) VAX 11/780

MY contention is that the Apple is already obsolete, and the technology of
the PC is already very near to it. The 68000 or the VAX, on the other hand,
offer lots of interesting features that, even if nobody is building those
particular machines any more, would seem to me to bode well for picking up
whatever the current technology is. Four years from now, probably all of
those machines will be near obsolete.

Okay, just my opinion, but certainly grounds for some thought, isn't it???
If it were my decision, the only way I would put students on something
like the PC would be if they were truly well networked together with
some bigger machines, such as Vaxen, so that the resources were there for
those who needed/wanted them.



	    Mats Wichmann
	    Dual Systems Corp.
	    ...{ucbvax,amd70,ihnp4,cbosgd,decwrl,fortune}!dual!mats

  It now became apparent (despite the lack of library paste)	 
  that something had happened to the vicar;	[ Edward Gorey ]

cowan@Udel-Relay.ARPA (03/23/84)

From:      Ken Cowan <cowan@Udel-Relay.ARPA>

	Your comments about PC's vs. VAX/unix were interesting.  I
happen to agree with what you said.  Unfortunately, I'm afraid the
audience on this net already understands the "use PC's for what they are
good for" philosophy.  Like you said, it is those administrators
who don't really use computers that need to learn more.

	This brings to mind a similar situation.  Everyone agrees that
a VAX 11/780 just isn't as powerful as, say, an IBM 3081.  What DEC
seems to be doing is providing a very nicely integrated network
(VAXCluster) so they won't lose people (or as many) to other vendors.
A cluster permits people to transparently access files on a remote
machine, access them from a node whose sole purpose is to be a file server
(the HSC50), and access remote print and batch queues as if they were local
(under VMS).  This clearly won't compete with mainframes running single jobs
that require huge resources, but it does permit dispersion of load without
hassling the user to do it.

	Is there already something out there for micros that does
analogous things?

						KC
						cowan@udel-relay.arpa

ron@Brl-Tgr.ARPA (03/23/84)

From:      Ron Natalie <ron@Brl-Tgr.ARPA>

Actually VAXCluster is a poor excuse for not having a new machine
out in over six years.  It's expensive too (albeit not as bad as
going IBM).  If all you want is UNIX and aren't locked into UNIBUS
peripherals, there are a whole handful of faster/cheaper minis on
the market with better support.

-Ron