[comp.arch] Speed Kills

fouts@bozeman.ingr.com (Martin Fouts) (06/09/90)

A) Some apparently random thoughts:

  1) Processors are much faster than programmers:

    In 1979, Kuck published a textbook on computer architecture (sorry, I
    can't remember the reference) in which he claimed that between Eniac
    and then-current machines there was a (roughly) six order of magnitude
    performance improvement in computer systems, based on the observation
    that Eniac took (his claim) 300 milliseconds to do a floating point
    add and then-current supercomputers (I forget which one) took 300
    nanoseconds.  Since then the add time has come down to about 3 ns
    (actually 4, but we're into orders of magnitude here), so I would
    claim that another decade has added two more orders of magnitude, for
    a total of 10^8 improvement in raw compute speed.
  
    In the same chapter he recalled that it took "about an afternoon" to
    wire up Eniac to solve a simple system of linear equations.  (Let's
    call that 4 hours.)  I would claim that it currently takes the same
    length of time to write the program and/or enter the data needed to
    solve a system of equations of about the same size.  However, I'll be
    willing to give you that a "power user" with the data on line and a
    good canned system can solve the problem in .4 hours (24 minutes).  At
    most one order of magnitude.  I would argue that it couldn't be done
    in .04 hours (2.4 minutes = 144 seconds), and I think everyone would
    agree that it can't be done in .004 hours (14.4 seconds).
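To make the "canned system" end of the comparison concrete, here is a minimal sketch of the computation Eniac was wired for: Gaussian elimination with partial pivoting on a small dense system.  Python and the helper name `solve` are assumed purely for illustration (they are not from the original post); the point is that the machine-side arithmetic is now effectively instantaneous, so the hours are all human setup time.

```python
# Minimal sketch, assuming a small nonsingular system given as plain lists.
# Gaussian elimination with partial pivoting, no libraries needed.

def solve(A, b):
    """Solve Ax = b for a small dense nonsingular system."""
    n = len(A)
    # Build the augmented matrix so row operations carry b along.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest pivot.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate the column below the pivot.
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # Back-substitution from the last row up.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3
print(solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))
```

The solve itself runs in well under a millisecond on any current machine; entering the coefficients is the afternoon.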

  2) User friendly systems aren't getting any friendlier:

    Lee Feldstein gave an interview to a trade rag in which he pointed out
    that for his purposes, fancy laser printers and hot 386 systems had
    almost reached the power and quality where they could produce the same
    word processing quality in the same time as his old Kaypro/Diablo
    combination at only twice the cost and requiring only twice the time
    to do anything.  He was very unhappy with what he saw as a regression
    in the usability of computer systems.

  3) Programmers aren't doing new work, they are doing old work on
     new machines:
  
    I've asked several Un*x development managers how long it had taken to
    multithread their SVR3 kernels, and they've all given me the same
    length of time.  When I asked how long they thought it would take to
    multithread SVR4, they all thought it would take the same time as SVR3
    had taken, even though they were using people who "knew how to do it"
    the second time.


B) Observations:

   1) Currently most programming is porting old code to new machines.

     I don't have the precise numbers handy, but something like 90% of
     all programmers are at work either porting tools to new machines
     or upgrading software to work under new releases of operating
     systems.  Most of the rest are working on adding "new" features
     to operating systems, in an apparent attempt to make the systems
     more useful.

   2) CPU vendors don't help.

     Next year's Widget-X is going to go X times as fast as this
     year's widget *but* the time it will take to port and tune
     all of your codes is on the same order of magnitude as the
     time it is going to take to introduce Widget-X2 to replace
     widget X.  [This is not an exaggeration.  At one vendor's
     shop I know of, they are already porting to an "X2" simulator
     while porting to real "X" hardware.]

   3) We don't know how to write reusable code.

     It takes so long to "fix" Un*x to be multithreaded because it is
     hard to follow the path:

                      +-  multithread --+
     original tree ---+                 +- merge --
                      +-  new stuff ----+

     Especially when the upper and lower branches are being followed
     by different organizations.  I would be willing to bet that the
     effort needed to add sockets to SV was duplicated by more than
     100 separate shops...

C) Prediction:

   As the rate of introduction and obsolescence of new generations of
   hardware increases, the development of truly new software
   functionality will decrease, dropping to zero.  [I claim that this
   is an observation.  There hasn't been any "new" software since the
   middle 70s.]

   Further, the rate of introduction may increase to the point that
   it will be impossible to utilize the speed of the next processor
   before it is outdated.

   The result would be complete stagnation.

D) PLEA:

   Speed Kills.  This whole problem stems from the need to port
   software to new generation machines which were made incompatible
   with old generation machines because that was the way to make
   them faster.  Most of the programming talent is going into
   supporting the SOS (same old stuff) on YAHC (yet another hot CPU).

   I propose that a use for comp.arch which is better than

      "my (insert noun) is (insert compartive adjective) than
       your (insert noun)"

   arguments would be to discuss the topic:

   What architectures can be proposed now which:

   1) are extensible in ways which allow high performance implementations
      without loss of compatibility.  (It can be done.  IBM did it
      in the 60s and got 20 years of extensibility from an
      architecture. They blew the next part though.)

   2) enhance programmer productivity by supporting reusability.

   3) Support improved *user* speed rather than hot *cpu* speed.


E) Why bother?

   There is a huge untapped market for "computrons" which will remain
   untapped until they are as easy to use as toasters.  They aren't
   going to get easy to use if all of our effort goes into porting the
   SOS to YAHP.

   Besides, I've had a lot of usable functionality at one time or
   another on one box or another, and I would rather have it all in
   one place at a usable speed than a little bit of it in each of a
   lot of places but each one fast.

F) Don't flame?

   Before you decide to attack the "nothing new under the sun"
   premise, consider your computer history very carefully.  In
   operating systems, languages, networking, and programming
   environments, I can find 15 to 20 year old systems which between
   them had all of the features you are going to think up.  (Including
   networks and multiprocessors.)

   The only thing you are going to be able to point out as
   advances are speed, cost, and some kinds of 3d graphics.

--
Martin Fouts

 UUCP:  ...!pyramid!garth!fouts  ARPA:  apd!fouts@ingr.com
PHONE:  (415) 852-2310            FAX:  (415) 856-9224
 MAIL:  2400 Geng Road, Palo Alto, CA, 94303

If you can find an opinion in my posting, please let me know.
I don't have opinions, only misconceptions.

jkenton@pinocchio.encore.com (Jeff Kenton) (06/12/90)

From article <447@garth.UUCP>, by fouts@bozeman.ingr.com (Martin Fouts):
> 
> A) Some apparently random thoughts:
> 
>	. . . 
> 
> F) Don't flame?
> 
>    Before you decide to attack the "nothing new under the sun"
>    premise, consider your computer history very carefully.  In
>    operating systems, languages, networking, and programming
>    environments, I can find 15 to 20 year old systems which between
>    them had all of the features you are going to think up.  (Including
>    networks and multiprocessors.)
> 
>    The only thing you are going to be able to point out as
>    advances are speed, cost, and some kinds of 3d graphics.
> 
> 

Thanks for a good article.

Lots of interesting points, and, of course, a nit to pick.  Quantity (speed)
affects quality.  If you can do something 1000 times faster than you could
before it makes a tremendous difference in what is practical and in how the
tools are used.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      jeff kenton  ---	temporarily at jkenton@pinocchio.encore.com	 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

peter@ficc.ferranti.com (Peter da Silva) (06/12/90)

In article <447@garth.UUCP> fouts@bozeman.ingr.com (Martin Fouts) writes:
>    What architectures can be proposed now which:

>    1) are extensible in ways which allow high performance implementations
>       without loss of compatibility.

Well, the 68000 has already got 10 years of compatible implementations
under its belt, all the while being the second man on the totem pole.  I
think 20 years is not unreasonable.
-- 
`-_-' Peter da Silva. +1 713 274 5180.  <peter@ficc.ferranti.com>
 'U`  Have you hugged your wolf today?  <peter@sugar.hackercorp.com>
@FIN  Dirty words: Zhghnyyl erphefvir vayvar shapgvbaf.

aw1r+@andrew.cmu.edu (Alfred Benjamin Woodard) (06/12/90)

There does seem to be a trend toward reusable code, although it is
more on a macro level than I think is good.

Examples are things like xwindows and Xt.  When I say a macro level I
mean that each of these is one product that is used by all to get certain
functions to work together.  Take a look at the comp.unix.sources archives.
 
I would kind of like to see something like a newsgroup where you exchange
short routines that do specific functions, little black boxes if you
will.

I don't know how favorably companies will look at exchanging
short pieces of code, but I think in the long run it will eventually
speed up programming considerably, because there would be a huge number
of little routines that have accumulated over the years, and programmers
could just assemble the pieces to make something that would work
nicely.

Another thing that would be beneficial about this would be
that having other people see your source would allow them to find bugs
that you didn't see, and also allow them to tweak it so that it runs
faster, maybe even suggest different ways of implementation that would be
more efficient.

-ben

news@haddock.ima.isc.com (overhead) (06/13/90)

In article <447@garth.UUCP> fouts@bozeman.ingr.com (Martin Fouts) writes:
>  1) Processors are much faster than programmers:

	You talk about RISC architectures and high end 386's
	running UNIX as if they were aimed at the home computer
	market.  How about something that is:

	My Mac II is faster than the TOPS 20 system my school
	bought in 1980.  It was two orders of magnitude cheaper.
	It has better software.  It wasn't available in 1975.

>    In the same chapter he recalled that it took "about an afternoon" to
>    wire up Eniac to solve a simple system of linear equations.

	I haven't felt the need to solve any linear equations at
	home, unless it were for some sound or graphics
	transform, and I'm just not aware of it.  Of course, not
	being aware of it is the best possible scenario.

>  2) User friendly systems aren't getting any friendlier:

	On the Mac, programs are very easy to learn.  The Mac has
	a very regular command set with online help.  It also
	does internationalization.  Friendly to more people.

>    Lee Feldstein gave an interview to a trade rag in which he pointed out
>    that for his purposes, fancy laser printers and hot 386 systems had
	      ^^^^^^^^^^^^
>    almost reached the power and quality where they could produce the same
>    word processing quality in the same time as his old Kaypro/Diablo
>    combination at only twice the cost and requiring only twice the time
>    to do anything.  He was very unhappy with what he saw as a regression
>    in the usability of computer systems.

	If an experienced Kaypro/Diablo person could produce a
	document as fast as someone similar who has spent a
	similar amount of time on a Mac/Laser Printer, I'd be
	amazed.  Just text?  We're talking typing time, minimal
	formatting, and print time.  My Mac spits out 5 pages per
	minute.  No Diablo ever did that.

	Diablo printers can still be purchased.  An XT clone with
	a Diablo is at least as good as a Kaypro/Diablo.  Laser
	printers can also do graphics.  If someone buys a new
	screwdriver to use as a chisel, whose fault is that?

>C) Prediction:

>   As the rate of introduction and obsolescence of new generations of
>   hardware increases, the development of truly new software
>   functionality will decrease, dropping to zero.  [I claim that this
>   is an observation.  There hasn't been any "new" software since the
>   middle 70s.]

>D) PLEA:

>   Speed Kills.  This whole problem stems from the need to port
>   software to new generation machines which were made incompatible
>   with old generation machines because that was the way to make
>   them faster.  Most of the programming talent is going into
>   supporting the SOS (same old stuff) on YAHC (yet another hot CPU).

	It is easier to port stuff to a new architecture that
	runs an old OS than to port stuff from one OS to another
	on the same hardware.  For example, porting from 4.3 BSD
	on a VAX to a Sequent (Dynix) is tons easier (5 to 10
	orders of magnitude) than porting from the Mac OS to a
	Sun III.  What we need is standardization.  This also
	means verification.  It's being worked on.

>   What architectures can be proposed now which:

>   3) Support improved *user* speed rather than hot *cpu* speed.

	As you've noted, a new architecture isn't going to do
	this.  Only an old architecture will do this.  Thus, you
	want to build a machine using an old architecture (DEC
	still makes tons of money on the PDP-11), or you want to
	build an architecture that will survive to be old.

>E) Why bother?
>   There is a huge untapped market for "computrons" which will remain
>   untapped until they are as easy to use as toasters.  They aren't
>   going to get easy to use if all of our effort goes into porting the
>   SOS to YAHP.

	I don't see it as untapped.  UNIX machines haven't gotten
	there yet.  OS/2 isn't there either.

	In 1980(ish) Apple, Commodore, IBM went for it.  Many
	others didn't make it.  It is hard to stick with an
	architecture for 20 years.

>F) Don't flame?
>   Before you decide to attack the "nothing new under the sun"
>   premise, consider your computer history very carefully.

>   The only thing you are going to be able to point out as
>   advances are speed, cost, and some kinds of 3d graphics.

	Since 1975?  How about integration?  My Mac has a drawing
	program that does integrated paint + draw with color and
	with mixed resolutions and 9 feet squared.  Word
	processing good enough for publishing color magazines,
	with automatic TOC and indices.  Real color processing
	and photo-manipulation.  Real databases, which can
	include sound, graphics.  Sound digitizing, editing.
	Real time synth controls, musical score editing &
	printing.  Animation.  Even programming and debugging are tons
	better.  All applications can share data.  It is all
	easy to use.

	Software moves from research to the market.  It generally
	must be rewritten for the market.  Not just for the new
	machines, but for the new users.

	OCR software is becoming more versatile and usable daily.
	CPU & I/O speed are requirements.

Stephen.

gil@banyan.UUCP (Gil Pilz@Eng@Banyan) (06/13/90)

In article <447@garth.UUCP> fouts@bozeman.ingr.com (Martin Fouts) writes:
>  3) Programmers aren't doing new work, they are doing old work on new
>     machines:

Richard Stallman has some interesting things to say about the why's
and how's of this problem.  If you're not familiar with any of his
ideas on this subject you should check out the "GNU Manifesto"

Gilbert Pilz Jr. "sick, and proud of it" gil@banyan.com

amos@taux01.nsc.com (Amos Shapir) (06/13/90)

In article <447@garth.UUCP> fouts@bozeman.ingr.com (Martin Fouts) writes:
>
>A) Some apparently random thoughts:
>
>  1) Processors are much faster than programmers:

The trouble is, the spec for humans has been frozen for a million years
or so (well, at least 5750 anyway...).  The vendor doesn't seem to be
interested in this line any more, so no upgrades are going to be available. :-)

>
>  2) User friendly systems aren't getting any friendlier:

Yes they are.  A big screen with windows and a mouse is an order of magnitude
friendlier than a VT100, which in itself is friendlier than a tty and paper
tape.

>
>  3) Programmers aren't doing new work, they are doing old work on new
>     machines:

>B) Observations:
>
>   1) Currently most programming is porting old code to new machines.

Most wheels were invented a long time ago; maintenance has been
the bulk of programming work for quite a while.  That also depends on
what you call "old work" - features are being added to old programs
which would not have been feasible when those programs were released.

>   2) CPU vendors don't help.

Simply not true - most major vendors keep compatibility with older
products.  It may take some time to upgrade an application from a 286
to a 486, but it could run with the old features from day one;
upgrading from a NS32016 to a 32532 (plug, plug :-)) takes as long
as a reboot.  Most high-level languages in use allow
portability; in any case, it shouldn't take as long as designing the
next generation of CPU's.

>
>   3) We don't know how to write reusable code.
>

Speak for yourself...

>
>C) Prediction:
>
>   As the rate of introduction and obsolescence of new generations of
>   hardware increases, the development of truly new software
>   functionality will decrease, dropping to zero.  [I claim that this
>   is an observation.  There hasn't been any "new" software since the
>   middle 70s.]

See above about "old software".  New hardware allows us to dust off old
algorithms that went unused because they took too much time and/or space
on the old machines.  New features constantly appear to support new
types of hardware that simply didn't exist a few years ago.  Not
everyone can invent a new algorithm for "sqrt" (if only that qualifies as
"truly new software"), but even that has been done lately.


-- 
	Amos Shapir		amos@taux01.nsc.com, amos@nsc.nsc.com
National Semiconductor (Israel) P.O.B. 3007, Herzlia 46104, Israel
Tel. +972 52 522408  TWX: 33691, fax: +972-52-558322 GEO: 34 48 E / 32 10 N

gillies@p.cs.uiuc.edu (06/13/90)

Martin Fouts has some good points, but I'm going to be a jerk and add
my 2 cents anyway.

First, "most software vendors spend all their time porting code".
This is a GOOD thing.  It makes all computers run (essentially) the
same software, reducing learning time.  Since the market for portable
code is n times larger than the market for machine-dependent code, you
may invest n times more money in a clean design, additional features,
and providing for posterity, by allowing the software to escape from
orphan machines.

Now some of the time spent porting code is due to overdiversity in UNIX.
UNIX should take a lesson from the IBM PC.  PC vendors have the good
sense to test new PC's for compliance (hardware, BIOS) with the PC
"standard" (whole companies have been established to provide testing
services and consulting).  This could be done with UNIX, and it would
save a lot of useless porting time.

>    As the rate of introduction and obsolescence of new generations of
>    hardware increases, the development of truly new software
>    functionality will decrease, dropping to zero.  [I claim that this
>    is an observation.  There hasn't been any "new" software since the
>    middle 70s.]

New [I/O] hardware always begets new software, and there is no
slowdown in the introduction of I/O hardware.  I don't know what you
mean by "new software" - are you talking about productivity tools
[visicalc, 1979?].  I would say that a large part of the 1980's was
spent learning how to integrate software at a level that was
unthinkable in the 1970's.  Programs in the 1970's never talked to
each other in nontrivial ways.  Imagine starting Mathematica on
one of 20+ different machines, displaying some 3-D plots on the
screen, rotating the viewpoint with the mouse [talking about the
Macintosh now], copying the picture into a conference paper in a WYSIWYG
word processor, and printing it out on a PostScript printer.  Six
different programs have just talked to each other {math kernel, math
user interface, window system, word processor, Mac printing manager,
PostScript engine} using 3 separate protocols {math kernel, PICT,
PostScript} and (up to) 3 separate machines {math kernel, Mac, printer}.

peter@ficc.ferranti.com (Peter da Silva) (06/13/90)

In article <4040@taux01.nsc.com> amos@taux01.nsc.com (Amos Shapir) writes:
> In article <447@garth.UUCP> fouts@bozeman.ingr.com (Martin Fouts) writes:
> >  1) Processors are much faster than programmers:

> The trouble is, the spec for humans has been frozen for a million years
> or so (well, at least 5750 anyway...).  The vendor doesn't seem to be
> interested in this line any more, so no upgrades are going to be available. :-)

It doesn't help that the available user groups tend to be more interested
in waiting for upgrades or petitioning the vendor than in trying to develop
new software and peripherals. This has started to change in the past few
hundred years in some parts of the world, but there is considerable pressure
from the more traditional organisations.

(actually, this whole thread should perhaps be redirected to talk.religion)
-- 
`-_-' Peter da Silva. +1 713 274 5180.  <peter@ficc.ferranti.com>
 'U`  Have you hugged your wolf today?  <peter@sugar.hackercorp.com>
@FIN  Dirty words: Zhghnyyl erphefvir vayvar shapgvbaf.

ggw@wolves.uucp (Gregory G. Woodbury) (06/14/90)

In article <447@garth.UUCP> fouts@bozeman.ingr.com (Martin Fouts) writes:
>
>    In the same chapter he recalled that it took "about an afternoon" to
>    wire up Eniac to solve a simple system of linear equations.  (let's
>    call that 4 hours.) I would claim that it currently takes the same
>    length of time to write the program and/or enter the data needed to
>    solve a system of equations of about the same size.  However, I'll be
>    willing to give you that a "power user" with the data on line and a
>    good canned system can solve the problem in .4 hours (24 minutes)  At
>    most one order of magnitude.  I would argue that it couldn't be done
>    in .04 hours (2.4 minutes = 144 seconds) and I think everyone would
>    agree that it can't be done in .004 hours (14.4 seconds.)

	Wait a minute.  It may have taken several hours (more like six)
to "program" the Eniac, but it still took more time to actually run those
solutions through the machine.  The later portions of this paragraph
include solution time.  This is mixing apples and oranges.

	Besides, someone as fluent in programming a modern machine as
the person who could wire Eniac without really spending time planning it
would be able to write the program in about 3 minutes!  Wiring Eniac was
pretty boring, figuring out what to wire was the real job, and the
estimates of time for that are not available from this quoted source.

	I suspect that if one included all the necessary time (thinking
about the problem formally, planning the wiring diagram, doing the
wiring, and running the problem to solution [assuming no bugs]) then the
difference might be 3 orders of magnitude.  In talking about a "simple"
set of simultaneous equations, some packages today do allow you to just
plug in coefficients and go in less than 1 minute.

	In short, there isn't enough info here to really compare the
"programming time" costs.
-- 
Gregory G. Woodbury @ The Wolves Den UNIX, Durham NC
UUCP: ...dukcds!wolves!ggw   ...mcnc!wolves!ggw           [use the maps!]
Domain: ggw@cds.duke.edu     ggw%wolves@mcnc.mcnc.org
[The line eater is a boojum snark! ]           <standard disclaimers apply>

usenet@nlm.nih.gov (usenet news poster) (06/27/90)

In article <502@garth.UUCP> fouts@bozeman.ingr.com (Martin Fouts) writes:
>
>The underlying point behind the original posting was ... to try to get 
>some conversation going towards architectures which scale over time.
>
>It has been done once or twice in the past.  IBM got a lot of mileage
>out of the 360 and Moto has done a reasonable job with the 68K.  How
>can we make those cases more common, rather than the exception?

DEC has gotten 15 years out of VAX/VMS.  How about the 8008/8088/80x86
line?  SPARC and MIPS will be around for quite a while.  The key seems 
to be achieving market success with an innovative design and then
supporting upward compatibility.  Once you have a body of third-party
software depending on your architecture, it will live a long life.
 
>--
>Martin Fouts

David States

peter@ficc.ferranti.com (Peter da Silva) (06/27/90)

In article <62863@sgi.sgi.com> karsh@trifolium.sgi.com (Bruce Karsh) and
in article <502@garth.UUCP> fouts@bozeman.ingr.com (Martin Fouts) are using
the word "lifespan" to mean different things:

Martin:
> I would like to see proposals for architectures with reasonable
> extended lifetimes and software tools [...] Moto has done a reasonable
> job with the 68K.

Bruce:
> The lifespan of a computer architecture is mostly determined by the amount
> of commercial software available for it.  The [...] Intel 8086 series...

Bruce is talking about how long the chip family sells. From this viewpoint,
the Intel 80x86 family is likely to be considered a success. Martin is talking
about how long software written for one member of the family can be considered
to be making reasonably good use of the leading edge. Martin wants to encourage
chips that don't require new operating systems every N years just to take
advantage of the top-end chips.

Really, it's debatable whether the 80386 is even using the same architecture
as the 8088. It emulates the silly thing, and has a similar instruction set,
but the differences between 386 and 86 mode are pretty fundamental.
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.
<peter@ficc.ferranti.com>

rpeglar@csinc.UUCP (Rob Peglar) (06/28/90)

In article <1990Jun27.032824.5138@nlm.nih.gov>, usenet@nlm.nih.gov (usenet news poster) writes:
> In article <502@garth.UUCP> fouts@bozeman.ingr.com (Martin Fouts) writes:
> >
> >The underlying point behind the original posting was ... to try to get 
> >some conversation going towards architectures which scale over time.
> >
> >It has been done once or twice in the past.  IBM got a lot of mileage
> >out of the 360 and Moto has done a reasonable job with the 68K.  How
> >can we make those cases more common, rather than the exception?
> 
> DEC has gotten 15 years out of VAX/VMS.  How about the 8008/8088/80x86
> line?  SPARC and MIPS will be around for quite a while.  The key seems 
> to be achieving market success with an innovative design and then
> supporting upward compatibility.  Once you have a body of third-party
> software depending on your architecture, it will live a long life.
>  
> >--
> >Martin Fouts
> 
> David States

Market success is dependent on many factors.  One of these factors is
innovative design - but not in microprocessors, at least for the broadest
market, that of the personal computer (loosely defined as whatever
system a non-technical person would use on their desktop).

For the next market "up" (down?), loosely defined as whatever system
a technical person (e.g. a practicing computer scientist) would use on their
desktop, innovative processor design has a greater impact on success.

Please define what market you mean in future postings.   The previous discussion
freely mixed vendors, processors, and operating systems as if those entities
were all equivalent.

IMHO, for the broadest market, the key to success is, in relative order,

	1.  Marketing
	2.  Marketing
	3.  Marketing

you get the point.  Marketing includes such issues as price, availability,
service, etc.etc.  Not processor design.  High-level (i.e. binary)
compatibility is important, but not as important as marketing.  

Rob
-- 
Rob Peglar	Comtrol Corp.	2675 Patton Rd., St. Paul MN 55113
		A Control Systems Company	(800) 926-6876

...uunet!csinc!rpeglar

fouts@bozeman.ingr.com (Martin Fouts) (07/09/90)

In article <2723@canisius.UUCP> pavlov@canisius.UUCP (Greg Pavlov) writes:

   In article <447@garth.UUCP>, fouts@bozeman.ingr.com (Martin Fouts) writes:
   > 
   > A) Some apparently random thoughts:
   > 
     I believe that most of this article is half-true.  But, while recently
     doing a task, I happened to think about some of the same issues and
     came to the opposite conclusion: that the same sort of task, which took
     me about two hours, would have taken me several days 10-15 years ago.  And
     the hardware I am operating on now costs approx. 15% of the hardware I was
     using then.

     The faster processors permit me to use higher-level software: reasonably
     sophisticated DBMS's and interactive/interpretative languages.  These chew
     up lots of resources but permit me to get much more done with less effort.

     I see the biggest gain in the ability to work iteratively in something
     approaching "real-time": the responses to my "commands" come quickly and I
     can react and adjust subsequent queries accordingly.  This requires
     interpretative software operating on fast processors (the latter to make
     the former effective).

     I personally see this as a big gain over the "good old days" of writing 
     Fortran code.  I don't miss them at all.

     greg pavlov, fstrf, amherst, ny

I agree with you.  I'm just saying that the hardware has gotten fast
enough now; let's concentrate on making it easy to use rather than
faster for a while.

Marty
--
Martin Fouts

 UUCP:  ...!pyramid!garth!fouts  ARPA:  apd!fouts@ingr.com
PHONE:  (415) 852-2310            FAX:  (415) 856-9224
 MAIL:  2400 Geng Road, Palo Alto, CA, 94303

If you can find an opinion in my posting, please let me know.
I don't have opinions, only misconceptions.