[comp.society] what to do with all those MIPS

p_nelson@apollo.UUCP (Peter Nelson) (04/20/88)

I wanted to raise a question about how rapid increases in compute
power available to individual users will affect the way we use
computers.      

Just lately, the amount of sheer compute power available to an
individual user has been taking huge leaps.  MIPS is admittedly a
poorly defined term (some say it stands for Meaningless Indicator
of Performance / Second), but there is no doubt that there are
about to be a lot of them out there.   My company (Apollo) recently
announced a workstation that will offer 40 - 100+ MIPS, depending
on configuration.  Startups Ardent and Stellar have also announced
high-performance products and we may reasonably expect that Sun,
HP, and Silicon Graphics will have competing products on the market.
Current prices for these machines are in the $70K - $90K range,
but competition and a growing market will, no doubt, lower them.

Modern workstations also allow the programmer to treat the network
as a virtual computer.  Paging across the network, subroutine calls
to other nodes, and distributed processing are all common to
architectures such as Apollo's.  If I want to do a 'build' or 'make'
involving dozens of compiles, I can distribute them across the net
so they will take little more time than one or two compiles on a
single machine.  Furthermore, the disk resources of the network,
which may be many 10's or 100's of gigabytes, are all transparently
accessible to me.  I suspect that Sun ('the network is the computer')
may offer something along the same lines, and while I think our other
major competitors are still trying to catch up in this area, this is
clearly the way of the future.
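
To make the distributed-build idea concrete, here is a rough sketch
(in Python, purely for illustration) of what it boils down to: farm
each compile out to a different node and wait for them all.  The node
names, the source files, and the use of 'rsh <node> cc' as the
remote-execution mechanism are assumptions made for the sketch; the
real facility does this transparently, as described above.

import subprocess
from itertools import cycle

nodes = ["node01", "node02", "node03", "node04"]        # hypothetical hosts
sources = ["main.c", "parser.c", "lexer.c", "eval.c"]   # hypothetical sources

# Launch each compile on the next node, round-robin, without waiting.
jobs = []
for node, src in zip(cycle(nodes), sources):
    cmd = ["rsh", node, "cc", "-c", src]
    jobs.append((src, subprocess.Popen(cmd)))

# Wait for every remote compile to finish and report any failures.
for src, proc in jobs:
    if proc.wait() != 0:
        print("compile of %s failed" % src)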

A few years ago the compute resources available to a single user 
may have been 1 or 2 MIPS and a few 10's of megabytes of virtual 
address space.  A few years from now a typical user will have 100
MIPS and a seamless virtual address space of gigabytes, not to     
mention decent graphics, for a change.  A transparent, heterogeneous
CPU environment will round out the improvements.

I was wondering whether any of this will change the way we use
computers or the kinds of things we do with them.  Most of what I've
seen so far is people doing the Same Old Things, just faster.  Now
we can ray-trace an image in 5 minutes that used to take an hour; now
we can do a circuit simulation in an hour that used to run overnight;
now we can do a 75-compile 'build' in 5 minutes that used to take hours,
etc.  

I'm concerned that we (or I, anyway) may lack imagination.  The basic 
tools of my trade (software engineer) are compilers, linkers, interactive
debuggers and software control products (DSEE, in this case).  I've
used things like this for years.  The ones I have now are faster and
fancier than what I had a few years ago, but they're not fundamentally
different in concept.  CAD packages allow the user to enter a schematic,
say, and do a simulation or do the routing and ultimately even the chip
geometry, but apart from running faster, handling more gates, tighter
design rules, etc., they are not fundamentally different in concept
from what engineers were using 5 years ago.  Database systems
still do similar things to what they've always done as well, just faster,
with more data, and better pie-charts (or whatever). 

Does anyone have any thoughts about whether (or when) huge leaps
in compute resources might result in fundamentally *different* ways of
using computers?   We always used to worry about being 'disk-bound' or
'CPU-bound' or 'network-bound'.  Are we in any danger of becoming 
'imagination-bound'?

Peter Nelson