[comp.society.futures] what to do with all those MIPS

nelson_p@apollo.uucp (04/19/88)

  I wanted to raise a question about how rapid increases in compute
  power available to individual users will affect the way we use
  computers.      

  Just lately, the amount of sheer compute power available to an
  individual user has been taking huge leaps.  While noting that
  MIPS is a poorly defined term (some say Meaningless Indicator
  of Performance / Second), there is no doubt that there are 
  about to be a lot of them out there.   My company (Apollo) recently
  announced a workstation that will offer 40 - 100+ MIPS, depending
  on configuration.  Startups Ardent and Stellar have also announced
  high-performance products and we may reasonably expect that Sun,
  HP, and Silicon-Graphics will have competing products on the market.
  Currently prices for these machines are in the $70K - $90K range 
  but competition and a growing market will, no doubt, lower them.

  Modern workstations also allow the programmer to treat the network
  as a virtual computer.  Paging across the network, subroutine calls
  to other nodes, and distributed processing are all common to
  architectures such as Apollo's.  If I want to do a 'build' or 'make'
  involving dozens of compiles, I can distribute them across the net
  so they will take little more time than one or two compiles on a 
  single machine.  Furthermore, the disk resources of the network,
  which may amount to many tens or hundreds of gigabytes, are all
  transparently accessible to me.  I suspect that Sun ('the network is
  the computer') may offer something along the same lines, and while I
  think our other major competitors are still playing catch-up in this
  area, clearly this is the way of the future. 
 
  A few years ago the compute resources available to a single user 
  may have been 1 or 2 MIPS and a few 10's of megabytes of virtual 
  address space.  A few years from now a typical user will have 100
  MIPS and a seamless virtual address space of gigabytes, not to     
  mention decent graphics, for a change.  A transparent, heterogeneous
  CPU environment will round out the improvements.

  I was wondering whether any of this will change the way we use com-
  puters or the kinds of things we do with them.  Most of what I've 
  seen so far is people doing the Same Old Things, just faster.  Now
  we can ray-trace an image in 5 minutes that used to take an hour; now
  we can do a circuit simulation in an hour that used to run overnight;
  now we can do a 75-compile 'build' in 5 minutes that used to take hours,
  etc.  
 
  I'm concerned that we (or I, anyway) may lack imagination.  The basic 
  tools of my trade (software engineer) are compilers, linkers, interactive
  debuggers and software control products (DSEE, in this case).  I've
  used things like this for years.  The ones I have now are faster, and
  fancier than what I had a few years ago but they're not fundamentally
  different in concept.  CAD packages allow the user to enter a schematic,
  say, and do a simulation or do the routing and ultimately even the chip
  geometry, but except that they can do it faster now, and handle more
  gates, tighter design rules, etc, they are not fundamentally different
  in concept than what engineers were using 5 years ago.  Database systems
  still do similar things to what they've always done as well, just faster,
  with more data, and better pie-charts (or whatever)  8-). 

  Does anyone have any thoughts about whether (or if or when) huge leaps 
  in compute resources might result in fundamentally *different* ways of
  using computers?   We always used to worry about being 'disk-bound' or
  'CPU-bound' or 'network-bound'.  Are we in any danger of becoming 
  'imagination bound'?

                                                  --Peter Nelson

bzs@BU-CS.BU.EDU (Barry Shein) (04/19/88)

Ah, my favorite subject...

I've been talking for a year or so with vendors like Encore who are
threatening (?) to deliver "minis" in the range of 1000MIPs or more in
the near future (12..24 months.) I suspect this might breathe new
interest into a currently rather boring mini market. One interesting
question also is what will some of the folks who build mainframes do
when all this happens (although they retain a big lead in disk I/O
performance, some of that will surely falter also). I mean, really,
deliver 10,000 MIPS in the same time frame?

No, I don't think that will be the rational response to workstations
with 100MIPs (<$100K) and minis with 1000MIPs ($100K..$500K.) There
will be customers (JC Penneys) who just need the I/O channels of the
big mainframes and computes are secondary (an IBM3090/600 should
deliver around 180MIPs right now, that's not *that* shameful :-), but
I can't help but think there are a lot of customers out there who may
hesitate to blow $10M on a mainframe if they can do it on a $200K mini
(not to mention the power, real estate, operational philosophy [you
don't need 50 people to run a mini like you do a mainframe] etc.)
Seriously, how big can their jobs be? How much can they possibly have
grown since they managed on a 10MIPs mainframe 5 years ago?

Specifically, I have been interested in something I refer to as
"wasteful computing". We have reached a point where there is probably
more "waste" from idle processors than from over-utilization (or we are
about to.) The software hasn't kept up (as was predicted by most everyone.)

Note that you should avoid the value-laden meaning of "waste" in
this context. There's nothing wrong with an idle CPU, it just is
interesting.

Modern workstations already exhibit the beginnings of wasteful
computing. If you showed those bitmap screens and the computations
involved in updating them to someone in the 60's they would have run
screaming out the door. Imagine paying for the cycles and
kilocore-ticks spent dragging a window across the screen. It probably
would have cost you $15 or so in 1975 at government rates (all I mean
is that they have standard rates, form A-21.)

Here's something specific that occurred to me the other day. Everyone
remember the "histogram" or "frequency" sort? That's a sort algorithm
that runs in time linear in N (N = the number of elements.) It works like this:

	DECLARE BIGARRAY[ADDRESS_RANGE]

	while not EOF
	  READITEM(I)
	  BIGARRAY[I] = BIGARRAY[I] + 1
	end

That's it: when you hit EOF, one sweep of BIGARRAY emits the data in
sorted order. It occurred to me that you can now easily sort 24-bit
data items on a modest workstation.
Most of this is due to larger memories and not MIPs but I think most
of you old timers would agree that you were taught this sort as
basically a useless algorithm due to its vast memory usage. Hmm,
seems useful again all of a sudden! What happened?!
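
For the curious, here is a minimal sketch in C of the counting
("histogram") sort described above, assuming 24-bit unsigned keys read
one per line from stdin; the names and I/O format are just illustrative,
not part of the original description:

	#include <stdio.h>
	#include <stdlib.h>

	#define RANGE (1L << 24)                   /* 24-bit key space */

	int main(void)
	{
	    unsigned *count = calloc(RANGE, sizeof *count);  /* ~64 MB */
	    unsigned long key, i;
	    unsigned n;

	    if (count == NULL)
	        return 1;

	    while (scanf("%lu", &key) == 1 && key < RANGE)
	        count[key]++;                      /* one linear pass  */

	    for (i = 0; i < RANGE; i++)            /* sweeping the array emits */
	        for (n = 0; n < count[i]; n++)     /* the keys in sorted order */
	            printf("%lu\n", i);

	    free(count);
	    return 0;
	}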

Anyhow, more later...

	-Barry Shein, Boston University

bowles@LLL-CRG.LLNL.GOV (Jeff Bowles) (04/19/88)

I remember seeing a video in 1985 of the Lucasfilm graphics machine
that had so many cycles to spare that its cursor was a buzzing bumblebee.
How much did it take to do that? A fair amount of CPU, especially measured
by what you were used to five years ago.

I keep gauging current memory (10M wasn't uncommon 3-5 years ago on most
VAX-size systems, and 2-4 years ago smart terminals would be delivered with
0.5-2M, and now we're talking about 100M and in some cases, gigabyte memory
on machines) and comparing to the 3/4M we had on the 11/70 back home. There
are those of you who think THAT is a big number, and remember the 1130 days
when 8K or 32K was a big deal. (I may be off in the numbers - the 11/70 was
my first pseudo-real experience with this sort of thing.)

I wonder what's wasteful and what's not. Certainly, if you use the amount
of memory/MIPS/disk to keep you from having to THINK about a problem, you've
become wasteful - several examples follow:
1) Why houseclean and remove/archive unused files if there's loads of disk
   space?
2) Why not use the simplest sorting algorithms all the time, if CPU is easy
   to come by?
3) Why use your language's equivalent to "packed array of..." when you have
   lots of memory?

Many portability arguments come up as answers to these sorts of things - just
because you have it cushy doesn't mean everyone who receives copies of your
software will have it as nice. Also, more memory/disk/MIPS enable you to get
farther into a problem, as opposed to solving it faster (sometimes) - I believe
that Tom Duff quotes a law about graphics, that generating high-res images
will take N seconds, and if you double the speed of the CPU, it'll still take
N seconds - because you're now interested in generating more complex images,
not twice as many of the older images.

	Jeff Bowles

seida%martin-den.ARPA@BUITA.BU.EDU (Steven Seida) (04/19/88)

In response to:

	Date: 18 Apr 88 20:09:00 GMT
	From: apollo!nelson_p%apollo.uucp%beaver.cs.washington.edu%bu-cs.bu.edu@buita.BU.EDU
	Subject: What to do with all those MIPS
	Message-Id: <3b8a861f.44e6@apollo.uucp>
	Sender: info-futures-request%bu-cs.bu.edu@buita.BU.EDU
	To: info-futures@bu-cs.bu.edu
	
	
	  Does anyone have any thoughts about whether (of if or when) huge leaps 
	  in compute resources might result in fundamentally *different* ways of
	  using computers?   ...
	
	                                                  --Peter Nelson
	
I am reading the science fiction book "Necromancer" by ??????????  that 
seems to explore some of the possibilities of computing in the far-
distant future.  The basic concept is that there is one big network
that everyone can have access to through a user interface that effectively
plugs into your brain.  And that accessing information becomes the real
interest of computer wizards. Solving problems takes a back seat and
is mainly directed at breaking through protection software.

Maybe a little shadow of the present.

					Steven Seida

goldfain@osiris.cso.uiuc.edu (04/20/88)

No.

PJS@grouch.JPL.NASA.GOV (Peter Scott) (04/20/88)

Sorry to burst your bubble, but it takes only a few instructions to
implement a buzzing bumblebee cursor (erase last sprite; increment count
MOD N; display count-th sprite at new coordinates).  Has anyone done this
for the Mac?  Should be easy.
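
A hedged sketch of that loop in C, with erase_sprite(), draw_sprite()
and get_mouse() standing in for whatever the local graphics library
actually provides (they are assumptions, not a real API):

	/* Hypothetical graphics primitives -- stand-ins, not a real API. */
	extern void erase_sprite(int frame, int x, int y);
	extern void draw_sprite(int frame, int x, int y);
	extern void get_mouse(int *x, int *y);

	#define NFRAMES 8                  /* frames in the bee animation */

	void animate_cursor_tick(void)     /* call once per timer tick */
	{
	    static int frame = 0, last_x = 0, last_y = 0;
	    int x, y;

	    erase_sprite(frame, last_x, last_y);   /* erase last sprite     */
	    frame = (frame + 1) % NFRAMES;         /* increment count MOD N */
	    get_mouse(&x, &y);
	    draw_sprite(frame, x, y);              /* draw at new coords    */
	    last_x = x;
	    last_y = y;
	}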

I have read articles on cryptanalysis, number theory, fluid dynamics,
and ray-tracing graphics that make it plain that the next few
orders of magnitude of CPU cycles are already spoken for, at least in
those fields.  However, it will be interesting to observe the impact of
two orders of magnitude improvement in performance for personal computers,
and their applications in the home (speech-recognizing vacuum cleaners?
Image-processing toasters?)

Peter Scott (pjs%grouch@jpl-mil.jpl.nasa.gov)

sullivan@vsi.UUCP (Michael T Sullivan) (04/20/88)

In article <3b8a861f.44e6@apollo.uucp>, nelson_p@apollo.uucp writes:
> ...
>   Does anyone have any thoughts about whether (of if or when) huge leaps 
>   in compute resources might result in fundamentally *different* ways of
>   using computers?   We always used to worry about being 'disk-bound' or
>   'CPU-bound' or 'network-bound'.  Are we in any danger of becoming 
>   'imagination bound'?

This always happens when new technology gets not so new.  Brings to mind
the story of how Western Union could have been what AT&T is (or was before
the breakup).  Their response was "We're not in the phone business, we're
in the telegraph business."  When things start getting stale there is always
someone on the horizon with a new way of thinking, brought about because
he/she hasn't used the old way.

I think that parallel computers are going to bring about a big change in
our current way of thinking.  We are going to have to think of programs
that can be written to take advantage of parallelism, instead of
the current linear way of thinking.

-- 
Michael Sullivan		{uunet|attmail}!vsi!sullivan
				sullivan@vsi.com
HE V MTL

josh@klaatu.rutgers.edu (J Storrs Hall) (04/20/88)

My guess is that "all those MIPS" are going to make all those robots
that everybody thought were coming ten years ago, possible in the 
next ten.  The crux is vision.  There's a host of tasks that you can
do with cheap, low-precision "effectors" if you have visual feedback,
that are virtually impossible without it.  

Sometime in the next ten years, there will be a two-year period before
which there are virtually no household robots and after which they're
common.  I reason by analogy to CDs and VCRs, and on the theory
that there are plenty of yuppies out there who would easily spend
$5000 for a household factotum.  (Myself included.)  

We're at the point, I think, where the underlying technology is moving
enough faster than the product developers that a sort of catastrophe-
theory effect happens:  We'll be well beyond the stage where robots
become feasible (and cost-effective) before the marketing departments 
realize they are even possible.  Part of the effect will be due to 
the push-and-flop of household robots of the past decade, which will
cause a justifiable reticence on the part of management.

In common, general-purpose computers, I'm sure we can soak 10 or 20
thousand MIPS into the scheduler and window system as if it were 
never there at all ... :^)

--JoSH

ken@cs.rochester.edu (Ken Yap) (04/20/88)

|2) Why not use the simplest sorting algorithms all the time, if CPU is easy
|   to come by?

Because an O(n log n) sort will still handle a million elements
decently, while your quadratic sort will be running till the next
eclipse of the moon, even if you have a Cray. And besides, what's so
complicated about mergesort? I still have problems understanding
shellsort.
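
For illustration, a minimal top-down mergesort in C might look like
this (the names and details are just one sketch, not the only way to
do it):

	#include <stdlib.h>
	#include <string.h>

	static void merge(int *a, int *tmp, size_t lo, size_t mid, size_t hi)
	{
	    size_t i = lo, j = mid, k = lo;

	    while (i < mid && j < hi)
	        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
	    while (i < mid)
	        tmp[k++] = a[i++];
	    while (j < hi)
	        tmp[k++] = a[j++];
	    memcpy(a + lo, tmp + lo, (hi - lo) * sizeof *a);
	}

	static void msort(int *a, int *tmp, size_t lo, size_t hi)
	{
	    size_t mid;

	    if (hi - lo < 2)
	        return;
	    mid = lo + (hi - lo) / 2;
	    msort(a, tmp, lo, mid);         /* sort left half          */
	    msort(a, tmp, mid, hi);         /* sort right half         */
	    merge(a, tmp, lo, mid, hi);     /* merge the sorted halves */
	}

	void sort_ints(int *a, size_t n)    /* O(n log n), O(n) scratch */
	{
	    int *tmp = malloc(n * sizeof *tmp);

	    if (tmp != NULL) {
	        msort(a, tmp, 0, n);
	        free(tmp);
	    }
	}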

	Ken

bzs@BU-CS.BU.EDU (Barry Shein) (04/21/88)

>No.

that might be taking the current "just say no" campaign a little too
literally, don't you think? (that was the full text of your message.)

	-B

peter@sugar.UUCP (Peter da Silva) (04/22/88)

Doesn't anyone else remember a recent toy called "Julie", a talking doll
that handles pattern recognition and contains a 33 MIPS digital signal
processor? One thing that will happen with all those MIPS is fancier toys.
-- 
-- Peter da Silva      `-_-'      ...!hoptoad!academ!uhnix1!sugar!peter
-- "Have you hugged your U wolf today?" ...!bellcore!tness1!sugar!peter
-- Disclaimer: These aren't mere opinions, these are *values*.

820785gm@aucs.UUCP (Andrew MacLeod) (04/23/88)

In article <8804191627.AA01551@martin-den.ARPA> seida%martin-den.ARPA@BUITA.BU.EDU (Steven Seida) writes:

>	
>I am reading the science fiction book "Necromancer" by ??????????  that 
>seems to explore some of the possibilities of computing in the far-

The book in question is actually called Neuromancer, but I don't remember
the author.  Definitely some strangeness in there.

daveb@geac.UUCP (David Collier-Brown) (04/23/88)

In article <8804191303.AA24772@bu-cs.bu.edu> bzs@BU-CS.BU.EDU (Barry Shein) writes:
| Ah, my favorite subject...
| 
| I've been talking for a year or so with vendors like Encore who are
| threatening (?) to deliver "minis" in the range of 1000MIPs or more in
| the near future (12..24 months.) I suspect this might breathe new
| interest into a currently rather boring mini market. One interesting
| question also is what will some of the folks who build mainframes do
| when all this happens (although they retain a big lead in disk i/o
| performance some of that will surely falter also.) I mean, really,
| deliver 10,000MIPS in the same time frame?

  Well, I can see a move away from big, bare, specially-configured
mainframes for TP (Transaction processing) and toward physically
smaller, more powerful machines with more-or-less ordinary
operating systems instead of TP monitors.  Of course, the company I
work for noticed that a **number** of years ago [1], so I'm not
saying anything new.

  A portion of the market needs a medium or large machine, able to
run a few hundred terminals, with a reasonably large and fast set of
disks, so that they can service businesses which are physically
centralized (eg, all the departments of a large library), but may
have relationships with other, distant, businesses (eg, bank
branches dealing with ATMs and clearinghouses).  At the low end,
they can use machines much like current workstations with a couple
of extra, rather dumb, terminals attached.  At the high end, they're
still having to buy machines like the Honeywell DPS-8 (GCOS)
machine. From experience, I'd say that the high-mips minis could get
into the market if they had enough intelligence in inexpensive
front-end processors or front-end machines.

 --dave (and if they buy IBM, they get to regret it) c-b
[1] Geac is a Canadian manufacturer of transaction-processing
    machines, mostly in the library and financial markets. The two
    lines (8000 & 9000) both run fairly normal-looking operating
    systems, and are about as far from CICS and even TPS-8 as you
    can get. Why, we even write operating system code in high-level
    languages and applications in purpose-built ones.  (The
    preceding has been a paid political announcement of the
    we-hate-CICS association (:-))
-- 
 David Collier-Brown.                 {mnetor yunexus utgpu}!geac!daveb
 Geac Computers International Inc.,   |  Computer Science loses its
 350 Steelcase Road,Markham, Ontario, |  memory (if not its mind) 
 CANADA, L3R 1B3 (416) 475-0525 x3279 |  every 6 months.

doug@isishq.UUCP (Doug Thompson) (04/24/88)

 
 UN>From: PJS@grouch.JPL.NASA.GOV (Peter Scott) 
 UN> 
 UN>... least in those fields.  However, it will be interesting to 
 UN>observe the impact of two orders of magnitude improvement in 
 UN>performance for personal computers, and their applications in the 
 UN>home (speech-recognizing vacuum cleaners? Image-processing toasters?) 
 
Yeah, I kinda think you're on to something there. The decline in price 
of computer hardware continues. Micros in the home are coming to have 
more and more useful and usable computing power. 
 
As an example, I am writing this on an IBM AT in my living room. As I 
write, a uucp mailer is running in the background importing news from a 
Vax at the university. But I still have enough memory and a spare modem 
and com port so I can shell out of emacs here and fetch data from one of 
thousands of computers around the world for which I have phone numbers 
and log-on scripts. 
 
This is a home computer. When the task in the background is not 
exchanging mail and news with other computers, it is open for the general 
modem-owning public to call up and read news, upload or download files, or 
whatever.  
 
Again, this is a home computer. While it probably is slightly 
heavier-duty than the average home computer, with 1Mb RAM and 60Mb hard 
disk, it is by no means a remarkable or unusual machine. Perhaps the 
software is remarkable. It is all experimental, beta-test and whatnot, 
with a tendency toward instability and the odd bug -- but -- it has 
completely displaced the TV since newsgroups are so much more 
interesting. And, with access to the library card-catalogue by modem and 
the IPS news service, it is getting to the point of replacing the 
newspaper and it has replaced a lot of research hours in the library. 
 
As mass storage devices continue to decline in price and achieve orders 
of magnitude leaps in capacity, we are rapidly approaching the point 
where the entire information economy could be conducted at a computer 
terminal in the home. Newspapers, books, magazines, television -- all 
can be delivered on data lines, stored on magnetic media, and 
displayed on a graphics monitor. 
 
This is the home entertainment centre of the future. But it is a very 
different sort of home entertainment centre. More than just a recipient 
of a stream of data, as a TV is, it can store, process, sort, and correlate 
that data, and allow you to immediately respond, send and receive mail, 
pass on interesting documents, or fire off a complaint to the Prime 
Minister.  
 
I suspect that the biggest impact computers will have on society as a 
whole in the next 25 years will be in transforming the home and the 
information industry by putting significant computing power in the hands 
of everyone. 
 
It was not very many years ago that Usenet meant a Vax, and hundreds of 
thousands of dollars worth of hardware. Today, it is running on PCs. No 
hardware revolution brought this about. Indeed, the software is all PD 
or shareware. As it is perfected and moves beyond beta, the home usenet 
site may not be all that much of an oddity. 
 
Should be great fun! 
 
Well, 2 hours of news at 2400 baud has arrived, and unbatching has 
begun. In another 15 minutes it'll be done and I can play with all the 
newest stuff -- and this message will be fired back out to the net. 
 
It doesn't take many MIPs; it takes ingenious software and a lot of disk 
space. :-) 
------------------------------------------------------------------------ 
Fido      1:221/162 -- 1:221/0                         280 Phillip St.,   
UUCP:     !watmath!isishq!doug                         Unit B-3-11 
                                                       Waterloo, Ontario 
Bitnet:   fido@water                                   Canada  N2L 3X1 
Internet: doug@isishq.math.waterloo.edu                (519) 746-5022 
------------------------------------------------------------------------ 
  


work@dragos.UUCP (Dragos Ruiu) (04/25/88)

In article <1061@aucs.UUCP>, 820785gm@aucs.UUCP (Andrew MacLeod) writes:
> In article <8804191627.AA01551@martin-den.ARPA> seida%martin-den.ARPA@BUITA.BU.EDU (Steven Seida) writes:
> >I am reading the science fiction book "Necromancer" by ??????????  that 
> >seems to explore some of the possibilities of computing in the far-
> 
> the book in question is actually called Neuromancer...., but i dont remember
> the author. definately some strangeness in there.

 William Gibson wrote "Count Zero", "Neuromancer" and "Burning Chrome". All 
 recommended for anyone interested in computers and fiction. Ask about him in
 alt.cyberpunk.
-- 
Dragos Ruiu   ruiu@dragos.UUCP
        ...alberta!dragos!ruiu   "cat ansi.c | grep -v noalias >proper.c"

kessner@tramp.Colorado.EDU (Eric M. Kessner, K.S.C.) (04/28/88)

In article <1061@aucs.UUCP> 820785gm@aucs.UUCP (Andrew MacLeod) writes:
>In article <8804191627.AA01551@martin-den.ARPA> seida%martin-den.ARPA@BUITA.BU.EDU (Steven Seida) writes:
>
>>	
>>I am reading the science fiction book "Necromancer" by ??????????  that 
>>seems to explore some of the possibilities of computing in the far-
>
>the book in question is actually called Neuromancer...., but i dont remember
>the author. definately some strangeness in there.


It was written by William Gibson, and if you haven't read it, I highly recommend
getting a copy.  It contains some EXTREMELY interesting ideas about the future of
computing, so it's worth picking up even if you don't particularly like SF.

()()()()()()()()()()()()()()()
Eric Kessner                 |  "Oh no!  John's been eaten by rats!"
kessner@tramp.colorado.EDU   |  "You mean he's been 'E-rat-icated'?"
()()()()()()()()()()()()()()()

nelson_p@apollo.uucp (04/28/88)

 One point that has been made on this topic is that we
 are not just technology-bound in getting more use out
 of computers.   We are so far behind using even the 
 *existing* technology that if hardware development were
 to be frozen at its current level, major improvements
 in the use and productivity of computers and related
 technology would continue for years and perhaps decades.

 I've been looking over some Compuserve literature with
 the thought of getting an account.  I noticed they offer
 an on-line encyclopedia, although it's rather limited
 in its capabilities.
 
 I would like an on-line encyclopedia where I could look
 up a topic such as 'Beethoven'.  Of course it would supply
 a biography of him and talk about his impact on music.
 But it would also supply a picture of him.  And perhaps it
 might mention that his 5th Symphony, with its famous 'fate
 knocking' opening, was penned at a time when Europe was in
 turmoil due to the Napoleonic wars.  Perhaps I would like
 to see a map of Napoleon's invasion route to see how close
 Beethoven was to the action at that time.  Perhaps I would
 like to hear the music or get a hard copy of a score.  Much
 other music has been influenced by Beethoven's work, from the
 neoclassical all the way to today's rock, pop, and experimental
 work.  It would be nice to follow this musical path and see
 where it led.  

 There is no technological reason why this could not be done
 today over existing phone lines or with existing PC-class 
 computers.  The limitations are primarily lack of standards,
 lack of software, and lack of having the data organized appropri-
 ately for easy retrieval and cross-referencing.

 Graphics are one example. Say you wanted to put a picture of
 the composer in a window in the corner of your screen.  If we
 allowed a 128x128 square for that we would need 16K pixels.
 It's amazing what a good quality image you can get with even
 two bits per pixel (black, white, 2 shades of grey).  At that 
 resolution it would take 34 seconds to send it at 1200 baud. And
 less if it were run-length encoded.  Line drawings would be even
 easier.  A simple graphics command set consisting of moves,
 draws, text and maybe filled polygons would allow a drawing
 of Beethoven's face or a map of Napoleon's invasion route
 in a few hundred bytes, which could be sent in a few seconds, 
 about the same time it takes to send a few lines of text.
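
 As a rough sketch of the kind of run-length encoding mentioned above,
 2-bit grey levels could be packed into single-byte (run length, value)
 pairs; the format and names here are illustrative assumptions, not any
 actual on-line service's protocol:

	#include <stdio.h>

	/* Encode n pixels (grey levels 0..3) as bytes whose high 6 bits
	   hold a run length (1..63) and whose low 2 bits hold the level.
	   Returns the number of bytes written, so the caller can compare
	   it with the 4096 bytes of the raw 128x128, 2-bit image. */
	long rle_encode(const unsigned char *pix, long n, FILE *out)
	{
	    long i = 0, written = 0;

	    while (i < n) {
	        unsigned char level = pix[i] & 3;
	        int run = 1;

	        while (i + run < n && (pix[i + run] & 3) == level && run < 63)
	            run++;
	        fputc((run << 2) | level, out);
	        written++;
	        i += run;
	    }
	    return written;
	}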

 Sound is another example.  There are any number of ways that
 musical notes could be encoded and sent over a 1200 baud link.  
 The quality of the resulting sound would depend on the sophis-
 tication of the playback hardware the user had.  But certainly
 the ability to make Casio-like sounds, even with chords would
 be easy and not too expensive.
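
 One hedged sketch of such an encoding: a fixed three-byte note event,
 which at roughly 120 bytes per second on a 1200 baud line still allows
 about 40 notes a second.  The format is purely an illustrative
 assumption, not a real standard:

	#include <stdio.h>

	struct note_event {
	    unsigned char pitch;     /* note number, 0..127         */
	    unsigned char duration;  /* length in 1/32nd-note ticks */
	    unsigned char velocity;  /* loudness, 0..127            */
	};

	/* Write one event to the serial link; playback quality depends
	   on whatever synthesis hardware sits at the other end. */
	int send_note(FILE *link, const struct note_event *e)
	{
	    return fwrite(e, sizeof *e, 1, link) == 1 ? 0 : -1;
	}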

 The really critical part is getting all this into the database
 in the first place.  Subjects have to be linked (hypertext-style?)
 so that it is easy to go from one subject to the next.  The 
 graphical information has to be stored in a standard way or in
 a way that can easily be converted to the appropriate output
 format.  One of the problems that outfits like CIS have is that
 they get their data from many different sources and try to reduce
 it to the lowest common denominator, which is ASCII text.

 I'm not saying that overcoming lack of standards or agreeing on 
 how we want things to work is *easy*.  All I'm saying is that,
 until we do, a lot of neat technology will be very under-utilized.

                                       --Peter Nelson

 PS- I noticed in a recent issue of High Technology magazine that
     various phone companies around the country are starting to
     wire up new construction with fiber-optic cable.  This will
     eliminate the bandwidth problems and allow *really* neat sound
     and graphics.  To take advantage of it we'll still need standards,
     though.

 

erict@flatline.UUCP (eric townsend) (04/30/88)

In article <1061@aucs.UUCP>, 820785gm@aucs.UUCP (Andrew MacLeod) writes:
| In article <8804191627.AA01551@martin-den.ARPA> seida%martin-den.ARPA@BUITA.BU.EDU (Steven Seida) writes:
| 
| >	
| >I am reading the science fiction book "Necromancer" by ??????????  that 
| >seems to explore some of the possibilities of computing in the far-
| 
| the book in question is actually called Neuromancer...., but i dont remember
| the author. definately some strangeness in there.


It's _Neuromancer_, by William Gibson.  Other books by Gibson:
_Count Zero_, a sort-of sequel to _Neuromancer_.
_Burning Chrome_, a collection of short stories, some of which were incorporated
into _Neuro_ and _Zero_.

New book on the way out: _Mona_Lisa_Overdrive_  (I've got my copy pre-ordered!)

Gibson is currently working on the first re-write of the script to Alien III.
Movie based on _New_Rose_Hotel_ to start filming this fall.  Script written
by Gibson and John Shirley (I think).  Gibson's actually involved in the
production of this one.

I don't know that _Neuro_ will give as much help in "what to do with all
the MIPS" as it will with "what should we try to build/interface with?"
-- 
Just another journalist with too many spare MIPS...
"The truth of an opinion is part of its utility." -- John Stuart Mill 
J. Eric Townsend ->uunet!nuchat!flatline!erict smail:511Parker#2,Hstn,Tx,77007

tws@beach.cis.ufl.edu (Thomas Sarver) (05/05/88)

In article <3bbda74b.44e6@apollo.uucp> nelson_p@apollo.uucp writes:
|
| One point that has been made on this topic is that we
| are not just technology-bound in getting more use out
| of computers.   We are so far behind using even the 
| *existing* technology that if hardware development were
| to be frozen at its current level, major improvements
| in the use and productivity of computers and related
| technology would continue for years and perhaps decades.

  [He talks about a flexible encyclopedia access program with comments about
   how best to do so over 1200 baud phone lines.]

| There is no technological reason why this could not be done
| today over existing phone lines or with existing PC-class 
| computers.  The limitations are primarily lack of standards,
| lack of software, and lack of having the data organized appropri-
| ately for easy retrieval and cross-referencing.
 
|
| I'm not saying that overcoming lack of standards or agreeing on 
| how we want things to work is *easy*.  All I'm saying is that,
| until we do, a lot of neat technology will be very under-utilized.
|
|                                       --Peter Nelson
|

The main problem is not standards.  Although that is a valid problem, the real
problem is the software bottleneck.  Since the advent of the computer, there
has been what I call the hardware/software cycle.

First, there was a problem
that people thought the computer could solve.  However, the current hardware was
insufficient for the task.  So the hardware was upgraded/updated.  The software
was written for the task and everyone was happy.  Then someone came up with
another task that the current hardware couldn't properly handle, and the cycle
started over again.

However, now we have reached a point where building new hardware is a matter of
waiting for demand.  Designing new hardware is simply filling out a list of
features with some tradeoffs (excepting peripherals like CD-ROM, laser printers,
etc.; they are another story).  It's not quite as easy as this, but once a design
is done, it is done.  Whatever flaws exist are because someone didn't ask for the
right features.

Software is the bottleneck.  What a designer asks for is not necessarily what
the product will become.  Software engineering is still more an art than a
science.  The cost associated with building the software is dwarfed by the
huge cost of maintaining it.  Fixing bugs and making the program do what the
user _Really_ wanted in the first place is the bulk of the expense in the
software industry.  Imagine Apple coming to each Apple owner's house and
applying the latest fix to his Apple ][+.  Once Apple puts another model out the
door, that's it, finito; the only things left are 1) hardware add-ons (probably
already thought out) and 2) *SOFTWARE SUPPORT*!

The hardware/software cycle has broken down because software can no longer keep
up with hardware development.  One route that IBM took was compatibility between
hardware updates.  Software that took a year and a half to develop runs on
hardware that came out yesterday.  Look at any microcomputer and you'll see
that it took about three years for software packages to take full advantage of
the hardware.  Meanwhile, a new model could come out every year (case in point:
Atari, 1982-1985).

Compatibility is one way to lessen the effects of the hardware/software
disparity.  Another is modularity.  In most microcomputers, graphics are built
into the system.  IBM was the first to decide to make graphics a separate card.
This allowed users to upgrade their graphics without buying a new machine.  It
also enabled them, because the newer cards were compatible with the old, to run
their old software on the new cards or buy new software that took advantage of
the new card.

***TURN ON FANCIFUL/FUTURE-LOOKING MODE***
I predict that software will become modular.  Object-oriented programming is the
craze today in the software industry.  It will revolutionize program development
the way that structured programming did more than a decade ago.

With a new graphics board, one would receive a new object definition.  Today it
is called a hardware driver, but in the future its use will be unlimited.
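
A hedged sketch of that idea in C: a capability ships as a table of
function pointers that the host program calls through, so new behavior
plugs in without recompiling the application.  The names here are
illustrative assumptions, not any vendor's actual driver interface.

	#include <stdio.h>

	struct graphics_driver {
	    const char *name;
	    int  (*init)(void);
	    void (*draw_line)(int x0, int y0, int x1, int y1);
	    void (*shutdown)(void);
	};

	/* One concrete "object definition", e.g. supplied on the disk
	   that comes with a new graphics card. */
	static int  ega_init(void) { puts("ega: init"); return 0; }
	static void ega_line(int x0, int y0, int x1, int y1)
	{
	    printf("ega: line %d,%d to %d,%d\n", x0, y0, x1, y1);
	}
	static void ega_shutdown(void) { puts("ega: shutdown"); }

	struct graphics_driver ega_driver =
	    { "EGA", ega_init, ega_line, ega_shutdown };

	/* The application only ever sees the table, never the card. */
	void demo(struct graphics_driver *gd)
	{
	    if (gd->init() == 0) {
	        gd->draw_line(0, 0, 100, 100);
	        gd->shutdown();
	    }
	}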

Imagine having a relational database that does simple queries.  Then imagine
being able to add SQL (a gee-whiz-bang query language).  A good example of a
crude attempt at what I am suggesting is the Lotus 1-2-3 add-ons.  Some add new
macro commands; some add pull-down menus and mouse support.  It really is a
matter of time before developers start committing their livelihoods to object
oriented design/implementation.

***TURN OFF FANCIFUL/FUTURE-LOOKING MODE***

BTW, I am working for the Software Engineering Research Center, and we are
looking at some ways to minimize the bottleneck.  One might say we are trying
to get the machine to do more for us much like hardware designers did for their
field.  For more discussion about the possibilities for object-oriented
languages see:

Bertrand Meyer, "Reusability: The Case for Object-Oriented Design," in IEEE
Software, March 1987, p. 50-64.

+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
But hey, its the best country in the world!
Thomas W. Sarver

"The complexity of a system is proportional to the factorial of its atoms.  One
can only hope to minimize the complexity of the micro-system in which one
finds oneself."
	-TWS

Addendum: "... or migrate to a less complex micro-system."

oster@dewey.soe.berkeley.edu (David Phillip Oster) (05/06/88)

Sorry, but the bottleneck is hardware: it just isn't fast enough. Software
productivity can be dramatically enhanced by using tools like very high
level languages.  (The first complete Ada compiler was written in SETL in
about a week.  However, it took weeks to compile anything.)

The largest programs maintained by a lone programmer are written in lisp
and run on high-end lisp machines, because lisp provides an integrated
environment for managing complexity.  Now augment the language with a
type system that lets you make sophisticated assertions about the intended
behaviour of the objects; add a theorem prover to guarantee that your
assertions are consistent; add a heavily optimizing compiler so that
any program you run often will run fast; add a decent interpreter, so
that any program you are working on can be run without waiting for the
optimizing compiler; add a decent multi-window debugger so you can
find problems and fix them (and if they don't show up 'til late in an
expensive run, you can patch the data and the code on the fly and complete
the compilation); and add package management tools and code inspector tools
so you can find useful work in the mammoth database of existing code.

All this software already exists, and is in the public domain. (Since it
was paid for by DARPA grants.)  Yet, I can't use it: I can't afford a
machine that can run this stuff fast enough that I can beat my own time
just writing everything from scratch in LightSpeed C.

Give me a few orders of magnitude increase in performance, and maybe the
balance will change.

I've already done detailed designs for using about 1000 MIPS just on the
user interface (dual color-TV displays built into a pair of glasses with
inertial tracking, eye-tracking, and binaural sound; gloves with pressure-
sensitive joints and piezo-electric transducers in the fingertips for
simulated pressure output).  You use that hardware to generate separate
left-eye and right-eye 3-D images; the eye-tracking keeps them in place as
the glance and head move.  The gloves let you pick up these simulated
objects and have them be perceptibly "there" to your touch.  Applications:
CAD: just trace what you want in the air; your finger leaves a visible
trace behind it.
Music: just trace the waveforms you want in the air; grab them and bend
them into shape.

Give me the hardware, and I'd have the system up in under a year. 

(By
comparison, it took me 2 weeks to clone Scribe (it took Brian Reid over a
year to write it in the first place), and only a week to clone
MacPaint.  I've written a WYSIWYG music editor in 2 days.  A complete
development system for the 128K Macintosh, consisting of interpreter,
compiler, assembler, screen editor, defStruct package, and object-oriented
programming facility, took me a month, but I had to learn the Mac too,
and 128K left only 60K for my program after I'd left room for the
operating system, screen memory, and buffers of screen memory for pull-down
menus.  I used a cross compiler and telecommunications downloading tools I
wrote.)

Copyright (c) 1988 by David Phillip Oster, All Rights Reserved
--- David Phillip Oster            --When you asked me to live in sin with you
Arpa: oster@dewey.soe.berkeley.edu --I didn't know you meant sloth.
Uucp: {uwvax,decvax,ihnp4}!ucbvax!oster%dewey.soe.berkeley.edu

tada@athena.mit.edu (Michael Zehr) (05/07/88)

I think the problem is much more complicated than "your hardware is
slowing down my software" or "you software isn't using my hardware."
There are distributed processing systems in which the big slowdown on
compute-intensive processes comes from network delays (paging a file, waiting
for data, etc).  On the other hand, if 90% of the time the people on
the system are using the machines as fancy typewriters, then who
cares?  They're wasting computrons (CPU cycles) left and right.

One can say that given a 1 GIPS (gillion instructions per second)
machine you can use up all those computrons with a great interactive
interface that will increase programmer efficiency many times, and
the code they generate will still run fast.  On the other hand, you
could use today's hardware and write better languages that will give
the development speed of a 4th generation language with the run-time
performance of a language like C.  

If we really want to see improvements in the way we use computing
power, then we have to continue to improve hardware (my guess would be
that faster networks are one of the top priorities) while improving
software as well.  The idea of throwing computrons at a user interface
until it's fast enough isn't the solution.

michael j zehr
@insert(standard_disclaimer)

oster@dewey.soe.berkeley.edu (David Phillip Oster) (05/08/88)

In a previous posting I described a workstation that uses simple, well
known algorithms to produce a virtual 3-d (visual, audio, and tactile)
workspace.  The i/o devices required are in existence today, and can be
had for under $1k total.  The only problem is, it needs about 2000 MIPS to
give usable performance.  Once you give me that, I've got a design beyond
that for a workstation that can easily soak up 2,000,000 MIPS. It requires
a small advance in fabrication technology for its main i/o subsystem, and
it is, of course, more speculative: being further out, the design isn't as
solid.

I want to live in a society somewhat more advanced than
this one. I'm putting my efforts into making that happen as quickly as
possible, but I see it as an on-going process, not a _single_ problem to be
solved for all time. I have some programs I want to run that require too many
MIPs for me to get the interactive response I need. Give me that, and I may
_want_ more, but I'll still be better off than I am now, and will still
have the benefit of being able to run those programs.
 
I see these tools as amplifiers for the creative part of the human mind.
Obviously, having good creativity amplifiers makes it easy to be creative,
by definition. Among the uses of creativity is: designing better creativity
amplifiers. Note the positive feedback here. Eric Drexler's book
"Engines of Creation" describes the domain that I'd be building tools in,
once I get the ones I've described here.  (Cad/Modelling systems that
model systems at the molecule level.)
 
Vernor Vinge's book "Marooned in Real-Time" is a novel about a community of
time travellers with a magic "stasis field" that lets them freeze themselves
for years at a time: they have one-way time travel, into the future only.
As the book opens, the positive-feedback process has run to completion;
except for the village of time travellers, humanity as we know it is
completely gone. There are some fascinating, enticing descriptions, told as
flashbacks, of what the world was like in the years on the steepest part of
the exponential creativity curve. Of course, the higher up the curve you go,
the harder it is to write a story comprehensible to _us_ poor unenhanced
people.

(The stasis field is magic in the sense that the physical principles that
underlie it don't percolate throughout the society. Example: what kind of
motors and batteries are the equivalent of muscle in Asimov's robots?
Name three ways that same technology is used in other forms in his novels.
Real-world example: the same physics that gives us laser weapons also gives
us laser disks (CDs).)


Now, if I can't get the MIPS, I can waste my time trying to come up with
tricks to recognize and optimize special cases in algorithms that are
perfectly straightforward in the general case. Just don't try to cast
blame on me by telling me the bottleneck is software. I've got the
software, just sitting on the shelf waiting for the MIPS to run it.

Copyright (c) 1988 by David Phillip Oster, All Rights Reserved
--- David Phillip Oster            --When you asked me to live in sin with you
Arpa: oster@dewey.soe.berkeley.edu --I didn't know you meant sloth.
Uucp: {uwvax,decvax,ihnp4}!ucbvax!oster%dewey.soe.berkeley.edu