[comp.graphics] future graphics performance

eugene@nas.nasa.gov (Eugene N. Miya) (05/16/91)

In article <268@rins.ryukoku.ac.jp> will@rins.ryukoku.ac.jp (will) writes:
Sam Uselton's Teraflops computing/Grand Challenges removed
>From what I have seen and heard (from the people doing this research)
>this technology should be availible to the government and big
>industries by 1997 (and maybe by 1995) and into the home a few years
>later.  As I have been told, the biggest problem now is not making
>teraflop computers, but manufacturing techniques must be updated for
>production and quality control standards must be updated for mass prod.
>The reason cited was that these computers use components that require
>more advanced manufacturing technologies and that current facilities
>must be redesigned to meet these requirements.

A couple of years ago, we started a small "tradition" locally
with the Bay Area ACM/SIGGRAPH.  We had an open dinner meeting
which we called The Sutherland Lecture.  Our first Sutherland
lecturer was Frank Crow, on the topic "Are There Still Ten Unsolved
Problems in Computer Graphics?" after Ivan's famous Datamation paper
(back in the days apparently [before my time] when Datamation was a
respectable magazine).  One of those problems was performance.

In comp.arch a couple of years ago (1988?), I asked for guesses as to
when we would have one sustained TFLOPS of computing.  I keep a tally
of guesses; you are welcome to make your own.  If you would like, I will
put a 97 or a 95 in for you.  I think the earliest guess was
a sustained (not peak) TFLOPS by 1992.  I have several mid-20xx guesses
along with a lot of 95 and 93 guesses.  I also have several NEVER guesses.
The guessers are a motley crew from all backgrounds, including
respectable people from Cray Research and Supercomputer Systems Inc.

	I will take anyone's guess as to when we have 1 sustained TF, BUT
	think about it first.  Judgment day will come and pass 8^).

I hold a 200x guess.  I am convinced we will have a 1 TFLOPS machine
by 2000, but it will be more of a peak machine, so like Vietnam, we
will declare victory and get out. ;^)

I also asked one architect if he could build a 1 TF machine today.
He said sure, for $2 Billion (N copies of his existing machine;
the user, presumably, will know how to program it...).
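The architect's answer is just replication arithmetic, and it can be sketched in a few lines.  The per-machine figures below are illustrative assumptions on my part; only the $2 Billion total comes from the conversation:

```python
# Hypothetical "N copies" costing: reach 1 TFLOPS peak by buying N
# copies of an existing machine.  Per-machine speed and price are
# assumed for illustration, not quoted figures.

target_flops = 1e12          # goal: 1 TFLOPS peak
per_machine_flops = 1e9      # assume each copy peaks at 1 GFLOPS
per_machine_cost = 2e6       # assume $2 million per copy

n_copies = target_flops / per_machine_flops
total_cost = n_copies * per_machine_cost

print(f"{n_copies:.0f} copies, ${total_cost / 1e9:.0f} billion total")
```

Under those assumptions you get 1000 copies at $2 Billion, which says nothing about how anyone programs them as a single machine.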

Now what's interesting about this is that Ivan Sutherland was involved in
all of this.  He started graphics as we know it, he pushed VLSI development,
networks, etc.  I would have hoped that Rick Beach had published the
lecture by now, but he's probably too busy.
The problem is not simply one of manufacturing (though that is one part).
E.g., you don't just take VLSI tools meant for CMOS and use them on GaAs.
We are dealing at the quantum level when we try to build these machines.
Grace Hopper's nanoseconds (those one-foot-long pieces of wire)
are here and now.  Similarly, we have not simply jumped from 2 micron
lines to .5 micron lines.  Our verbiage basically agrees, but I call
this "We need more research; the problems do not scale linearly."

The few machines today strain to sustain 1 GFLOPS (certainly not a serial
GFLOPS).  Wires in these machines must meet critical tolerances.
If you imagine a 1 GFLOPS sequential machine the size of a one-foot cube,
then a 1 TFLOPS sequential machine must shrink to one of Adm. Grace's salt
grains in size (one of her picoseconds).  I hardly call this
a manufacturing problem.  Building equivalent parallel architectures
is still an open research issue.  The ILLIAC didn't answer all the
parallel processor questions.
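The cube-to-salt-grain argument above is just speed-of-light arithmetic, and it checks out.  A rough sketch (order-of-magnitude only, not a machine design):

```python
# Speed-of-light check on the sequential-machine size argument:
# in one clock period a signal can travel at most c * t.  Grace
# Hopper's "nanosecond" is the ~1 foot of wire light crosses in 1 ns.

C = 299_792_458.0  # speed of light in vacuum, m/s

def max_signal_span_m(flops):
    """Farthest a signal can travel during one cycle of a strictly
    sequential machine doing one operation per cycle."""
    cycle_seconds = 1.0 / flops
    return C * cycle_seconds

print(f"1 GFLOPS: {max_signal_span_m(1e9) * 100:.1f} cm per cycle")
print(f"1 TFLOPS: {max_signal_span_m(1e12) * 1000:.2f} mm per cycle")
```

At 1 GFLOPS the span is about 30 cm, roughly the one-foot cube; at 1 TFLOPS it shrinks a thousandfold to about 0.3 mm, which is indeed salt-grain territory.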

>Not to forget that many of these manufacturers have warehouses
>full of new equipment ready to be sold.  Worth billions.
>This is one reason that companies do incremental scaling of
>computer technologies.  To get as much money as possible with
>as small an investment as possible.  It's all "Economics".

This apparently happened with the introduction of the IBM 360/65,
but I do not think you can say this is happening right now.
I think that our architects (as few as we have) are trying their
best to build new general and special purpose hardware.
It is a real art, as anyone at E&S or SGI can tell you.
Holding back on performance only cuts your own throat right now, but
achieving architectural balance isn't easy.  Not everyone is capable of
being a computer architect.

From a graphics perspective, if you need the performance of some
Micro-2000 and it uses some exotic technology which may require you
to program in a functional language (unlike C, Fortran, BASIC, etc.),
you will think long and hard if you have an investment in older code.
But if you need the speed, you will jump in, because if you don't,
your competitors will.

There will always be at least 1-2 companies trying to push the
state of the art in architecture and you can't hold them back.

--eugene miya, NASA Ames Research Center, eugene@orville.nas.nasa.gov
  Resident Cynic, Rock of Ages Home for Retired Hackers
  {uunet,mailrus,other gateways}!ames!eugene