[comp.arch] Processing

franka@mmintl.UUCP (04/15/87)

In article <6208@bu-cs.BU.EDU> bzs@bu-cs.UUCP (Barry Shein) writes:
... that with a hypothesized 50 to 1000 x speed improvement, we may have
more processing power than we know what to do with.

I think he's wrong.  Consider what happens when you add natural language and
voice recognition to the user interface of every program.  When you are
supporting a 10,000 by 10,000 pixel screen in graphics mode.  When large
amounts of real-world knowledge are to be built in to a program to enable
intelligent behavior.

Now, these things require varying degrees of software development; but it
seems likely to me that each will also use up immense amounts of computing
power.  No one has yet tried to develop such things for market, because the
hardware required to support them is not available yet.  (Or if they *have*
tried, they have failed because ...)

This is relevant to the "who needs giga-bytes of memory?" controversy, too.
These kinds of applications will require huge data storage, too.  This is a
general phenomenon; for a given level of functionality, there is a tradeoff
between speed and space, but enhanced functionality requires more speed *and*
more memory.

Frank Adams                           ihnp4!philabs!pwa-b!mmintl!franka
Ashton-Tate          52 Oakland Ave North         E. Hartford, CT 06108

bzs@bu-cs.BU.EDU (Barry Shein) (04/17/87)

Posting-Front-End: GNU Emacs 18.41.4 of Mon Mar 23 1987 on bu-cs (berkeley-unix)



No Frank, you misunderstand me. You say "sure we'll need gigaflops, we'll
need it to do natural language and voice recognition..."

That's my whole point. We don't know how to do all that and I don't
believe (completely, trying not to be too dogmatic here) that it's the
lack of cycles that's holding us back.

I simply mean what I said. That other than the increasingly smaller
percentage of number crunchers out there (smaller because the computer
using population is growing [eg. PCs] and not in the cruncher area) we
won't have applications for the majority of users which will utilize
all those cycles (remember folks, I'm talking like 100MIPs desktops.)

To put it another way, you have no proof (not getting stuffy here)
that what's required to do natural language is more cycles. It seems
plausible, but why aren't there any good software systems running on
Crays (eg, or pick your favorite appropriate high-end box.) I'm just
saying it's not obvious the lacking thing is cycles.

Something to do with hardware innovation requiring linear effort while
software innovation requiring exponential effort, but I babble,
nothing to back that up.

	-Barry Shein, Boston University

kent@xanth.UUCP (04/18/87)

Barry,
	I think a bit more imagination is in order in this discussion.
	If the processing speed is provided cheap enough, it is easy to
	envision home uses for the Teraflop micro with the Terabyte memory
	chip.  ;-)  Examples which come to mind immediately:
	
	Flying your home computer in real time through the Mandelbrot set,
	recomputing at a 60 Hz frame rate on a 1280 by 1024 by 24 bit display,
	2048 iterations deep.  (A back-of-the-envelope estimate of what that
	costs follows the list.)
	
	Robot wars with real time ray traced graphics for the kiddies, same
	resolution (you didn't think I was going to buy two of those things
	at a week's pay apiece, did you! ;-).
	
	Landscape design 101, home instruction course, with fractal rocks,
	texture mapped fence boards, rule grown trees and shrubs, architectural
	database buildings, all ray traced in real time as the user moves stuff
	about in search of a passing grade.
	
	Stock investment with a quick AI rule based search of all the stocks on
	all the markets, based on daily highs and lows for each for the last
	five years, looking for a place to put the family nest egg until
	tomorrow night.
	
	A full inverted text database search of all the periodicals published
	in all areas of science this century, for a high school "civics of
	science" project.
	
	Real time scanning all 2000 available TV channels, looking for a show
	with redeeming social value, using another AI program with a built in
	optimistic streak.  ;-)

	Home weather prediction using data from the new 1 km worldwide grid.

	And the list goes on.  The point being that work expands to consume
	the resources available for it; always has, always will.  ;-)
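
	For the Mandelbrot item, here is a back-of-the-envelope sketch (my
	assumptions: every pixel runs the full 2048 iterations -- the worst
	case -- and each iteration costs roughly 8 floating point operations):

	    /* rough flop count for the real-time Mandelbrot flythrough */
	    #include <stdio.h>
	    int main(void)
	    {
	        double pixels = 1280.0 * 1024.0;
	        double flops  = pixels * 60.0 * 2048.0 * 8.0;
	        printf("%.2g flops/sec\n", flops);    /* about 1.3e12 */
	        return 0;
	    }

	which lands right around a Teraflop, so the figure above is not as
	far off as it sounds.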
Kent.

--
The Contradictor	Member HUP (Happily Unemployed Programmers)    // Also
								      // A
Back at ODU to learn how to program better (after 25 years!)	  \\ // Happy
								   \// Amigan!
UUCP  :  kent@xanth.UUCP   or    ...{sun,cbosgd,harvard}!xanth!kent
CSNET :  kent@odu.csnet    ARPA  :  kent@xanth.cs.odu.edu
Voice :  (804) 587-7760    USnail:  P.O. Box 1559, Norfolk, Va 23501-1559

Copyright 1987 Kent Paul Dolan.			How about if we keep the human
All Rights Reserved.  Author grants free	race around long enough to see
retransmission rights, recursively only.	a bit more of the universe?

tpmsph@ecsvax.UUCP (Thomas P. Morris) (04/18/87)

In article <6654@bu-cs.BU.EDU>, bzs@bu-cs.BU.EDU (Barry Shein) writes:
> To put it another way, you have no proof (not getting stuffy here)
> that what's required to do natural language is more cycles. It seems
> plausible, but why aren't there any good software systems running on
> Crays (eg, or pick your favorite appropriate high-end box.) I'm just
> saying it's not obvious the lacking thing is cycles.

Barry, not being an expert on Natural Language Processing or Voice Recognition,
I agree that it _may_ not be the lack of cycles (or memory, for that matter) 
which is holding back these technologies. However, it wouldn't seem very cost-
effective to even do research on these topics using a Cray, or any other 
favorite high-end box (perhaps a Connection Machine? ;-)). To make the 
_software technology_  widely available, I'd think that a desk-top 100MIP,
4Gb machine (say, $10K, 5yrs from now :-> ) would make the research and the 
realization of these technologies more cost effective.

johnson@uiucdcsp.cs.uiuc.edu (04/18/87)

There are a number of programming styles that result in programs that
are small, easy to understand, and slow.  Logic programming and constraint
programming both fit into this category, and functional programming
probably does too, except that in the last few years, we have seen a 
number of efficient implementations of functional programming languages.
My favorite language is Smalltalk, and its sudden popularity is due in
no small part to the increase in speed  of cheap processors.  However,
it is still too slow for many applications.  Other interesting languages,
like SETL, suffer from the same problem.

What about 3D graphics?  I've used an IRIS workstation, and I'm sure that
you could sell millions of them if they were only a couple of thousand
dollars apiece.  100 MIPS is probably about right for that kind of
performance.

We need fast processors just to search large databases.  Naturally, you
can keep lots of indexes, but that takes lots of work, and some things
are hard to index.  In general, writing efficient programs is hard work.
It is easier to write clear programs and not worry about efficiency.  We
are still a long way from there.  It won't be long until 100 MIPS will
seem slow.

gerryg@laidbak.UUCP (04/19/87)

In article <6654@bu-cs.BU.EDU> bzs@bu-cs.UUCP writes:
>I simply mean what I said. That other than the increasingly smaller
>percentage of number crunchers out there (smaller because the computer
>using population is growing [eg. PCs] and not in the cruncher area) we
>won't have applications for the majority of users which will utilize
>all those cycles (remember folks, I'm talking like 100MIPs desktops.)

>To put it another way, you have no proof (not getting stuffy here)
>that what's required to do natural language is more cycles. It seems
>plausible, but why aren't there any good software systems running on
>Crays (eg, or pick your favorite appropriate high-end box.) I'm just
>saying it's not obvious the lacking thing is cycles.

If you are saying that there are current machines which have enough
cycles to do natural language processing, but we need some major
breakthroughs in software, then I agree completely.  Of course by the time
that happens, we will probably have PCs with as much power as present-day
supercomputers, or more.  I suspect architectural advances will
also play an important role; for example, connection machines, etc.

I really like the idea of connection machines.  In a few years, they will
probably be commodity products like DRAM's are today.  Which reminds me,
I have a draft of Hillis' connection machine paper, and I am interested
in learning about more current work.  If anyone can give me pointers to
papers, etc. I'd appreciate it.  I am especially interested in how the
communication chips work, and algorithms, programming, languages, etc.

Thanks in advance.

gerry gleason

jsp@b.gp.cs.cmu.edu (John Pieper) (04/19/87)

>This is relevant to the "who needs giga-bytes of memory?" controversy, too.
>These kinds of applications will require huge data storage, too.  This is a
>general phenomenon; for a given level of functionality, there is a tradeoff
>between speed and space, but enhanced functionality requires more speed *and*
>more memory.

Yes, and you are arguing for massive parallelism, for "active memories", in
short, for something like the Connection Machine. A hyperactive RISC machine
probably won't be any better at these problems than the CRAY. The complexity
is too great. Besides, look at the most obvious problem: why pay megabucks
for a big, fast, completely underutilized memory? Balancing computation and
I/O is not enough; we must balance these with the amount of memory available.

bzs@bu-cs.UUCP (04/19/87)

Kent,

I still claim almost everything in your list of things we -could- be
doing if we only had extremely high performance PCs fall into one of
the following categories:

	1. Relatively trite (calculate the Mandelbrot set in my
	living room in real-time? ok, maybe someone out there is
	hot to do that, but I wouldn't try to build a market on it.)

	2. Not obvious that hardware technology is the lacking
	component (all the AI examples you give, maybe they would
	run on a 10MIPs box just fine? How can you know? Why don't
	we run these applications today on expensive fast iron? Are
	the owners of said iron just not interested?)

	3. Possibly more in the realm of other technologies, although
	in the right ball-park (eg. scanning huge data bases requires
	fast I/O as much if not more than CPU power, large numbers of
	cooperating CPUs might be more useful in such an endeavor
	than single massive CPUs, similar argument for image processing.)
	The problem, of course, is that we don't know how to use multiple
	CPU systems very well, so we go for the iron.

My contention is simply that sheer CPU power has become vastly
over-rated as a cure-all, and people are starting to realize this.

Again, I am speaking for the vast majority of the users of computers,
I fully accept there is a small percentage for whom no iron will be
fast enough in the foreseeable future. I also do not think we have
reached a plateau quite yet, but we are rapidly approaching some point
where further increases (for most users) will be superfluous w/o
significant software and other technology advances, maybe not even then.

	-Barry Shein, Boston University

webber@klinzhai.UUCP (04/19/87)

In article <6741@bu-cs.BU.EDU>, bzs@bu-cs.BU.EDU (Barry Shein) writes:
> 	3. Possibly more in the realm of other technologies, although
> 	in the right ball-park (eg. scanning huge data bases requires
> 	fast I/O as much if not more than CPU power, large numbers of
> 	cooperating CPUs might be more useful in such an endeavor
> 	than single massive CPUs, similar argument for image processing.)
> 	The problem, of course, is that we don't know how to use multiple
> 	CPU systems very well, so we go for the iron.

There is plenty of parallelism in a single cpu, `wires' wait for no
one :-) .  Off board is always slower than on board -- off chip slower
than on chip.  If cpu's were fast enough, one would probably want to
start using smarter management of the interchip communication channels.

> My contention is simply that sheer CPU power has become vastly
> over-rated as a cure-all, and people are starting to realize this.
> 
> Again, I am speaking for the vast majority of the users of computers,
> I fully accept there is a small percentage for whom no iron will be
> fast enough in the foreseeable future. I also do not think we have
> reached a plateau quite yet, but we are rapidly approaching some point
> where further increases (for most users) will be superfluous w/o
> significant software and other technology advances, maybe not even then.

Well, computers in general are irrelevant to most of humanity.
However, fast cpu's can offer things of interest to the non-computer
specialist.  For example, current computer chess games are really bad
in the endgame -- there is a fair amount of indication that fast hardware
helps more than clever programming (although maybe no one has been clever
enough yet :-).  Fast cpu's could generate interactive universes --
roaming around in high-dimensional surface graphs would give much
insight into the models of scientists and engineers.  If cpu's were
fast enough, everything you sent to disk could go encrypted -- adding
significantly to the overall security of computer users (of course,
fast cpu's will weaken the security of slow cpu users -- which sounds
like a classic example of a market bootstrapping itself).  Faster
cpu's should mean greater reliability as things like array-bounds and
pointer types can get checked at execution time.  Fast cpu's can run
algorithms to squeeze the maximum bandwidth out of slow i/o channels.

Of course, people who like to manipulate computers directly will win
even bigger.  How would you like your compiler to be able to run
global error correction algorithms instead of local algorithms that go
crazy after the first 4 errors?  How would you like to be able to use
arbitrary context-free grammars to parse your favourite personal
language rather than have to squeeze everything into LALR(1)?  How
would you like to have the computer manipulate numbers as flexibly as
you do by hand (64bit arithmetic? -- give me bignums -- roundoff
errors are a pain)?  How about cpu's fast enough to allow your screen editor 
to compute the optimal update to your terminal screen? or fast enough
to globally anti-alias output on your graphics device?  How would you
like your computer to be fast enough that you could run extensive code
optimization algorithms that generate code fast enough so that on your
machine it runs fast enough to allow you to run extensive code
optimization algorithms ...
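
To make the bignum gripe concrete -- nothing deep here, just the usual
64-bit floating point surprises that exact arithmetic would make go away:

    #include <stdio.h>
    int main(void)
    {
        double big = 1e17;    /* well past the 53-bit mantissa of a double */
        double sum = 0.0;
        int i;
        /* the 1.0 simply vanishes into the roundoff */
        printf("(1e17 + 1) - 1e17 = %g\n", (big + 1.0) - big);
        /* and ten dimes don't quite make a dollar */
        for (i = 0; i < 10; i++)
            sum += 0.1;
        printf("0.1 summed ten times == 1.0?  %s\n", sum == 1.0 ? "yes" : "no");
        return 0;
    }

which prints 0 for the first and "no" for the second.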

Well there is much more, but I think you get the idea.

--------------- BOB (webber@aramis.rutgers.edu ;  BACKBONE!topaz!webber)

jtr485@umich.UUCP (Johnathan Tainter) (04/19/87)

In article <6654@bu-cs.BU.EDU>, bzs@bu-cs.UUCP writes:
> 
> No Frank, you misunderstand me. You say "sure we'll need gigaflops, we'll
> need it to do natural language and voice recognition..."
> 
> That's my whole point. We don't know how to do all that and I don't
> believe (completely, trying not to be too dogmatic here) that it's the
> lack of cycles that's holding us back.
Nowhere in his posting did he say that limited cycles were the ONLY thing
holding back these applications.

> I simply mean what I said. That other than the increasingly smaller
> percentage of number crunchers out there (smaller because the computer
> using population is growing [eg. PCs] and not in the cruncher area) we
> won't have applications for the majority of users which will utilize
> all those cycles (remember folks, I'm talking like 100MIPs desktops.)

Even if these applications don't need 100MIPs for themselves, when you have
a voice synthesizer,
a voice input system,
a background music generator,
a video digitizer input system,
a posture monitoring video digitizer system,
an evolving backdrop for your desktop metaphor
and your application running at once you are going to eat mucho processor time.

But, you might say, "No one would do that!"  And I say, no one will do that
with what we can offer today (i.e. put in lots of special hardware).  But give
them 100MIPS to work with and find out just how much people will do.  Also
find out how much people will demand once they know it can be done.  There
are already people clamoring for voice input systems, and those are still
in the toy phase.

There is one more field where mucho processor in a small package is going to
be required.  Robots.  Classic SF style robots, not the misnamed things they
use in factories.  Also prostheses.

Undoubtedly you have heard the axiom
"Any program will expand to fill all available space.".  If you take this for
the obvious analogy to gases then we have a long way to go before the
distribution of programs throughout the available processing space is sparse.
And until we get it sparse we will never get computers to be another
unremarked tool like a doorknob.

Until we eliminate the need for programmers and software designers/engineers
(like me) we have not taken the computer far enough.
No man is doing his job properly unless he is working to make himself obsolete.

> 	-Barry Shein, Boston University
--j.a.tainter

bzs@bu-cs.BU.EDU (Barry Shein) (04/23/87)

Posting-Front-End: GNU Emacs 18.41.4 of Mon Mar 23 1987 on bu-cs (berkeley-unix)



From: jtr485@umich.UUCP (Johnathan Tainter)

First you quote me...(in re natural language processing)
>> That's my whole point. We don't know how to do all that and I don't
>> believe (completely, trying not to be too dogmatic here) that it's the
>> lack of cycles that's holding us back.

Then you say...
>There is one more field where mucho processor in a small package is going to
>be required.  Robots.  Classic SF style robots, not the misnamed things they
>use in factories.  Also prostheses.

Sigh...you really are just going to ignore everything I am saying and
go scramble for another identical example. Do you know of any SF style
robots which are currently not able to be put into production due to a
lack of cycles? Any prostheses? Any that work slowly or off of a
ridiculously (physically) huge machine for the application (eg. a
prosthesis being driven by a Cray-2)?

No. So this is exactly the same point I answered in your previous
quote of my text about natural language processing.  You don't *know*
that SF style robots or prostheses will take enormous amounts of MIPs,
you just believe it to be true for some reason without having ever
seen a single example. Maybe you're right, we'll just have to wait
and see (how long?)

>Undoubtedly you have heard the axiom
>"Any program will expand to fill all available space."

Yes, but that is not an axiom, it is a shibboleth. An axiom is
something you can prove. You can't even demonstrate one common example
given the next 5 years of processor development. Yes, we have seen it
in the past and even in the present. But it's not obvious that this
extrapolates into the future in a situation like this.

My favorite example is we saw automobiles in the beginning of the
century go from (max) 10MPH to 30MPH to 100MPH. But after that first
(ca.) 30 years of development extrapolating the curve became
ridiculous. The cost of building a car and designing a road system
which would transport the average person at 300MPH became ridiculously
out of reach. Sure, there's a flaw here in that lots of MIPs won't
endanger your life (maybe, I could argue that one also, but it's more
socio-technical) but the point where you cross over beyond
cost/benefit might exist in a similar fashion. And maybe some day
we will build 300MPH passenger cars, *maybe*.

You'll get into more and more expensive technology to build this
1000MIP desktop and there just won't be enough of a market for it.
Your potential customers will say "gee, I dunno, I can't keep my
100MIP PC you sold me last year busy, you have any applications to
justify the cost of this new box?" And the answer may be "no". (don't
quibble the numbers, insert 50MIPs and 100MIPs in there.)

All I am saying is, not that we will never, or that no one will ever
need huge MIPs. I am simply saying that we will probably run out of
useful applications to justify them IN MOST CASES. If it takes
1000MIPs to run an SF robot we probably will have had 1000MIPs for 20
years before anyone demonstrates the need. Will you still be in business?

It's really not all that subtle, is it? Or am I just really riling the
more parochial out there?

Perhaps the problem is that people tend to confuse quantity with
quality, anyone remember the "cup and a half of flavor" ads?

	-Barry Shein


pase@ogcvax.UUCP (04/23/87)

In article <bu-cs.6872> bzs@bu-cs.BU.EDU (Barry Shein) writes:
> [...]
>Yes, but that is not an axiom, it is a shibboleth. An axiom is
>something you can prove.

I beg your pardon?  According to Webster (1942 ed.)

ax'iom (ak'si.um), n. [From L., fr. Gr. *axioma*, fr. *axioun* to think worthy,
fr. *axios* worthy.]  1. An accepted maxim.  2. *Logic & Math.*  A statement of
a self-evident truth;  thus, the statement that the whole is greater than any
of its parts is an *axiom*.  3. An established principle which is universally
received; as, the *axioms* of science.
-- 
Doug Pase   --   ...ucbvax!tektronix!ogcvax!pase  or  pase@Oregon-Grad (CSNet)

ram@wb1.cs.cmu.edu.UUCP (04/23/87)

As to "what are we going to do with all those cycles?", I find the "10 mips
is enough" arguments to be pretty unconvincing.

The "cycles don't limit X, we just don't know how to do X at all" argument
is somewhat flawed in that it ignores the possibility that there may be
fairly easy brute-force solutions that have been ignored due to
computational infeasibility.  It is true that such an algorithm, once
developed, could run slowly on generally available hardware, or less slowly
on today's supercomputers.  This argument ignores the problem of getting the
software developed in the first place.

Where are all the researchers who have unlimited supercomputer access to
develop these algorithms in real time?  Consider that the algorithms might
be initially developed in a form that is several orders of magnitude slower
than eventual usable versions.  This means that your researcher with his
personal supercomputer would still find development painfully slow.

Hans Moravec (who works on mobile robots at CMU) gave a talk in which he
argued fairly convincingly that massive cycles make lots of robotic problems
simpler. 

He had an interesting slide which plotted the log of data processing
capacity on the Y axis and the log of data storage capacity on the X axis.
He then plotted the capacities of various animals and computers.
Interestingly they all fell on about the same line, meaning that the ratio
of storage capacity to processing capacity is roughly constant and is
similar between computers and animals.  The bad news for A.I. and robotics
comes in when you notice that a ~1mip computer is about equal to a
housefly...  I believe that the storage and processing capacity of a human
brain was estimated to be three orders of magnitude greater.

He also plotted in a linear time scale along the bottom.  He included data
points for electromechanical and mechanical data processing and calculating
machines, and showed that the exponential trend predated electronic
computers, going back before the turn of the century.

In order to demonstrate a way in which having lots of cycles could make life
easier, he discussed ways of representing a robot's "mental map" in some
detail.  These are data structures used to answer the question "Is there
anything there?" and "How can I get from here to there without running into
anything?".  Initial attempts used geometric models, representing objects as
polygons and paths as lines.  These methods weren't too successful because
they can't deal with uncertainty very well; the results of the robot's
sensors tend to have a great deal of uncertainty.

A more successful but computationally intensive method involves breaking
space up into a grid and representing the certainty of an object being in
each location.  This representation is very good at dealing with sources of
uncertainty since you can combine the results of various inputs at each
location and can also apply transformations to smear out the probabilities
to represent things such as uncertainty in how much you have moved.

A successful path plotting algorithm is based on this representation.  It
basically looks for the "water running downhill" path, if the certainty
of an obstruction is considered to be height.
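
A minimal sketch of the idea, as I understand it from the talk (this is my
own reconstruction, not Moravec's code; the grid size, the update rule, and
the absence of a goal term are all simplifications of my own):

    #include <stdio.h>

    #define N 64                 /* grid cells per side -- arbitrary */
    static double occ[N][N];     /* estimated certainty of an obstruction */

    /* Fold one sensor reading for a cell into the map.  A crude weighted
       average stands in for whatever evidence-combining rule is really
       used; the point is that uncertainty is handled per cell. */
    void fold_reading(int x, int y, double reading, double weight)
    {
        occ[x][y] = (1.0 - weight) * occ[x][y] + weight * reading;
    }

    /* One step of the "water running downhill" planner: treat certainty
       of obstruction as height and move to the lowest of the four
       neighbors.  (A real planner would fold in distance to the goal as
       well, so the water runs somewhere useful.) */
    void downhill_step(int *x, int *y)
    {
        static const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
        int i, bx = *x, by = *y;
        for (i = 0; i < 4; i++) {
            int nx = *x + dx[i], ny = *y + dy[i];
            if (nx < 0 || nx >= N || ny < 0 || ny >= N)
                continue;
            if (occ[nx][ny] < occ[bx][by]) {
                bx = nx;
                by = ny;
            }
        }
        *x = bx;
        *y = by;
    }

    int main(void)
    {
        int x = 0, y = 0;
        fold_reading(0, 0, 1.0, 0.5);  /* a reading raises the "height" here */
        downhill_step(&x, &y);
        printf("robot now at (%d,%d)\n", x, y);
        return 0;
    }

Smearing the probabilities to model motion uncertainty is then just a blur
pass over the grid, and every cell updates independently -- which is where
the parallelism comes from.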

This algorithm also seems to be "embarrassingly parallel", which is a good
thing since we will probably be running into diminishing returns on
uniprocessors pretty soon.

Comparisons with automobiles (the classical mature technology) are spurious.
In fact it seems that automobiles are a qualitatively different kind of
technology.  In the entire course of automobile development we have only
seen one or two orders of magnitude improvement.  I can think of a few
essential differences:
 -- Physical limits.  The 60,000 mph automobile simply isn't possible.  If
    you made one, it wouldn't be an automobile at all, it would be a 
    spacecraft, which brings us to:
 -- Definition.  The concept of an automobile is much less flexible than
    that of a computer.  An automobile is defined by its design (ground 
    vehicle with ~4 wheels that carries a few passengers) rather than 
    what it does (human transportation).  In contrast, almost any
    information processing device is called a computer.  Some of the 
    parallel architectures being considered seem more different from an IBM
    PC than a 747 is from a VW.  [500 people at 700 mph is 350,000
    person-miles per hour, as opposed to perhaps 240 for a VW.]
 -- External limitations, in particular scale.  An automobile is required to
    carry people, and they are rather inflexible.  If transportation
    designers could miniaturize people and their environment, redesigning
    them to run in liquid helium, then much more impressive improvements in
    transportation might be attainable.

The use of "speed" to refer to information processing capacity also
introduces a compelling but largely spurious analogy with automobiles.  A
faster automobile is harder to use and only gets you there faster.  In
contrast, a faster computer probably won't respond to interactions any
faster; instead it will be easier to use and will let you do things that
simply weren't feasible before.

  Rob

bjorn@alberta.UUCP (04/24/87)

It seems to me like carrying water into an already flooded cellar
to post this reply, but the sentence about axioms is what really
brought the water level to the ground floor.

In article <6872@bu-cs.BU.EDU>, bzs@bu-cs.BU.EDU (Barry Shein) writes:
>>Undoubtedly you have heard the axiom
>>"Any program will expand to fill all available space."
> 
>Yes, but that is not an axiom, it is a shibboleth. An axiom is
>something you can prove.

Please, please!?!  We've seen enough perversion of perfectly
reasonable CS concepts by the PC people in years gone by (not
to mention re-inventing each and every shape of wheel in
existence), let's not start on mathematics (or general use
English for that matter) already.  An axiom is a given even
though mathematicians do not select them frivolously and
sometimes some pretty astounding results have been proven
equivalent to specific axioms for particular theories
(Axiom of choice == Zorn's lemma!  [there are more
results that are "==" to these] anyone? [wrong group!!]).

> My favorite example is we saw automobiles in the beginning of the
> century go from (max) 10MPH to 30MPH to 100MPH. But after that first
> (ca.) 30 years of development extrapolating the curve became
> ridiculous.

This is a ridiculous example, as you note later on (wonder why
it's your favorite).  The whole point is that there are some
fairly well entrenched physical laws (barriers) to contend with.
This analogy, favored as it may be, is therefore less than useless.
It's not even true, for indeed we have gone beyond x mph autos
long ago with those devices that are commonly referred to as airliners,
bullet trains, etc.

> The cost of building a car and designing a road system
> which would transport the average person at 300MPH became ridiculously
> out of reach.

See above.

> Sure, there's a flaw here in that lots of MIPs won't
> endanger your life (maybe, I could argue that one also, but it's more
> socio-technical) but the point where you cross over beyond
> cost/benefit might exist in a similar fashion. And maybe some day
> we will build 300MPH passenger cars, *maybe*.

The crucial term in the above is "cost/benefit".  I don't think
we have any argument with a one-man, non-existent market, although
I must say that taking a flight through a Mandelbrot set in real
time is not my fantasy for an ideal Saturday night.

> You'll get into more and more expensive technology to build this
> 1000MIP desktop and there just won't be enough of a market for it.

What we have continually (so far) is a see-saw of processor
and memory technology.  It wasn't far back that you didn't need any
tricky or expensive designs to get away with 0 wait state memories
in typical state of the art microprocessor systems, and not long
before that you had to have cache, etc. to keep your processor
busy.  Looks like we're stuck with some pretty expensive memory
system designs for the next little while, if we want to squeeze
out every last ounce of performance from a processor, that is.

> Your potential customers will say "gee, I dunno, I can't keep my
> 100MIP PC you sold me last year busy, you have any applications to
> justify the cost of this new box?" And the answer may be "no". (don't
> quibble the numbers, insert 50MIPs and 100MIPs in there.)

Have you seen how OS's are being implemented these days?
Did you ever hear about dividing the IQ of the dumbest member of
a committee by the number of people on said committee?  If
recent experience is any indication, OS's are going to take
a good slice of whatever cycles are present.  Prime opportunity
for slick OS fellows, I'd say.

> All I am saying is, not that we will never, or that no one will ever
> need huge MIPs. I am simply saying that we will probably run out of
> useful applications to justify them IN MOST CASES. If it takes
> 1000MIPs to run an SF robot we probably will have had 1000MIPs for 20
> years before anyone demonstrates the need. Will you still be in business?

Yes, and I'm just nagging when I bitch (silently and privately)
about the way some people waste the cycles in their Suns drawing
frivolous pictures 24 hours a day, such that I can only get 80-90%
cpu out of the same Suns instead of the 10000% I really want.

> It's really not all that subtle, is it? Or am I just really riling the
> more parochial out there?

No it's not subtle at all.  Give me and thousands of others
all the cycles you have left over (preferably all at once
mind you) and we'll be as flushed as cooked lobsters from our
good fortune.  I have plenty of applications that can use
absolutely any amount of cycles that you give me, and no
they're not necessarily graphics, NLP, seismic or weather [or
other PDE stuff] related.

> Perhaps the problem is that people tend to confuse quantity with
> quality, anyone remember the "cup and a half of flavor" ads?

No, was that on TV by any chance?  No wonder I missed it.


			Bjorn R. Bjornsson
			{ubc-vision,ihnp4,mnetor}!alberta!bjorn

bzs@bu-cs.BU.EDU (Barry Shein) (04/25/87)

Posting-Front-End: GNU Emacs 18.41.4 of Mon Mar 23 1987 on bu-cs (berkeley-unix)



>In article <6872@bu-cs.BU.EDU>, bzs@bu-cs.BU.EDU (Barry Shein) writes:
>>>Undoubtedly you have heard the axiom
>>>"Any program will expand to fill all available space."
>> 
>>Yes, but that is not an axiom, it is a shibboleth. An axiom is
>>something you can prove.
>
>Please, please!?!  We've seen enough perversion of perfectly
>reasonable CS concepts by the PC people in years gone by (not
>to mention re-inventing each and every shape of wheel in
>existence), let's not start on mathematics (or general use
>English for that matter) already.

OK, ok, I screwed that up. An axiom is something you take as true,
a given, tautological or otherwise self-evident.

Does this mean everyone complaining about that agrees that "any
program will expand to fill all available space" is an axiom? Or is it
a shibboleth as I claimed?

Or is everyone just jerking off in public?

Maybe conversations on these lists would be more productive if people
would try to address the issues with substantive debate instead of
trying to point out minor slip-ups of no real consequence to the
overall argument.

I think my point was clear enough, that statement is not an axiom in
any sense of the word (my misuse or your correction), it's a homily, a
shibboleth, actually a humorous joke called "Parkinson's Law" that's
analogous to Murphy's Law and many other amusing self-flagellations
that engineers use to try to laugh at their errors. It has no place in
a serious argument, it sheds no light on anything.

	-Barry Shein, Boston University

franka@mntgfx.MENTOR.COM (Frank A. Adrian) (04/28/87)

In article <6872@bu-cs.BU.EDU> bzs@bu-cs.BU.EDU (Barry Shein) writes:
>My favorite example is we saw automobiles in the beginning of the
>century go from (max) 10MPH to 30MPH to 100MPH. But after that first
>(ca.) 30 years of development extrapolating the curve became
>ridiculous. The cost of building a car and designing a road system
>which would transport the average person at 300MPH became ridiculously
>out of reach. Sure, there's a flaw here in that lots of MIPs won't
>endanger your life (maybe, I could argue that one also, but it's more
>socio-technical) but the point where you cross over beyond
>cost/benefit might exist in a similar fashion. And maybe some day
>we will build 300MPH passenger cars, *maybe*.
>
>	-Barry Shein, Boston University

I agree completely with the above argument.  Let's talk about constraints as
they apply to I/O bandwidth.  Let's postulate a machine with 4 Tbytes of
memory and a GAF (Gawd Awful Fast) CPU with unlimited bandwidth.  Let's look
at the new bottleneck in the system.  Here I am with a (small :-) 2Tb object
(say something dumb like an atmospheric model (which someone said he wanted
to do at home with data fed in from a worldwide sampling network)), and I
want to send it to a friend.  Let's look at some shirtsleeve calculations.
Let's see how long I have to wait to send this data at different baud rates:

baud    time
----    ----
9600    1.7Gs    (this is GIGA, folks)
50K     320Ms    (getting better, but still have to wait a while)
50M     320Ks    (down to about 9 hours)
50G     320s.    (right ballpark)

OK, now we're talking about the full effective bandwidth of a good size
communications satellite (using several parallel channels), and it still
takes ~5 minutes to dump half your physical memory.  Now, I don't think we
have anywhere near the communications capacity to support all of the I/O
bandwidth we are going to need, and that's the infrastructure which needs to be in
place before we even get enough data to need these sizes of memories and CPU
speeds.  Also, if 4 Tbytes is a physical memory, what's a storage device
going to look like?  Electron beams fired just outside the event horizon of
quantum black holes to form storage loops?  Gives a whole new meaning to the
phrase "spin-up".  Just food for thought...

Frank Adrian
Mentor Graphics, Inc.

franka@mmintl.UUCP (Frank Adams) (04/29/87)

In article <6654@bu-cs.BU.EDU> bzs@bu-cs.BU.EDU (Barry Shein) writes:
>No Frank, you misunderstand me. You say "sure we'll need gigaflops, we'll
>need it to do natural language and voice recognition..."
>
>That's my whole point. We don't know how to do all that and I don't
>believe (completely, trying not to be too dogmatic here) that it's the
>lack of cycles that's holding us back.
>...
>To put it another way, you have no proof (not getting stuffy here)
>that what's required to do natural language is more cycles. It seems
>plausible, but why aren't there any good software systems running on
>Crays (eg, or pick your favorite appropriate high-end box.) I'm just
>saying it's not obvious the lacking thing is cycles.

We can leave the Crays alone; those machines' time is too valuable to waste
on making their user interface more friendly, especially since the kind of
applications they are used for are not interface intensive anyhow.

There are natural language database systems on mainframes.  My impression is
that they suffer about equally from lack of computing power and lack of
software capability.  They aren't half bad, though.  (By the way, this
opinion is based on seeing canned demos, but no real hands on experience, so
take it for what it's worth.)

It is also worth noting that in many cases, *either* more sophisticated
software or more powerful machines will get you what you want.  For example,
I think it is clear that there are adequate pattern recognition programs
(voice or graphics) which don't run fast enough on hardware with a price low
enough to be feasible.  We might be able to do better with better software;
but we can certainly use what we have now if the hardware is two orders of
magnitude faster for the same price.

Frank Adams                           ihnp4!philabs!pwa-b!mmintl!franka
Ashton-Tate          52 Oakland Ave North         E. Hartford, CT 06108

pase@ogcvax.UUCP (04/30/87)

In article <franka.619> franka@mntgfx.UUCP (Frank A. Adrian) writes:
-[...]
-Let's see how long I have to wait to send this data at different baud rates:
-
-baud    time
-----    ----
-9600    1.7Gs    (this is GIGA, folks)
-50K     320Ms    (getting better, but still have to wait a while)
-50M     320Ks    (down to about 9 hours)
-50G     320s.    (right ballpark)
-
- [...]
-
-Frank Adrian
-Mentor Graphics, Inc.

Don't forget the box of optical disks sent via some neighbor kid on a bicycle
(or Federal Express, etc.).
-- 
Doug Pase   --   ...ucbvax!tektronix!ogcvax!pase  or  pase@Oregon-Grad (CSNet)

eugene@pioneer.arpa (Eugene Miya N.) (05/04/87)

In article <2119@mmintl.UUCP> Frank Adams writes:

>We can leave the Crays alone; those machines' time is too valuable to waste
>on making their user interface more friendly, especially since the kind of
>applications they are used for are not interface intensive anyhow.

WRONG....;-)!

>There are natural language database systems on mainframes.
>
>Frank Adams                           ihnp4!philabs!pwa-b!mmintl!franka
>Ashton-Tate          52 Oakland Ave North         E. Hartford, CT 06108

Frank, and every one else:
While I don't like the word friendly, the end user's time is typically
more valuable than the machine's, even for $20M machines.  The end goal is the
comprehension of the science, and if you can't get your results back in
time, why use a computer?  Consider the days of the 24-hour weather
forecast which took 27 hours to run on a 7600.  Well, I guess it makes
it easy to check your results. ;-)  Make them more friendly.

I don't think we really have good NL systems.

Geez, this new group has really grown.  I wonder why.  I don't want to
unsubscribe.  Lots of the stuff is useful to listen to.

From the Rock of Ages Home for Retired Hackers:

--eugene miya
  NASA Ames Research Center
  eugene@ames-aurora.ARPA
  "You trust the `reply' command with all those different mailers out there?"
  "Send mail, avoid follow-ups.  If enough, I'll summarize."
  {hplabs,hao,ihnp4,decwrl,allegra,tektronix,menlo70}!ames!aurora!eugene

howard@cpocd2.UUCP (Howard A. Landman) (05/08/87)

In article <franka.619> franka@mntgfx.UUCP (Frank A. Adrian) writes:
>Let's see how long I have to wait to send this data at different baud rates:
>baud    time
>----    ----
>9600    1.7Gs    (this is GIGA, folks)
>50K     320Ms    (getting better, but still have to wait a while)
>50M     320Ks    (down to about 9 hours)
>50G     320s.    (right ballpark)

In article <1263@ogcvax.UUCP> pase@ogcvax.UUCP (Douglas M. Pase) writes:
>Don't forget the box of optical disks sent via some neighbor kid on a bicycle
>(or Federal Express, etc.).

Yes, even today, for large amounts of data, the speed and cost of moving a
physical medium can be much better than that obtained by transmitting pure
information.  It's instructive to compute the baud rate of a mundane 1600 BPI
tape flown from NY to SF in 6 hours.  And of course, you can easily increase
the effective rate by an order of magnitude by simply sending 10 tapes at once.
For Doug's case, assume an optical disk means a CD, holding 600 MB.
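
Working that out (assumptions: a 2400 foot reel at 1600 BPI, inter-record
gaps ignored, and the full 6 hour flight counted as transmission time):

    #include <stdio.h>
    int main(void)
    {
        double bytes = 1600.0 * 2400.0 * 12.0;   /* bpi * feet * inches/foot */
        double secs  = 6.0 * 3600.0;             /* the NY-SF flight */
        printf("one tape  : %.0f bytes, ~%.0f bits/sec\n",
               bytes, bytes * 8.0 / secs);
        printf("ten tapes : ~%.0f bits/sec\n", 10.0 * bytes * 8.0 / secs);
        printf("one CD    : ~%.0f bits/sec\n", 600e6 * 8.0 / secs);
        return 0;
    }

So a single reel in a briefcase is roughly a pair of 9600 baud lines running
for the whole flight, ten reels beat a 56K leased line handily, and the kid
with the box of CDs is carrying megabit-class bandwidth.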

For micro users, the cost of downloading a "free" program from, say,
Compuserve may exceed the cost of buying a disk with that program on it!
-- 
	Howard A. Landman
	...!intelca!mipos3!cpocd2!howard
	howard%cpocd2%sc.intel.com@RELAY.CS.NET  (it worked for RMS!)
	"My copyright left, but my copyleft was right!"

adam@misoft.UUCP (05/13/87)

In article <6872@bu-cs.BU.EDU> bzs@bu-cs.BU.EDU (Barry Shein) writes:
>>Undoubtedly you have heard the axiom
>>"Any program will expand to fill all available space."

>Yes, but that is not an axiom, it is a shibboleth. An axiom is
>something you can prove.

No, an axiom is something that you hold to be true, without necessarily
having to prove it. Typically people agree on a set of axioms from which to
work, e.g. opposite angles being equal in Euclidean geometry.
       -Adam.

/* If at first it don't compile, kludge, kludge again.*/

franka@mntgfx.UUCP (05/20/87)

>In article <franka.619> franka@mntgfx.UUCP (Frank A. Adrian) writes:
>>Let's see how long I have to wait to send this data at different baud rates:
>>baud    time
>>----    ----
>>9600    1.7Gs    (this is GIGA, folks)
>>50K     320Ms    (getting better, but still have to wait a while)
>>50M     320Ks    (down to about 9 hours)
>>50G     320s.    (right ballpark)
>
>In article <1263@ogcvax.UUCP> pase@ogcvax.UUCP (Douglas M. Pase) writes:
>>Don't forget the box of optical disks sent via some neighbor kid on a bicycle
>>(or Federal Express, etc.).
>
In article <673@cpocd2.UUCP> howard@cpocd2.UUCP (Howard A. Landman) writes:
>Yes, even today, for large amounts of data, the speed and cost of moving a
>physical medium can be much better than that obtained by transmitting pure
>information.  It's instructive to compute the baud rate of a mundane 1600 BPI
>tape flown from NY to SF in 6 hours.  And of course, you can easily increase
>the effective rate by an order of magnitude by simply sending 10 tapes at once.
>For Doug's case, assume an optical disk means a CD, holding 600 MB.
>
>For micro users, the cost of downloading a "free" program from, say,
>Compuserve may exceed the cost of buying a disk with that program on it!

In all of this discussion I think my original point got lost (or maybe it was
lost between my brain and the keyboard), which was that we have all of these
companies wanking on about how many MIPs their processor has when in most
cases, their I/O systems were the bottleneck in their last generation of
machines.  Actually, I don't see very much need for an even higher MIPs rate
unless we can get the info in and out of memory in a timely fashion (and
you'd have to have a pretty small kid on a bicycle to cart the data out of
memory).  I know that it is tempting to keep adding more MIPs because it is
a simple number for the average dweeb in the street to think he understands
when the ol' marketing department comes around wanking (even though it is
meaningless), but those of us who work with systems already know how little
bang for the buck adding more MIPs seems to be getting these days.  The
question I'm throwing out to all of the great system architects out there
is "What can be done to solve the problem of the apalling lack of I/O
bandwidth found on every micro based uniprocessor system out there today?".

Just wondering...

Frank "Mr. Sensitivity" Adrian
Mentor Graphics, Inc.

These views in no way, shape, or form reflect those of my employer.