[net.college] Overloaded Computing Systems

lincoln@eosp1.UUCP (Dick Lincoln) (12/21/84)

> There is no problem keeping my brain fully utilized.  One instance
> when my brain is clearly underutilized is during those times when
> courses require that I work with the computer, and the computer
> is so overloaded as to waste a great deal of my time.  (Ever typed
> 80 characters, stopped, and watched all 80 of them appear on the
> screen, one at a time?)  To this I object strongly, and it has the
> side effect of turning a potentially useful exercise into something
> to get quickly out of the way.

One simple way to eliminate a significant amount of mainframe cycle
usage of the kind you cite is to eliminate "full duplex" (UNIX "raw"
mode) terminal drive.  Sure, full duplex is convenient, but to cause a
full user task context switch just to echo back a keystroke, as is the
case for Berkeley "vi", is about as wasteful as you can get.  Large
time-sharing systems worked for years with IBM 3270 protocol and the
like, and supported a full range of applications including text editing
and word processing.  You need a little more down-loaded intelligence in
your terminals to cut down on trivial traffic to your central cpu.

While full duplex is no doubt needed for many research projects, mass
student text processing, programming and number crunching can
certainly be conducted without it.
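
For concreteness, this is roughly what "raw mode terminal drive"
amounts to in code -- a minimal sketch using the sgtty interface of
the day (the flag names are real 4.2BSD; the function is mine, and
modern readers should think termios):

	#include <sgtty.h>

	/*
	 * Put a terminal into 4.2BSD "raw" mode.  Once RAW is set,
	 * the kernel delivers every single keystroke to the user
	 * process, which must then be scheduled just to echo it --
	 * the per-character cost complained about above.
	 */
	int
	rawmode(fd)
	int fd;
	{
		struct sgttyb sb;

		if (ioctl(fd, TIOCGETP, &sb) < 0)	/* fetch current modes */
			return (-1);
		sb.sg_flags |= RAW;		/* no erase/kill processing */
		sb.sg_flags &= ~ECHO;		/* the editor echoes instead */
		return (ioctl(fd, TIOCSETP, &sb));
	}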

seifert@mako.UUCP (Snoopy) (12/23/84)

> One simple way to eliminate a significant amount of mainframe cycle
> usage of the kind you cite is to eliminate "full duplex" (UNIX "raw"
> mode) terminal drive.  Sure, full duplex is convenient, but to cause a

Since when does "full duplex" equal "raw mode" ?????????????

These are two separate things, and they are both quite useful.
"Full duplex" is a property of the communications path (both
directions run at once, with the *host* echoing what you type);
"raw mode" is a property of the UNIX tty driver (every character
goes straight to the program, with no erase/kill processing and
no line buffering).  You can have either one without the other.
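
In sgtty terms (a minimal sketch, assuming the 4.2BSD interface;
the flag bits are real, the function is invented):

	#include <sgtty.h>

	/*
	 * RAW and ECHO are independent flag bits; any of the four
	 * combinations is legal, which is exactly why "full duplex"
	 * (host echo) and "raw mode" must not be equated.
	 */
	int
	setmodes(fd, raw, echo)
	int fd, raw, echo;
	{
		struct sgttyb sb;

		if (ioctl(fd, TIOCGETP, &sb) < 0)
			return (-1);
		sb.sg_flags = raw  ? (sb.sg_flags | RAW)  : (sb.sg_flags & ~RAW);
		sb.sg_flags = echo ? (sb.sg_flags | ECHO) : (sb.sg_flags & ~ECHO);
		return (ioctl(fd, TIOCSETP, &sb));
	}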

If you want to make things efficient for the computer,
feel free to do your programming in machine code (no, not
assembly, why waste the computer's time doing your work
for you?) and of course the net is a tremendous waste of cycles.
Let's all go back to batch.  Time-sharing was fun, but it's
just not as efficient for the computer, now is it?  (I won't
even *mention* APL (oops!).)

We have terminals with 32 bit processors and memory measured
in megabytes (not computers, not workstations,  t e r m i n a l s ),
and there still aren't enough cycles.  There will *never* be
enough cycles. (one of those Murphy's laws things)

So keep the user efficiency / machine efficiency ratio in mind,
and use the appropriate tools for the task, but let's not
go off half-cocked eliminating useful tools just because
they aren't as efficient for the computer as some other tool.

		Merry Christmas y'all,
        _____
	|___|		the Bavarian Beagle
       _|___|_			Snoopy
       \_____/		tektronix!tekecs!seifert <- NEW ADDRESS !!!
        \___/

dmt@ahuta.UUCP (d.tutelman) (12/24/84)

>So keep the user efficiency / machine efficiency ratio in mind,
>and use the appropriate tools for the task, but let's not
>go off half-cocked eliminating useful tools just because
>they aren't as efficient for the computer as some other tool.

I echo the sentiment, and would like to suggest that the technology
is reaching a point where we can match tool efficiency to people
efficiency. While raw mode is a problem for expensive machines
which need to be shared to pay for themselves, there obviously
are small, cheap chunks of compute power that you wouldn't mind
burdening with keystroke-catching. (See included portion of
original posting below.)
We ought to be using the terminal (workstation, PC, etc.) to handle
the user interface, and save the shared resource to deal with
transactions (probably more complex than single lines).
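
As a sketch of what that buys the shared machine (illustrative
only): in ordinary cooked mode, the host process sleeps until the
tty driver -- or a smart terminal -- has assembled a complete line,
so the expensive CPU is scheduled once per transaction rather than
once per keystroke:

	#include <stdio.h>

	/*
	 * Transaction-at-a-time service on the shared host: fgets()
	 * does not return until a whole line is ready, so the host
	 * takes one wakeup per line and never sees raw keystrokes.
	 */
	int
	main()
	{
		char line[256];

		while (fgets(line, sizeof line, stdin) != NULL)
			fputs(line, stdout);	/* stand-in for real work */
		return (0);
	}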

>We have terminals with 32 bit processors and memory measured
>in megabytes (not computers, not workstations,  t e r m i n a l s ),
>and there still aren't enought cycles.  There will *never* be
>enough cycles. (one of those Murphy's laws things)

I'm sure you meant Parkinson's Law (originally "Work expands to
fill the time allotted to it," but much more widely applicable).
Yes, cycles, storage, and bus width (see current debate on 64-bit
micros) are all Parkinsonian to SOMEBODY.

				Dave Tutelman

eugene@ames.UUCP (Eugene Miya) (12/28/84)

> >So keep the user efficiency / machine efficiency ratio in mind,
> >and use the appropriate tools for the task, but let's not
> >go off half-cocked eliminating useful tools just because
> >they aren't as efficient for the computer as some other tool.
> 
> I echo the sentiment, and would like to suggest that the technology
> is reaching a point where we can match tool efficiency to people
> efficiency. While raw mode is a problem for expensive machines
> which need to be shared to pay for themselves, there obviously
> are small, cheap chunks of compute power that you wouldn't mind
> burdening with keystroke-catching. (See included portion of
> original posting below.)
> We ought to be using the terminal (workstation, PC, etc.) to handle
> the user interface, and save the shared resource to deal with
> transactions (probably more complex than single lines).
> 				Dave Tutelman

As one of the people who restarted this discussion, I feel I have to
respond to this one.  First, I agree with the human-to-machine-cycles
analogy, and, like everybody else on the net, I will push for
distributing function (e.g., into a smarter terminal).

I began with a basic thesis that today's supercomputers can be tomorrow's
micros.  I justified that historically with the first supercomputers,
the ENIACs and so on: we now have many times their power sitting on
our desks [like the Mac I type from].

Interaction cost:
I sit and use a Cray interactively on occasion.  It's really nice.
I ponder what it would be like to have one sitting on my desk in a
small box.  But keep one thing in mind: if you think it's nice,
100 other users will think it's nice too, and you may have defeated
the purpose of the Cray to begin with [my management speaking].
Some people talk about the concept of 'process servers,' analogous to
file servers, except that these are fast machines serving computation
on a net.  The problem is that we don't have very good models of
distributed or parallel computing.  The same goes for smart terminals
[how do you distribute function?].
Will programming my Cray from a distributed system start with my
opening a socket(2)?  Don't sacrifice too many Cray cycles for
character interrupts.
Problem: how do you program these distributed systems of the future?
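
For what it's worth, the first step might look something like this
under 4.2BSD (a sketch only: the function name, the server host,
and the port are all invented for illustration):

	#include <sys/types.h>
	#include <sys/socket.h>
	#include <netinet/in.h>
	#include <netdb.h>
	#include <strings.h>

	/*
	 * Connect to a hypothetical process server.  Everything
	 * after this call -- how work is described, shipped, and
	 * scheduled -- is exactly the open problem posed above.
	 */
	int
	server_connect(host, port)
	char *host;
	int port;
	{
		struct sockaddr_in sin;
		struct hostent *hp;
		int s;

		if ((hp = gethostbyname(host)) == NULL)
			return (-1);
		bzero((char *)&sin, sizeof sin);
		sin.sin_family = AF_INET;
		bcopy(hp->h_addr, (char *)&sin.sin_addr, hp->h_length);
		sin.sin_port = htons((u_short)port);
		if ((s = socket(AF_INET, SOCK_STREAM, 0)) < 0)
			return (-1);
		if (connect(s, (struct sockaddr *)&sin, sizeof sin) < 0)
			return (-1);
		return (s);	/* ship work down this, not keystrokes */
	}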

Computer Architectures:
Computers will always have features which are not utilized near 100% of
the time.  This does not make them inefficient.  One significant problem
with existing computer architectures is the architecture itself: the
machine must be measured with its own resources.  Suppose you wish to
time(1) [in the sense of the Unix command] a process.  You end up using
the machine's own cycles to do the measuring [a computing analogue of
the Hawthorne effect, usually called the probe effect].  I have been
doing research on multiprocessors and see that this is a common problem
there.  The CMU builders of C.mmp and Cm* discovered this, and they
plan to rectify it in their next multiprocessor.
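
The effect is easy to demonstrate: any software instrument spends
the very cycles it is trying to count.  A toy illustration using
times(2) (the workload is invented; tick rates vary by machine):

	#include <sys/types.h>
	#include <sys/times.h>

	/*
	 * The two calls to times() consume CPU themselves, so the
	 * probe perturbs the quantity it measures.  Returns the
	 * user-mode clock ticks charged to the loop (plus probe).
	 */
	long
	measure()
	{
		struct tms t0, t1;
		long i, sum = 0;

		times(&t0);		/* the probe costs cycles... */
		for (i = 0; i < 100000; i++)
			sum += i;	/* ...around the "real" work */
		times(&t1);		/* ...and so does this */
		return (t1.tms_utime - t0.tms_utime);
	}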

In another area related to architecture:
I would almost bet [not quite] that specialized architectural features
such as vector registers will appear on micros [if we still call them
that] in 20 years as 'standard equipment'.

In yet another area related to architecture:
On another level, consider what percentage of an instruction set is
used frequently.  Will we see machines with instruction sets in the
1000s?  I think not.  We are beginning to reach conceptual limits, like
trying to shrink our keyboards on the same scale we are shrinking
chips :-).  RISCs are really getting popular.  Maybe systems have to
get big first before we can refine them and make them small?

If we are to have Crays-on-a-desk, we are going to have to clean up
'sloppy' portions of micros to make them faster machines.  Micros in the past
and to this day get away with a lot because of their 'size.'
Steve Lundstrom, now at Stanford, commented once that supercomputing is
the only area left in computer science where we still count cycles.
The important thing to keep in mind: balance the human and machine
cycles. [is this an extension of "balance mind and body?" :-)]

Lots of problems for Ph.D. theses.....


--eugene miya
  NASA Ames Research Center
  {hplabs,ihnp4,dual,hao,vortex}!ames!aurora!eugene
  emiya@ames-vmsb.ARPA

henry@utzoo.UUCP (Henry Spencer) (12/30/84)

> One simple way to eliminate a significant amount of mainframe cycle
> usage of the kind you cite is to eliminate "full duplex" (UNIX "raw"
> mode) terminal drive.  ...
> ...time-sharing systems worked for years with IBM 3270 protocol and the
> like, and supported a full range of applications including text editing
> and word processing.  You need a little more down-loaded intelligence in
> your terminals to cut down on trivial traffic to your central cpu.

The problem is that "intelligence" is a word that one hesitates to apply
to the 3270 and its ilk.  Down-loaded intelligence is certainly the way
to go, but only if "down-loaded" really means that I can down-load code
into the terminal.  This is not (necessarily) something one wants to do
often, but the point is that the manufacturers cannot be expected to
anticipate what constitutes "intelligent" behavior for an application.
My recollection is that 3270 editor interaction tended to be quite stilted
as a result of the peculiar prejudices of the terminals.
-- 
				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,linus,decvax}!utzoo!henry

cdshaw@watrose.UUCP (Chris Shaw) (01/08/85)

The last time I used an IBM 4341 (approx CPU power of a VAX 780) with
80 users (active) to do editing on a 3278, there was a minimal degree
of I/O wait involved. XEDIT was the editor, and I had configured it as
I preferred it, so as to remove some silly quirks.

The last time I used a VAX 780 with 80 users on it, I woke up regretting
the pizza I ate earlier that night. Seriously... the turnaround was sooooo
bad I went home.

The point is this: mainframe channel-style architecture exacts a small
convenience price for a huge performance improvement.
Real life has confirmed this many times over for me, and that's why I'd
rather use VM/CMS for editing than vi and 4.2bsd, especially if system load
is a problem.

Whether the performance improvement is due to full duplex or not, I cannot
answer, but I can type and read input simultaneously on a 3278, so it looks
all the same to me !!!

				This point of information brought to you by..
					CD Shaw

dmt@ahuta.UUCP (d.tutelman) (01/08/85)

> The last time I used an IBM 4341 (approx CPU power of a VAX 780) with
> 80 users (active) to do editing on a 3278, there was a minimal degree
> of I/O wait involved......
> The point is this: mainframe channel-style architecture exacts a small
> convenience price for a huge performance improvement.
> Real life has confirmed this many times over for me, and that's why I'd
> rather use VM/CMS for editing than vi and 4.2bsd, especially if system load
> is a problem.
> ......
> Whether the performance improvement is due to full duplex or not, I cannot
> answer, but I can type and read input simultaneously on a 3278, so it looks
> all the same to me !!!

Once again ..... the performance improvement has little to do with
EITHER full-duplex or "mainframe channel-style architecture".  It lies
in the combination of terminal and application:

-	The 3278 terminal does not send every character as it is keyed in.
	In most applications, it doesn't even send lines, but full
	screens.  I'm not sure exactly what you're doing, but I bet
	you're using the buffering and screen editing built into
	the 3278 in some useful way.  (A toy model of this style of
	terminal follows at the end of this post.)

-	"vi" uses raw input, which interrupts the application on every
	keystroke.... VERY consumptive of real time.  It is NOT the
	full-duplex transmission (used by UNIX in line-input mode
	as well) that's causing the problem.  If your mainframe
	application needed to capture arbitrary keystrokes, all the channel
	architecture in the world wouldn't keep it from being a
	drain on real time.
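
To make the first point concrete, here is a toy model of a
block-mode terminal.  The real 3270 data stream is far more
elaborate (fields, attributes, addressing), so treat this as a
cartoon, with invented names throughout:

	#include <stdio.h>

	#define SCREEN	1920		/* a 24 x 80 display buffer */

	static char buf[SCREEN];
	static int  n;

	/*
	 * Keystrokes are edited entirely in the terminal's local
	 * buffer; nothing crosses the line to the host until ENTER
	 * (modeled here as '\n') is struck -- one transmission per
	 * screen instead of one interrupt per character.
	 */
	void
	keystroke(c)
	int c;
	{
		if (c == '\b') {	/* local erase: host never sees it */
			if (n > 0)
				n--;
		} else if (c == '\n') {
			fwrite(buf, 1, n, stdout);	/* the one send to the host */
			putchar('\n');
			n = 0;
		} else if (n < SCREEN)
			buf[n++] = c;
	}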

					Dave Tutelman