[comp.sys.amiga.tech] future frame rate issues

bryan@mothra.cs.utexas.edu (Bryan Bayerdorffer) (04/17/88)

In article <8775@agate.BERKELEY.EDU> doug@eris.UUCP (Doug Merritt) writes:
=-
=-=-In article <11254@ut-sally.UUCP> bryan@mothra.cs.utexas.edu writes:
=->  Now, is there still a problem when, as you said,
=->you think of the display device as BEING your one and only memory?
=-
=-Yes, that's the one that I can't see the architectural subsystem
=-for. How could just this case (neglecting pixels vs. nonpixels) work
=-the way you specified? The reason I can't see it is that it seems
=-like all that happened is that the functionality that used to be
=-in the cpu box (like writing to RAM, then DMA'ing it to a memory
=-mapped display) simply got moved out into the display itself. Yet
=-I get the impression that that's not what you had in mind. But I
=-don't see any significantly different way of doing it inside the
=-display than without.
=-
	Ok, let me see if I can do a better job of explaining this.  Our goal is
to get rid of frames where they are not needed.  The simplest example of this is
just displaying a static picture.  Video displays FORCE the implementation of
frames because they have to be refreshed k times per second.  This means that
something has to DMA a chunk of memory and convert it to video signals k times
per second, regardless of whether anything in memory has changed.
If you DO want to make changes to the memory, those have to be synced to the
frame rate of the video, EVEN if you are capable of updating the memory so fast
that it would appear instantaneous to the eye, if the eye could see directly
into memory.
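	Here is a minimal C sketch of that bottleneck.  (The names--framebuf,
wait_for_vblank, compute_next_image, K_HZ--are invented, and the vertical blank
is simulated with a sleep; this illustrates the timing, it is not real
display-driver code.)  However fast compute_next_image() runs, nothing it does
becomes visible more than K_HZ times per second.

#include <stdint.h>
#include <unistd.h>

#define WIDTH  320
#define HEIGHT 200
#define K_HZ    60                    /* fixed refresh rate of the display  */

static uint8_t framebuf[WIDTH * HEIGHT];  /* the memory that gets DMA'd out */

/* Stand-in for waiting on the vertical blank: nothing the program does
   can reach the screen until the next of the K_HZ refreshes comes around. */
static void wait_for_vblank(void)
{
    usleep(1000000 / K_HZ);
}

static void compute_next_image(uint8_t *buf)
{
    buf[0] ^= 1;                      /* imagine an arbitrarily fast update */
}

int main(void)
{
    for (;;) {
        compute_next_image(framebuf); /* may finish in microseconds...      */
        wait_for_vblank();            /* ...but shows at most K_HZ times/sec */
    }
}
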
	What we want to do, then, is to get rid of this fixed frame rate imposed
by the display device, and let the frame rate instead be determined by the
application (including zero for static images and/or faster-than-eye updates),
and limited by the rate at which memory can be updated.  To answer your 
question, the functionality in the cpu has not been moved; instead, part of it
has been REmoved--namely the DMA step.  You wanted an architectural subsystem--
here it is:

	Imagine memory as an M x N array of bits, with each bit POTENTIALLY
	corresponding to a pixel.  (Let's talk monochrome to keep it simple.)

	Now, visualize a wire running from EACH memory bit into a switching
	network.  At the other end of the network is an m x n set of wires
	(m << M, n << N), and each of these wires runs to a single 'dot' on
	the display.

	The control input to the switching network is just the pair of base
	registers, so that the network routes an m x n section of the M x N
	memory to the display. 
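
	To make the routing concrete, here is a tiny C model of that subsystem.
(The sizes and the names--dot, pan_to, base_row, base_col--are invented for
illustration; this only mimics the wiring in software, it is not a hardware
design.)  The 'network' reduces to the two base registers: reading a display
dot just follows its wire back to whatever memory bit the registers currently
select, so writing memory inside the window IS updating the screen, and
panning is a single register write rather than a new frame.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define M 2048                        /* rows of memory bits                 */
#define N 2048                        /* columns of memory bits              */
#define m  200                        /* rows of display dots    (m << M)    */
#define n  320                        /* columns of display dots (n << N)    */

static uint8_t memory[M][N];          /* one byte per bit, to keep it simple */

/* The control input to the network: the pair of base registers that
   select which m x n window of memory is wired to the display dots.  */
static unsigned base_row, base_col;

/* Follow the wire from display dot (y, x) back through the network to
   the memory bit it is currently routed to.  No copy, no DMA, no frame. */
static uint8_t dot(unsigned y, unsigned x)
{
    assert(y < m && x < n);           /* only m x n dots exist               */
    return memory[base_row + y][base_col + x];
}

/* Panning the view is nothing more than rewriting the base registers. */
static void pan_to(unsigned row, unsigned col)
{
    base_row = row;                   /* caller keeps row <= M - m           */
    base_col = col;                   /* caller keeps col <= N - n           */
}

int main(void)
{
    pan_to(1000, 1500);
    memory[1000][1500] = 1;                 /* an ordinary memory write...   */
    printf("dot(0,0) = %u\n", dot(0, 0));   /* ...is already "on screen"     */
    return 0;
}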

	This switching network wouldn't quite be a crossbar, but almost.  With
M x N inputs feeding only m x n outputs, it would still need an enormous number
of switch points, which means it would probably always be impractical to build
with actual wires and switches.  But if you hypothesize some more advanced
technology, it
becomes quite feasible.  For instance:  Make your memory into a spherical shell,
where each bit is a glowing/non-glowing dot on the inside of the shell.  Put
several groups of solid-state lenses on the surface of a smaller sphere inside
the shell, so that a 'bank' of lenses can be focused on a desired section of
the shell.  Run a bunch of optical fibers from the central sphere out to the
screen.  The whole thing somewhat resembles a plasma ball  (light sculpture).
	
	Disclaimer:  This isn't my harebrained idea; I just like it.  It's an
idea for optical switching networks in general.  Don't ask me for references--
I don't know of any.  It's just sort of folklore around here.
 ______________________________________________________________________________
/_____/_____/_____/_____/_____/_____/_____/_____/_____/_____/_____/_____/_____/
|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|
_No dark sarcasm in the classroom|_____|_____|_____|_____|_____|_____|_____|___
|____Teachers leave the kids alone__|_____|_____|bryan@mothra.cs.utexas.edu___|
___|_____|_____|_____|___{ihnp4,seismo,...}!ut-sally!mothra.cs.utexas.edu!bryan
|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|

doug@eris (Doug Merritt) (04/18/88)

In article <11277@ut-sally.UUCP> bryan@mothra.cs.utexas.edu writes:
:  Video displays FORCE the implementation
:of frames because they have to be refreshed k times per second. [ ... ]
:	What we want to do, then, is to get rid of this fixed frame rate imposed
:by the display device, and let the frame rate instead be determined by the
:application (including zero for static images and/or faster-than-eye updates),

Ah. This in itself clarifies tremendously.

:  For instance:  Make your memory into a spherical shell,
:where each bit is a glowing/non-glowing dot on the inside of the shell.  Put
:several groups of solid-state lenses on the surface of a smaller sphere inside
:the shell, so that a 'bank' of lenses can be focused on a desired section of
:the shell.  Run a bunch of optical fibers from the central sphere out to the
:screen.  The whole thing somewhat resembles a plasma ball  (light sculpture).

Ok, along with your definition of "framelessness" above, this makes perfect
sense. There are quite a few research groups working on optical architectures;
give it a few years and we'll see *something* along these lines. Bell Labs
is one, but last time I asked, they couldn't talk much about their
work. Optical crossbars seemed to be part of it, though.

	Doug Merritt		doug@mica.berkeley.edu (ucbvax!mica!doug)
			or	ucbvax!unisoft!certes!doug