[comp.graphics] ShowScan quality image bit rate

jbn@glacier.STANFORD.EDU (John B. Nagle) (02/01/89)

      My number for the data rate needed was off by a large factor.

	Consider 6000 x 8000 pixels x 60frames/sec x 24 bits/pixel;
the data rate is about 70 gigabits/sec, not a number in the terabit
range as I previously posted.  

					John Nagle

dave@onfcanim.UUCP (Dave Martindale) (02/02/89)

In article <18071@glacier.STANFORD.EDU> jbn@glacier.UUCP (John B. Nagle) writes:
>
>	Consider 6000 x 8000 pixels x 60frames/sec x 24 bits/pixel;
>the data rate is about 70 gigabits/sec, not a number in the terabit
>range as I previously posted.  

Showscan's aspect ratio is about 2.3, not 1.33.  If we keep the horizontal
resolution of 8000, the vertical resolution would be 3500, not 6000.  So
it's only about 40 Gbit/sec.

Also 8000 pixels is 4000 line pairs over 2.072 inches, which is 76 lp/mm.
I would be surprised if a Showscan presentation showed this high a
resolution on-screen.  You might be able to look visually "as good as
Showscan" with as little as 40 lp/mm, which reduces the bit rate to only
11 Gbit/sec.
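
Here's a quick sanity check of those numbers as a little Python sketch
(the frame width, aspect ratio and lp/mm figures are just the ones
quoted in this thread, not official Showscan specifications):

def data_rate(h_pixels, v_pixels, fps, bits_per_pixel):
    """Uncompressed data rate in gigabits per second."""
    return h_pixels * v_pixels * fps * bits_per_pixel / 1e9

# Nagle's numbers: 8000 x 6000, 60 frames/sec, 24 bits/pixel
print(data_rate(8000, 6000, 60, 24))    # ~69 Gbit/sec, i.e. "about 70"

# Keep 8000 pixels across, but use the 2.3:1 aspect ratio (3500 lines)
print(data_rate(8000, 3500, 60, 24))    # ~40 Gbit/sec

# Assume only 40 lp/mm is visible on screen.  The frame is 2.072 inches
# (52.6 mm) wide, and 2 pixels per line pair gives ~4200 pixels across.
h = int(40 * 25.4 * 2.072 * 2)          # ~4210 pixels
v = int(h / 2.3)                        # ~1830 lines
print(data_rate(h, v, 60, 24))          # ~11 Gbit/sec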

Magnetic storage media have a long way to go before they can touch the
information rate of a film projector....

djones@polya.Stanford.EDU (David Jones) (02/05/89)

In article <18071@glacier.STANFORD.EDU> jbn@glacier.UUCP (John B. Nagle) writes:
>
>	Consider 6000 x 8000 pixels x 60frames/sec x 24 bits/pixel;
>the data rate is about 70 gigabits/sec, ...
>
>					John Nagle

Someone else commented that at some point you'd exceed the limits
of human perception.  Sorry if this is redundant, but here are some
numbers worth thinking about.

2 eyes x 1,000,000 optic nerve fibres x 200 spikes/sec peak firing rate
			= 0.4 gigabits/sec

These figures are approximate, but it's the right order of magnitude.
Of course, the optic nerves are not carrying "pixels",
but more sophisticated "bits" of visual information.  (This estimate
ignores noise, so the true information rate is lower.)


Since a great deal is in fact known about how visual information is
encoded (at early stages at least), images stored as 24 bits/pixel
can be compressed by quite a bit with no perceptible loss
in image quality (provided of course that you know precisely how
the image will be viewed).  It looks like there's room to better Mr Nagle's
pixel representation by 3 orders of magnitude.
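
For comparison, a minimal Python sketch of that ratio (treating each
spike as one bit, which is of course a very loose upper bound):

# Optic nerve estimate from above: 2 eyes x 1e6 fibres x 200 spikes/sec
optic_nerve_gbit = 2 * 1_000_000 * 200 / 1e9
print(optic_nerve_gbit)                     # 0.4 Gbit/sec

# Nagle's uncompressed pixel rate: 8000 x 6000 x 60 fps x 24 bits
pixel_rate_gbit = 8000 * 6000 * 60 * 24 / 1e9
print(pixel_rate_gbit)                      # ~69 Gbit/sec

# Headroom if the display only had to match the optic nerve
print(pixel_rate_gbit / optic_nerve_gbit)   # ~170x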

David G. Jones
djones@polya.stanford.edu

mcdonald@uxe.cso.uiuc.edu (02/06/89)

>Someone else commented that at some point you'd exceed the limits
>of human perception.  Sorry if this is redundant, but here are some
>numbers worth thinking about.

>2 eyes x 1,000,000 optic nerve fibres x 200 spikes/sec peak firing rate
>			= 0.4 gigabits/sec

>These figures are approximate, but it's the right order of magnitude.

That may be true, but it is irrelevant for general-purpose displays,
films, photography, etc.  The point is that the eye has much better
acuity at the center of the field than at the edges.  But for a
general-purpose display, you need to have the same resolution all
across it, since the viewer can move his eyes to look at various
places.  Hence you need a lot more bits than the eye uses.  How much
more is the interesting question.  Let's see - the eye has a resolution
of about 1 minute of arc.  Let's say you want a display which is
one meter square, and I would like to view it at a distance of 0.7 meter.
0.7 meter / (57.3 degrees per radian * 60 minutes per degree) = 0.2
millimeter per resolution element.  That is 25,000,000 resolution
elements.  In the simplest case (each element can have any color, with
256 levels for each of three primaries) you need 24 bits per element.
That is 600 megabits per static picture.  The question of the number of
independent frames per second is tougher, but let's say 15 per second
minimum - that gives 9 gigabits per second.  Make it ten gigabits for a
round number.  If done right that wouldn't flicker, but motion would
appear jerky at that rate unless "smearing" were done on the individual
frames.  That smearing wouldn't affect the bit rate, which would remain
at 10 Gbit.  Then one might try to reduce the color resolution in small
areas, perhaps getting down to 3 gigabits per second.  That's roughly 10
times more than the 0.4 Gbit per second quoted above, which sounds
reasonable, given the fraction of the visual field such a display would
cover compared to the region of good vision of the eye.
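
In Python, the same arithmetic (nothing here but the numbers already
given above):

import math

distance_m = 0.7                           # viewing distance
arcmin = 1.0 / (180 / math.pi * 60)        # one minute of arc, in radians

element_m = distance_m * arcmin
print(element_m * 1000)                    # ~0.2 mm per element

elements = (1.0 / element_m) ** 2          # 1 meter square display
print(elements / 1e6)                      # ~24 million elements (call it 25)

bits_per_frame = elements * 24
print(bits_per_frame / 1e6)                # ~580 Mbit per frame (round to 600)

print(bits_per_frame * 15 / 1e9)           # ~8.7 Gbit/sec at 15 frames/sec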

Now, let's have a 4 pi steradian display - that's roughly 10 times more
than the above!

bennett@mlogic.UUCP (Bennett Leeds) (02/07/89)

In article <6652@polya.Stanford.EDU>, djones@polya.Stanford.EDU (David Jones) writes:
> In article <18071@glacier.STANFORD.EDU> jbn@glacier.UUCP (John B. Nagle) writes:
> >	Consider 6000 x 8000 pixels x 60frames/sec x 24 bits/pixel;
> >the data rate is about 70 gigabits/sec, ...
> >					John Nagle
> Someone else commented that at some point you'd exceed the limits
> of human perception. <stuff deleted about human eye bit-rate> 
> 
> Since a great deal is in fact known about how visual information is
> encoded (at early stages at least), images stored as 24 bits/pixel
> can be compressed by quite a bit with no perceptible loss
> in image quality (provided of course that you know precisely how
> the image will be viewed).  It looks like there's room to better Mr Nagle's
> pixel representation by 3 orders of magnitude.
> 
> David G. Jones
> djones@polya.stanford.edu


	Resolution should encompass more than just the number of pixels per
screen dimension - it should include the range of colors (shades) displayed
at each pixel.  Current TV technology has at least as long a way to go in
this regard (if not longer) as it does in spatial resolution.

	For instance, turn your best computer monitor off.  What color is the
screen?  It's a gray, hopefully a dark gray.  This is the blackest black your
monitor can produce.  Now develop an unexposed roll of film, and play it
through a projector in a conventional theatre.  I think you'll find this
black quite a bit darker.  You can apply similar tests at the top end,
comparing the whites.  Again, the film white will be much brighter than the
computer monitor's whitest white.

	This stuff reminds me of the debates that appear in rec.audio about
sample rates and the number of bits per sample.  The point is that the
"dynamic range" of film is much greater than TV.  And of course, real life's
dynamic range is greater than film's.  Our eyes can't even handle it all at
once - they have to adjust (ever notice how much brighter car headlights are
at night than they are during the day?).


	Back to the original discussion, if 24 bits per pixel is too much
information for the limited dynamic range of TV, it may not be enough for
film because there are values that film can reproduce that TV technology cannot.
Does anybody have an idea of how many bits are actually needed?  Of course, this
will vary by film emulsion, projector bulb and type, and room environment,
but is there a general rule that anybody has developed by experience?  For
TV work, 15 bits is usually (but not always) enough if all you are doing
is displaying the raw image, but you need more than that if you want to
alter the image via gamma correction, etc.
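
Here's a toy Python example of my own (the bit depths and the gamma
value are just illustrative; 5 bits/channel is roughly the 15-bit RGB
case above) showing why you want extra bits before operations like
gamma correction:

GAMMA = 1.0 / 2.2    # an assumed display gamma correction

def gamma_correct(level, max_level):
    # Gamma-correct one channel value and quantize back to the same depth.
    return round(((level / max_level) ** GAMMA) * max_level)

for bits in (5, 8, 12):
    max_level = 2 ** bits - 1
    out = {gamma_correct(v, max_level) for v in range(max_level + 1)}
    print(f"{bits} bits/channel: {max_level + 1} levels in, {len(out)} distinct out")

# Some input levels collide at the bright end and gaps open up at the
# dark end.  If you carry only as many bits as the display needs, those
# gaps show up as visible banding; extra bits of headroom hide them.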

	Finally, with a screen as large as ShowScan's (and IMAX's as well),
the viewer does not take in the whole screen at once - his eye roams over it,
concentrating on certain portions of it.  It is these *portions* of the
screen that need to be displayed at a higher resolution.   If it is a normal,
non-computer generated scene, areas of the image may be out of focus because
they are too close or too far from the camera/lens.  These areas can get by
with a lower image resolution.  Of course, we then need a system that can
display portions of an image at differing resolutions.

	Perhaps we could shoot movies using a system like the one in Nikon's
N8008 35mm still camera, which uses a computer-controlled, area-weighted
system to figure out where the main subject is from distance and lighting
information and then sets exposure and focus automatically (it's an
auto-focus camera).  We could tie that information into our computer
system to decide which portions of the image deserve the most resolution,
since we know where the camera focused.

	Bennett Leeds

-- 
         Bennett Leeds                            |
         Media Logic, Inc., Santa Monica, Ca      | These opinions are not
         ARPA: celia!mlogic!bennett@tis.llnl.gov  | my employer's.
         UUCP: ...sdcrdcf!mlogic!bennett          |

myers@hpfcdj.HP.COM (Bob Myers) (02/08/89)

/ hpfcdj:comp.graphics / mcdonald@uxe.cso.uiuc.edu / 11:43 am  Feb  5, 1989 /


>The question of the number of independent frames per second is tougher,
>but let's say 15 per second minimum - that gives 9 gigabits per second.
>Make it ten gigabits for a round number. If done right that wouldn't
>flicker but motion would appear jerky at that rate, unless "smearing" were

If it's a CRT display we're talking about - and some form of CRT seems the
only viable option for high-resolution color in the near future - then the
"frames per second" number becomes much more important.  60 Hz would be the
*minimum* acceptable refresh rate, and for a screen of the proposed size, at
any reasonable brightness, the refresh should be even higher - say, 72 Hz.

But let's stick with 60 Hz, and assume a 300 dpi resolution with an image
1 meter on a side.  This results in approximately an 11000 x 11000 raster;
assuming typical horizontal and vertical blanking numbers (20% and 5% of
the total H and V times, respectively), we get a dot clock of over 9.5 GHz,
and a horizontal sweep rate of 694.7 kHz!!!  (Cold-cathode CRTs offer some
hope of doing away with the sweep, but you'll still need to read the video
out DAMN quickly!)  Note that this says nothing about how fast the digital
store (frame buffer) needs to be updated; it says simply that, assuming
conventional display technology, the video bandwidth is likely a limiting
factor before such high resolutions are reached.
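
For the curious, the arithmetic behind those numbers as a Python sketch
(the 20% and 5% blanking fractions are the assumptions stated above):

active_h = 11_000                    # visible pixels per line
active_v = 11_000                    # visible lines per frame
refresh_hz = 60

total_h = active_h / (1 - 0.20)      # pixels per line, including H blanking
total_v = active_v / (1 - 0.05)      # lines per frame, including V blanking

print(total_v * refresh_hz / 1e3)                # ~695 kHz horizontal sweep
print(total_h * total_v * refresh_hz / 1e9)      # ~9.5 GHz dot clock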



Bob Myers  KC0EW   HP Graphics Tech. Div.|  Opinions expressed here are not
                   Ft. Collins, Colorado |  those of my employer or any other
{the known universe}!hplabs!hpfcla!myers |  sentient life-form on this planet.

efo@pixar.uucp (efo) (02/09/89)

In article <146@mlogic.UUCP> bennett@mlogic.UUCP (Bennett Leeds) writes:
>	Back to the original discussion, if 24 bits per pixel is too much
>information for the limited dynamic range of TV, it may not be enough for
>film because there are values that film can reproduce that TV technology cannot.
>Does anybody have an idea of what the values need to be? 

As a point of reference, the Pixar Image Computer was designed with
the parameters of motion picture film as one of the reference points.
It has twelve bits per channel (4 channels), or 48 bits per pixel.
This appears to be sufficient for many film applications.

There is quite a lot that can be represented on film that cannot
be adequately captured by video. 
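
To make the bit-depth question concrete, here's a toy Python
illustration of my own (assuming a 7-stop, 128:1 scene range and a
purely linear encoding; this is not Pixar's actual design rationale):

def shadow_step(bits, contrast=128.0):
    # Luminance step between adjacent codes, as a fraction of the darkest
    # level you want to represent, when a contrast:1 range is spread
    # linearly over 2**bits codes.
    max_code = 2 ** bits - 1
    return (1.0 / max_code) / (1.0 / contrast)

print(shadow_step(8))     # ~0.50: each code step is half the black level
print(shadow_step(12))    # ~0.03: steps are a few percent of the black level

# With 8 linear bits the darkest tones are quantized in huge jumps; with
# 12 bits per channel the steps get down toward the roughly 1% difference
# the eye can just detect.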

jbm@eos.UUCP (Jeffrey Mulligan) (02/10/89)

From article <146@mlogic.UUCP>, by bennett@mlogic.UUCP (Bennett Leeds):
 
> 	Resolution should encompass more than just the number of pixels per
> screen dimension - it should include the range of colors (shades) displayed
> at each pixel.  Current TV technology has at least as long a way to go in
> this regard (if not longer) than it does in dimensional resolution.

Well, sure.  But the original posting included the bits used to encode
gray level (sorry for quoting out of context).
 
> 	For instance, turn your best computer monitor off.  What color is the
> screen?  Its a gray, hopefully a dark gray.  This is the blackest black your
> monitor can produce.  Now develop an unexposed roll of film, and play it
> through a projector in a conventional theatre.  I think you'll find this
                           ^^^^^^^^^^^^ ^^^^^^^
> black quite a bit darker.  You can apply similar tests at the top end,
> comparing the whites.  Again, the film white will be much brighter than the
> computer monitor's whitest white.

This is not a fair comparison.  In a "conventional theatre" presumably
the house lights are down.  If you power down even your worst computer
monitor and then turn off all the room lights it will be quite dark.
(Well, there might be a little long-term phosphor persistence,
but you'd have to dark-adapt to see it).  Similarly, with no ambient light
your "white" will look white no matter what the absolute luminance
because your eye will adapt.  This is why television sets have a brightness
knob instead of just fixing it at the maximum:  if you like to watch TV
in a dark room, you will want to turn down the brightness.

A better comparison is the maximum display contrast.  With a reasonable
black level, this will be 100% regardless of the value of the white.

> The point is that the
> "dynamic range" of film is much greater than TV.

Is this true?  I'm not necessarily doubting it, it's just that the dynamic
range of [color] film is not all that large to begin with; I would guess
something slightly more than 1 log unit.  What are the units that are
used to describe this?  Gamma?  Anybody have any hard numbers?  Presumably
this is part of the NTSC standard.

> 	Back to the original discussion,

[impeccable comments about variable-resolution displays deleted]


-- 

	Jeff Mulligan (jbm@aurora.arc.nasa.gov)
	NASA/Ames Research Ctr., Mail Stop 239-3, Moffet Field CA, 94035
	(415) 694-6290

jwi@lzfme.att.com (J.WINER) (02/17/89)

In article <2569@eos.UUCP>, jbm@eos.UUCP (Jeffrey Mulligan) writes:
> From article <146@mlogic.UUCP>, by bennett@mlogic.UUCP (Bennett Leeds):
>  
> 
> > The point is that the
> > "dynamic range" of film is much greater than TV.
> 
> Is this true?  I'm not necessarily doubting it, it's just that the dynamic
> range of [color] film is not all that large to begin with; I would guess
> something slightly more than 1 log unit.  What are the units that are
> used to describe this?  Gamma?  Anybody have any hard numbers?  Presumably
> this is part of the NTSC standard.
> 

TV has a range of about 5 f-stops while film has up to 7 f-stops.
This translates to more detail in the shadows and in the light tones,
i.e. lower contrast.  If you assume that TV is capable of 256
gray levels between black and white, then 2 extra f-stops would mean
that film is capable of 1024 gray levels between the same black and
white.  Since a "normal" scene has about 7 f-stops of range, film
will capture it, while on TV anything above a certain level goes all
white and anything below a certain level goes solid black.  This is
most obvious when a film is shown on TV -- a dark scene that would
be visible in the theater goes almost black on the TV.

I'm not sure that the technical definition of an f-stop would be
useful here, except to note that a bigger lens opening has a smaller
f-number, and that opening up (closing down) by one stop doubles
(halves) the amount of light.
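
The arithmetic, as a small Python sketch (the 5- and 7-stop figures are
just the ones above):

# Each f-stop doubles the light, so N stops cover a 2**N linear range.
# Two extra stops means 4x the range, hence 4x the levels at the same
# step size.
tv_stops, film_stops = 5, 7
tv_levels = 256

step = (2 ** tv_stops) / tv_levels      # linear light covered per TV level
print((2 ** film_stops) / step)         # 1024 levels to cover film's range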

Jim Winer MT 4G-429 ..!lzfme!jwi