[comp.graphics] Resolution, etc.

jgk@osc.COM (Joe Keane) (11/21/90)

In <2928@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr)
writes:
>  I'm not convinced that you need 24 bits of color for the memory,
>either. Systems like the VGA which have a large palette and a limited
>number of selections work very well. If you look at the output of a 24
>bit color scanner scanning quality photographs, you rarely find an image
>which doesn't map into 256 colors nicely. Very rarely.

In article <1990Nov16.190248.20437@ux1.cso.uiuc.edu> msp33327@uxa.cso.uiuc.edu
(Michael S. Pereckas) writes:
>Note that this gets to be a mess in a hurry if you want to put two
>different images on the screen at once (perhaps in separate windows).
>Palettes can be a problem in multitasking, windowing environments.

I agree, palettes are a big pain.  A true-color display is a lot easier to
deal with, since there's no global interaction between pixels.  Or you can
agree once and for all on a single palette, which all windows on the screen
have to use.

Let's assume we have graphics software that understands the concept of
dithering, not like some other software we could mention.  So when you say to
fill a rectangle with a given color, it actually fills it with some neat
pattern alternating between a few colors, such that the overall color is very
close to what you asked for.  Assume that this software is actually competent
at what it does, so that there are no nasty artifacts and the boundaries
between colors work out right.
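
For concreteness, here's a minimal C sketch of ordered dithering with a 4x4
Bayer matrix.  The 4-bit display depth and the function name are assumptions
for illustration, not any particular system's interface:

    static const int bayer4[4][4] = {
        {  0,  8,  2, 10 },
        { 12,  4, 14,  6 },
        {  3, 11,  1,  9 },
        { 15,  7, 13,  5 }
    };

    /* Map an 8-bit target level at pixel (x, y) to a 4-bit output
     * level, picking the upper of the two bracketing levels just
     * often enough that the spatial average comes out to target/16. */
    int dither_pixel(int x, int y, int target)
    {
        int base = target >> 4;           /* lower 4-bit level  */
        int frac = target & 0xf;          /* remainder, 0..15   */
        int out  = base + (frac > bayer4[y & 3][x & 3]);
        return out > 15 ? 15 : out;
    }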

Now also assume we have a display with good resolution, so the individual
pixels are quite small.  So then, how many colors do we need?  In other words,
what is the largest color difference between neighboring pixels such that it
won't be obvious that the pixels are different colors?  It seems to me that
it's somewhere around 4 bits per component, for a total of 4K colors.  Maybe
this is a little low, though.  What do you all think?
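
To put a number on it: 4 bits per component gives 16 levels, so adjacent
displayable levels differ by 255/15 = 17 code values, about 6.7% of full
scale, and with dithering neighboring pixels never differ by more than one
such step per component.  A trivial C sketch of the packing, just to pin
down what "4K colors" means here:

    /* Truncate 8-bit R, G, B to 4 bits each: 16*16*16 = 4096 colors. */
    unsigned short rgb12(unsigned char r, unsigned char g, unsigned char b)
    {
        return (unsigned short)(((r >> 4) << 8) | ((g >> 4) << 4) | (b >> 4));
    }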

jgk@osc.COM (Joe Keane) (11/22/90)

In article <2928@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.com (bill davidsen)
writes:
>  I'm not convinced that you need 24 bits of color for the memory,
>either. Systems like the VGA which have a large palette and a limited
>number of selections work very well. If you look at the output of a 24
>bit color scanner scanning quality photographs, you rarely find an image
>which doesn't map into 256 colors nicely. Very rarely.

In article <1990Nov19.195042.19240@imax.com> dave@imax.com (Dave Martindale)
writes:
>I guess it depends on what you're doing.  This certainly isn't true when
>dealing with "photographic-quality" images.  When digitizing transparencies
>or negatives, 24 bits is clearly not enough - I can show you images with
>ugly banding artifacts due to quantization in the dark areas of the image.
>
>Even digitizing at 36 bits (12 bits/component) and then storing 8 bits
>of the logarithm of intensity is not enough in some circumstances.

This is true, but I think you're talking about a slightly different problem.
Quantization of display colors is a small problem, since you can use dithering
to get what you want.  Quantization in scanning is a more serious problem,
since once it's there you can't get rid of it.  One way to avoid the problem
is to add some noise to the sample values before they are quantized.  I'm not
happy about adding noise to my data, but it does work.
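
For what it's worth, the standard form of that trick looks something like
this C sketch: add half a quantization step of uniform noise before rounding,
so the banding turns into fine-grain noise that's uncorrelated with the
signal.  (Samples normalized to [0,1] are an assumption here; this is the
idea, not anyone's scanner firmware.)

    #include <stdlib.h>

    /* Quantize a sample in [0,1] to nbits, with +/- half an LSB of
     * uniform noise added first so the error decorrelates from the
     * signal and dark-area banding becomes unstructured grain. */
    int quantize_dithered(double sample, int nbits)
    {
        int    top   = (1 << nbits) - 1;
        double noise = (double)rand() / RAND_MAX - 0.5;
        long   q     = (long)(sample * top + noise + 0.5);
        if (q < 0)   q = 0;
        if (q > top) q = top;
        return (int)q;
    }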

>I suspect we have different definitions of "acceptable".  Mine is
>"you can't see any artifacts due to the transfer from film to digital".
>Yours may be "it looks pretty good".  This may be adequate for
>most people dealing with images, but it certainly isn't good enough for
>everyone.

I'm sure most of you have been in stores and seen the various demo images they
use to show off computers.  The quality is so good they look almost like
photographs.  But if you look at the display specifications, they may use only
256 colors for the whole image.  How do they do it?  They certainly don't just
take a 256-color scanner and copy the output pixel by pixel.  They scan the
picture at high resolution, and then spend lots of CPU time on a good dithering
algorithm.  They also carefully select the palette to minimize the errors in
dithering.
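
The dithering half of that is easy to sketch.  Here is one-dimensional error
diffusion in C (Floyd-Steinberg is the two-dimensional version of the same
idea); the grey-level palette and the names are just for illustration:

    #include <math.h>

    /* Snap each pixel to the nearest palette level and carry the
     * residual error to the next pixel, so the running average over
     * a span of pixels tracks the input values. */
    void diffuse_row(const double *row, int width,
                     const double *palette, int ncolors, int *out)
    {
        double err = 0.0;
        int x, i;

        for (x = 0; x < width; x++) {
            double want = row[x] + err;
            int    best = 0;
            for (i = 1; i < ncolors; i++)
                if (fabs(palette[i] - want) < fabs(palette[best] - want))
                    best = i;
            out[x] = best;
            err    = want - palette[best];
        }
    }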

>I currently use two monitors - 1600x1200x1 monochrome for editing, and
>1024x768x30 colour for image display.  This seems like a pretty good
>compromise for the moment - high resolution and fast drawing for text,
>while colour images appear much more slowly but with excellent quality.

This is a good combination.  I think that for a given technology, monochrome
monitors will always be sharper than color monitors.  Too bad you can't have
one monitor which switches between the two.