[comp.sys.amiga] 24/32 Bit Color

a45@mindlink.UUCP (Ken Cooper) (10/17/90)

Internet: johnhlee@cs.cornell.edu writes:

>Of course, 32 bit graphics vs. 24 bits won't matter the slightist bit to
>anyone except mostly computer graphics and image processing people.  I
>doubt that most professional video production people will ever need it.
>It makes no difference to 95% of the common PC users out there, despite
>what Marc Bennett says.  :-)

------------------

A "24 bit" adapter allocates 8 bits per color (256 levels) for a total possible
number of colors of 16,777,216. A "32 bit" adapter allocates the same number of
levels per color, but adds an extra 8 bits (256 levels) of ALPHA CHANNEL. Alpha
channel can be thought of as storage for a soft-edged MASK, which can be used
to define what part of the image is or is not background. By allocating 256
levels for the mask or alpha channel, the mask can have 256 levels of
transparency. This is important when attempting to smoothly compose images and
for creating high quality effects.

The above was taken from the recently released version of "Hi-Res QFX", a
collection of 'image processing' functions. It works with Truevision's Targa or
Vista boards and can handle any image size up to 8k by 8k. I have a 32 bit
Vista board, and after slowly learning to use this 'alpha' channel I am finding
it to be a very powerful imaging tool. Please, for your own peace of mind, if
you have a choice between a 16, 24 or 32 bit board, take the 32!
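
To make the alpha channel idea concrete, the basic "over" blend works roughly
like this (a minimal C sketch with made-up names, not code from QFX or the
Vista software):

/* Blend a foreground pixel over a background pixel using an 8-bit alpha
 * (0 = fully transparent, 255 = fully opaque).  Names and layout are
 * invented for illustration only.
 */
static unsigned char blend_channel(unsigned char fg, unsigned char bg,
                                   unsigned char alpha)
{
    /* result = fg*a + bg*(1-a), with a scaled to the 0..255 range */
    return (unsigned char)((fg * alpha + bg * (255 - alpha) + 127) / 255);
}

void composite_over(const unsigned char fg[3], const unsigned char bg[3],
                    unsigned char alpha, unsigned char out[3])
{
    int i;
    for (i = 0; i < 3; i++)          /* blend R, G and B separately */
        out[i] = blend_channel(fg[i], bg[i], alpha);
}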

--
  Too much light and we are blinded; too much darkness and we are lost.
+----------------------------------------------------------------------+
| Ken Cooper   KenCoopera45@mindlink.UUCP    COMPUSERVE: 73627,2334    |
+----------------------------------------------------------------------+

byrne@muppet.dnet.ge.com (10/18/90)

I think I need some education.  I was under the (false?) assumption that the
human eye could not distinguish 16 million colors (24 bit).  I assumed 24 bits
was chosen as the number because it is an even multiple of a byte given the 3
color components (RGB).  Since most displays don't have 16 million pixels, this
gives you a great big palette.  But why 32 bit?  Is 32 bit color used because
it is the word size of most of the popular high-performance microprocessors
(i.e., 68020+)?  I don't see it buying you anything from a color standpoint,
because who could distinguish adjacent colors from either a 24 or 32 bit
palette?  Does it help in shading, or is it just for data movement speed?
It seems like overkill.  Thanks,

                        -FB

gilgalad@caen.engin.umich.edu (Ralph Seguin) (10/18/90)

I believe that the human eye can distinguish between 7 million
different colors.  24 bits is just convenient.  Generally the
extra bits (say in 32 bit color, or even 48 or 56 bit color)
are for special effects, or sometimes for providing better accuracy
when doing color selection.

			See ya, Ralph


gilgalad@dip.eecs.umich.edu       gilgalad@caen.engin.umich.edu

Ralph Seguin		| "You mean THE Zaphod Beeblebrox?"
536 South Forest	|
Apartment 915		| "No.  Haven't you heard, I come in six packs!"
Ann Arbor, MI 48104	|
(313) 662-4805

johnhlee@bass.cs.cornell.edu (John H. Lee) (10/18/90)

In article <33786@nigel.ee.udel.edu> byrne@muppet.dnet.ge.com writes:
>
>I think I need some education.  I was under the (false?) assumption that the
>human eye could not distinguish 16 million colors (24 bit).  I assumed 24 bits
>was decided as number because it was an even multiple of a byte given the 3
>color components (RGB).  Since most displays don't have 16 million pixels, this
>gives you a great big palette.  But why 32 bit?  Is 32 bit color used because
>it is the word size of most of the popular high performance microprocessors
>(i.e 68020+)?  I don't see it buying you anything from a color stand point
>because who could distinguish adjacent colors from either 24 or 32 bit
>palettes?  Does it help in shading or is it just for data movement speed?
>Seems like overkill.  Thanks,
>
>                        -FB

True, the human eye cannot distinguish 16 million colors individually, but
it can distinguish the difference in shades when they are placed right next to
each other, even when 24 bits are used (8 bits per RGB) and the shades differ
by only one bit.  Human vision enhances contrast at edges, causing a phenomenon
called Mach Bands (I think that's how it's spelled) where the lighter side
of an edge seems brighter and the darker side darker.  When the patches of
color are viewed separately, you can't tell the difference.

This is especially a problem when trying to do shading, like for the sky
from a dark horizon to a brighter zenith, where even with 24 bit resolution
people can see bands of different shades.  32 bit resolution would help
eliminate this.  Why 32 bits?  Because it's a nice multiple of 8 bit bytes,
and most processors these days (not just the 680x0 family) handle 32 bit
longwords as a natural data type.  Too bad it's not a nice multiple of 3.
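
Here's a quick back-of-the-envelope sketch of where those bands come from (the
screen height and shade range are numbers I made up for illustration):

#include <stdio.h>

/* Shade a column of "sky" from a dark horizon to a bright zenith and
 * count the distinct 8-bit levels.  With fewer levels than rows, each
 * level covers several rows, and Mach banding makes the strips visible.
 */
int main(void)
{
    int rows = 480;                      /* hypothetical screen height */
    double dark = 40.0, bright = 200.0;  /* hypothetical shade range   */
    int r, prev = -1, levels = 0;

    for (r = 0; r < rows; r++) {
        double v = dark + (bright - dark) * r / (rows - 1);
        int q = (int)(v + 0.5);          /* quantize to 8 bits */
        if (q != prev) { levels++; prev = q; }
    }
    printf("%d rows, %d distinct levels -> about %.1f rows per band\n",
           rows, levels, (double)rows / levels);
    return 0;
}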

Of course, 32 bit graphics vs. 24 bits won't matter the slightest bit to
anyone except mostly computer graphics and image processing people.  I
doubt that most professional video production people will ever need it.
It makes no difference to 95% of the common PC users out there, despite
what Marc Bennett says.  :-)

-------------------------------------------------------------------------------
The DiskDoctor threatens the crew!  Next time on AmigaDos: The Next Generation.
	John Lee		Internet: johnhlee@cs.cornell.edu
The above opinions are those of the user, and not of this machine.

jjfeiler@nntp-server.caltech.edu (John Jay Feiler) (10/18/90)

byrne@muppet.dnet.ge.com writes:
>I think I need some education.  I was under the (false?) assumption that the
>human eye could not distinguish 16 million colors (24 bit).  I assumed 24 bits
>was decided as number because it was an even multiple of a byte given the 3
>color components (RGB).  Since most displays don't have 16 million pixels, this
>gives you a great big palette.  But why 32 bit?  Is 32 bit color used because
>it is the word size of most of the popular high performance microprocessors
>(i.e 68020+)?  I don't see it buying you anything from a color stand point
>because who could distinguish adjacent colors from either 24 or 32 bit
>palettes?  Does it help in shading or is it just for data movement speed?
>Seems like overkill.  Thanks,

>                        -FB

The human eye can only distinguish about 4 million colors, but the colors are
not distributed evenly across the RGB spectrum.  It is much easier to add extra
bitplanes, and have more colors than we can distinguish, than it is to have
a nonlinear bits-to-colors conversion.  All one needs is 24 bits, and we
have all the colors we could want.  Usually, if a system has more than 24
bits, they are used for text overlay, shadowing, blurring, double-buffering,
and a gadzillion of other fun graphics things.  I have heard of systems that
actually have as many as 268 bitplanes!!

	John (not an expert) Feiler

xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) (10/18/90)

In article <33786@nigel.ee.udel.edu> byrne@muppet.dnet.ge.com writes:

>I think I need some education.  I was under the (false?) assumption that the
>human eye could not distinguish 16 million colors (24 bit).  I assumed 24 bits
>was decided as number because it was an even multiple of a byte given the 3
>color components (RGB).  Since most displays don't have 16 million pixels, this
>gives you a great big palette.  But why 32 bit?  Is 32 bit color used because
>it is the word size of most of the popular high performance microprocessors
>(i.e 68020+)?  I don't see it buying you anything from a color stand point
>because who could distinguish adjacent colors from either 24 or 32 bit
>palettes?  Does it help in shading or is it just for data movement speed?
>Seems like overkill.  Thanks,
>
>                        -FB

Well, color starts to get pretty subtle at 7 bits per gun, but some people
can still see Mach bands (a kind of color artifact) in 7/21 bit color, so
mostly the human eye is described as having about 7.5 bits of color
sensitivity; some folks need 7, some need 8 bits.

As to 32 bits, most implementations that use bits beyond 24 either use the
extra bits for overlay line graphics that don't mess with the underlying
color, or else for "alpha" data, a kind of "percent of pixel in use" item
that lets some really nice translucency effects be done when images are
overlaid on one another in composite pictures.  Implementations using at
least 48 bits are in common use in high end systems.  I forgot -- a third
use is to store z-buffer distance in the extra bits, to do hidden-surface
removal per pixel, though in this case, 8 extra bits is probably not enough.
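
For the z-buffer case, the per-pixel test is about this simple (a rough C
sketch; the struct layout and the 8-bit depth field are assumptions for
illustration only):

/* Z-buffered pixel write: only store the new color if it is nearer than
 * what is already there.  As noted above, 8 bits of depth is pretty
 * coarse for real scenes.
 */
typedef struct {
    unsigned char r, g, b;   /* 24 bits of color                 */
    unsigned char z;         /* depth kept in the "extra" 8 bits */
} Pixel32;

void plot(Pixel32 *fb, int width, int x, int y,
          unsigned char r, unsigned char g, unsigned char b, unsigned char z)
{
    Pixel32 *p = &fb[y * width + x];
    if (z < p->z) {                      /* smaller z = closer */
        p->r = r;  p->g = g;  p->b = b;  p->z = z;
    }
}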

If that wasn't clear enough, send email and I'll try harder; I'm listening
to the Series with most of my attention -- fifth inning, 4-3 A's, just now.

Kent, the man from xanth.
<xanthian@Zorch.SF-Bay.ORG> <xanthian@well.sf.ca.us>

zerkle@iris.ucdavis.edu (Dan Zerkle) (10/18/90)

In article <33786@nigel.ee.udel.edu> byrne@muppet.dnet.ge.com writes:
>
>I think I need some education.  I was under the (false?) assumption that the
>human eye could not distinguish 16 million colors (24 bit).

More like about 14 million.  I don't have the exact number handy.  It
depends on the person.  You can, however, distinguish about 116 shades
of pure yellow.  You get 256 with 24-bit color....

>I assumed 24 bits
>was decided as number because it was an even multiple of a byte given the 3
>color components (RGB).

Very good.  Go to the head of the class.

>Since most displays don't have 16 million pixels, this
>[gives you a great big pallette.  But why 32-bit?]  (whoops, dz messed up)

The extra eight bits are used for information besides color.  For
example: on a NeXT, you get graphical objects floating around the
screen.  Sometimes, one object can be "in front" of another.  In
this case, you want to know the depth of any particular
object, to know which one goes in front.  Also, you want to know the
transparency of any particular pixel.  You can actually move a
"translucent" icon in front of some other thing on your screen and see
the other thing partially obscured.  I hear you can drive a car across
your screen and see stuff through the windows partially distorted
colorwise....  

             Dan Zerkle  zerkle@iris.ucdavis.edu  (916) 754-0240
           Amiga...  Because life is too short for boring computers.

evtracy@sdrc.UUCP (Tracy Schuhwerk) (10/18/90)

From article <33786@nigel.ee.udel.edu>, by byrne@muppet.dnet.ge.com:
> 
  [ Text about human perception of 16 million colors deleted ]
> But why 32 bit?

  The extra planes over 24 could be used as a Z-buffer for 3D object
  rendering.  There are machines on the market that have 80 plane
  graphics (Apollo DN10000 with top of the line graphics hardware),
  but the planes over 24 are used for Z-buffers etc.
 
-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
_______________     /        /                    /  | uunet!sdrc!evtracy
   /    (___    _  /_       /_          _   __   /_/ | evtracy@SDRC.UU.NET
  / .  _____)__(__/ /__/_/_/ /__/_/_/__(/__/ (__/ \  +---------------------
     Structural Dynamics Research Corporation (SDRC) - Milford, Ohio
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

johnhlee@bass.cs.cornell.edu (John H. Lee) (10/18/90)

In article <3568@mindlink.UUCP> a45@mindlink.UUCP (Ken Cooper) writes:
>A "24 bit" adapter allocates 8 bits per color (256 levels) for a total possible
>number of colors of 16,777,216. A "32 bit" adapter allocates the same number of
>levels per color, but adds an extra 8 bits (256 levels) of ALPHA CHANNEL. Alpha
>channel can be thought of as storage for a soft-edged MASK, which can be used
>to define what part of the image is or is not background. By allocating 256
>levels for the mask or alpha channel, the mask can have 256 levels of
>transparency. This is important when attempting to smoothly compose images and
>for creating high quality effects.

Okay people, time for a little review.  Yes, I meant it when I said that
32 bits are allocated for 32 bits of COLOR information, 11 bits for green,
11 bits for red, and 10 bits for blue (the human eye is less sensitive to
blue, so blue usually gets the short end of the byte.)
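
In code, packing such a word might look like this (the field order is my own
choice; the point is only that 11+11+10 fills all 32 bits with color and
leaves no room for alpha):

/* Pack 11-bit green, 11-bit red and 10-bit blue into one 32-bit word. */
typedef unsigned long rgb32;             /* 32 bits on a 68020 compiler */

rgb32 pack_g11_r11_b10(unsigned g, unsigned r, unsigned b)
{
    return ((rgb32)(g & 0x7FF) << 21) |  /* green: bits 21-31 */
           ((rgb32)(r & 0x7FF) << 10) |  /* red:   bits 10-20 */
            (rgb32)(b & 0x3FF);          /* blue:  bits 0-9   */
}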

Alpha channels, depth, etc. do require additional bits, but they are not
counted when counting Bits for Color.  They are modifiers that affect how
the pixels are displayed in the final result and pixel priority, etc.,
but not the actual pixel color stored in a frame buffer.

If you want to include these extra modifiers, then I believe several IRIS
models have 48+ bits per pixel, but only 24 bits of color.

Of course, manufacturers may say 32 bits per pixel but have only 24 bits of
RGB.  They mean 32 bits of Pixel Information.  Not the same, but then these
are marketing specs, no?

-------------------------------------------------------------------------------
The DiskDoctor threatens the crew!  Next time on AmigaDos: The Next Generation.
	John Lee		Internet: johnhlee@cs.cornell.edu
The above opinions are those of the user, and not of this machine.

jnmoyne@lbl.gov (Jean-Noel MOYNE) (10/19/90)

        In practice, 24 bits of color is more than enough for everybody.
Where can you see the difference between colors?  Just do a big shading
from the top to the bottom of the screen, in one color (like from white to
black (-:), and try to catch a point where you can see the difference
between 2 adjacent colors.

    If you see such a point, it's just your mind playing with your eyes!
(-:  If you still really see the difference between 2 colors, then just add
a little dithering and I promise you, all you'll see is a nice smooth
shading!
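
Something like this is all the dithering I mean (a rough C sketch; a real
program would more likely use an ordered dither table than rand(), but the
idea is the same):

#include <stdlib.h>

/* Cheap dithering: add a little random noise to the ideal shade before
 * rounding it to 8 bits, so the band edges get broken up instead of
 * landing on the same row all the way across the screen.
 */
unsigned char dither8(double ideal)            /* ideal shade, 0.0 .. 255.0 */
{
    double noise = rand() / (double)RAND_MAX - 0.5;   /* -0.5 .. +0.5 */
    double v = ideal + noise;

    if (v < 0.0)   v = 0.0;
    if (v > 255.0) v = 255.0;
    return (unsigned char)(v + 0.5);
}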

     24 bits of color info is all you need.  After that, if you have an
alpha channel or some other goodies on your frame buffer, it's a technical
question about producing the images.  A different story.

     JNM

--
These are my own ideas (not LBL's)
" Just make it!", BO in 'BO knows Unix'

palmermg@infonode.ingr.com (Michael G. Palmer) (10/19/90)

In article <33786@nigel.ee.udel.edu> byrne@muppet.dnet.ge.com writes:
>
>I think I need some education.  I was under the (false?) assumption that the
>human eye could not distinguish 16 million colors (24 bit).  I assumed 24 bits
>was decided as number because it was an even multiple of a byte given the 3
>color components (RGB).  Since most displays don't have 16 million pixels, this
>gives you a great big palette.  But why 32 bit?  Is 32 bit color used because
>it is the word size of most of the popular high performance microprocessors
>(i.e 68020+)?  I don't see it buying you anything from a color stand point
>because who could distinguish adjacent colors from either 24 or 32 bit
>palettes?  Does it help in shading or is it just for data movement speed?
>Seems like overkill.  Thanks,
>
>                        -FB

I think that your idea about what the eye can see is OK.  But what can
one do with 32 bit color?  Well, how about 3 bytes (24 bits) for color, and
the other 8 bits for tags, masks, overlay, VLT select, elevation data, or
whatever else might be useful?  ALL of the bits don't have to be color, or
even displayed, to be useful.
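
For example, one possible (completely made-up) layout with a tag byte riding
on top of 24 bits of color:

/* 0xTTRRGGBB: TT is a tag byte the display software can use for overlay
 * select, masking, VLT select, etc.  The layout and the OVERLAY bit are
 * invented for illustration.
 */
#define TAG(p)    (((p) >> 24) & 0xFFUL)
#define RED(p)    (((p) >> 16) & 0xFFUL)
#define GREEN(p)  (((p) >>  8) & 0xFFUL)
#define BLUE(p)   ( (p)        & 0xFFUL)

#define TAG_OVERLAY 0x01UL    /* hypothetical: pixel belongs to the overlay */

int in_overlay(unsigned long pixel)
{
    return (TAG(pixel) & TAG_OVERLAY) != 0;
}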

Michael Palmer, 
in no way speaking for Integraph Corporation

fhwri%CONNCOLL.BITNET@cunyvm.cuny.edu (10/19/90)

As I understand it, the human eye can distinguish about 4,000,000 different
hues. It's just that 24 bits (8 bits for R,G,B each) is well-suited to color
on computers. It provides 16,777,216 hues...
                                                --rw
                                                fhwri@conncoll.bitnet

brianm (Brian Moffet) (10/19/90)

byrne@muppet.dnet.ge.com writes:


>I think I need some education.  I was under the (false?) assumption that the
>human eye could not distinguish 16 million colors (24 bit).  I assumed 24 bits
>was decided as number because it was an even multiple of a byte given the 3
>color components (RGB).  Since most displays don't have 16 million pixels, this
>gives you a great big palette.  But why 32 bit?  Is 32 bit color used because
>it is the word size of most of the popular high performance microprocessors
>(i.e 68020+)?  I don't see it buying you anything from a color stand point
>because who could distinguish adjacent colors from either 24 or 32 bit
>palettes?  Does it help in shading or is it just for data movement speed?

Well, this is an interesting topic in psychobiology.  You see,
with 24 bits of RGB triples, assuming 8 bits each, you have
only 256 levels each of red, green, and blue.

Now, for most applications, this does very well.  However, when you
are trying to model low-light situations, you get into trouble:
you start having fewer colors available.  The same is true for multiply
lit objects where you do not need the full range from black
to full on.  An example of a low-light situation would be a
night-time flight simulator.
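
A quick worked example of the squeeze (the 10% figure is made up):

#include <stdio.h>

/* Scaling the full 0..255 range down to a dim scene leaves only a
 * handful of usable levels, so neighboring shades band visibly.
 */
int main(void)
{
    double brightness = 0.10;                   /* hypothetical dim scene */
    int levels = (int)(255 * brightness) + 1;   /* levels 0..25 -> 26     */

    printf("At %.0f%% brightness only %d of 256 levels remain\n",
           brightness * 100.0, levels);
    return 0;
}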

Toss in what is called Mach banding, which is the eye enhancing contrast
at edges, and there are people out there who can start to distinguish the
difference between green = 57 and green = 58 out of 256.

In conclusion, while 24 bits is really nice, and I doubt that we will see
anything better (cost-wise) in the near future, there are times when it is
not enough.

enough spouting... :-)

brian moffet

jerry@truevision.com (Jerry Thompson) (10/20/90)

All you need is 24 bits for True Color images.  The extra 8 bits are only
used for special effects like blending, crossfading, anti-aliasing, or 
masking.  I know of no company making a 32 bit color board that does not use
the extra 8 bits for effects.

-- 
Jerry Thompson                 |     // checks  ___________   | "I'm into S&M,
"What I want to know is, have  | \\ //   and    |    |    |   |  Sarcasm and
 you ever seen Claude Rains?"  |  \X/ balances /_\   |   /_\  |  Mass Sarcasm."

a708@mindlink.UUCP (Gord Wait) (10/23/90)

You may not be able to 'see' 16 million colors, but your eye can sure pick out
edges where the 'intensity' of one color jumps to the next one in line in
computer graphics!! I used to work for a computer graphics firm and was shown
this effect by the programmers there. This kind of artifact shows up in
graphics where you have a smooth surface gradually shaded to a darker and
darker color. This 'Mach banding' can usually be seen as bands on smooth
objects. Usually dithering (randomly scattering the pixels at the edge of a
color change) was used to mask the effect. That was 24 bit video.
32 bit video usually means a 24 bit system with an extra 8 bits for tricks like
overlay or transparency values that are used by application software for
convenience. There is one other type of '30 bit' video, defined as 4:2:2
digital video by the broadcast world, where the image is defined by a
monochrome signal (10 bits) and two color difference signals (10 bits each),
but most companies doing 4:2:2 ignore the optional 2 bits per channel and use
only 24 bits.....
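
For reference, the monochrome-plus-color-difference split works roughly like
this (CCIR 601 style weights; I'm leaving out the exact scaling and offsets
that real broadcast gear uses):

/* Rough RGB -> Y, Cb, Cr conversion, the general idea behind 4:2:2. */
void rgb_to_ycbcr(double r, double g, double b,    /* inputs in 0.0 .. 1.0 */
                  double *y, double *cb, double *cr)
{
    *y  = 0.299 * r + 0.587 * g + 0.114 * b;   /* luma (monochrome signal) */
    *cb = (b - *y) * 0.564;                    /* blue color difference    */
    *cr = (r - *y) * 0.713;                    /* red color difference     */
}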

Gord Wait

kinks@vax1.acs.udel.EDU (Karl E Aldinger) (11/01/90)

...

A television set, displaying a television signal, is analog.  A computer
monitor, displaying a computer image, is digital.  The best way to
approximate the TV's analog picture is with a larger range of discrete colors.
Obviously, painting a 2D image with 16 million colors is overkill, but for
rendering 3D images, 24-bit color is still not overkill.  The reason a television
doesn't have those awful aliasing lines is that the dithering is inherent
in the generation of the image.  To get an equivalent picture we would need
analog raster images, but that's not how a computer likes to remember things
(remember, our image is always stored digitally unless it's on tape).  This means
that no matter how many colors the eye can detect (I have normal vision and
can barely tell the difference between HAM colors), the creation of lifelike,
non-surrealistic images depends on huge numbers of colors.

							Karl Aldinger