[comp.graphics] Psycho Graphics

raisa@hila.hut.fi (Olli Räisä) (02/07/91)

In article <2899@charon.cwi.nl> edwin@cwi.nl (Edwin Blake) writes:

>   I think we need a new catch phrase: Subjective Graphics (!), to emphasize
> the human orientation of the subject.

I prefer "Psycho Graphics". Are there any other suggestions? 
(It may be better to discuss the terminology well in advance
 than to suffer from bad mistakes, such as "distributed ray
 tracing", in the future :-) )

Anyway, criticism of physics as the sole basis for realistic 
visualization sounds justified. Research in this field has heavily
concentrated on illumination models and geometric modelling. Some
work has been done on slightly more advanced camera models than the
standard "pinhole" (e.g. distributed ray tracing). Our relationship
with the camera (and the perspective projection) may well resemble
the relationship of early car designers to horse-drawn carriages.
 
Olli Raisa

eugene@nas.nasa.gov (Eugene N. Miya) (02/12/91)

In article <1991Feb7.093627.3734@santra.uucp> raisa@hila.hut.fi
(Olli Räisä) writes:
>In article <2899@charon.cwi.nl> edwin@cwi.nl (Edwin Blake) writes:

>>   I think we need a new catch phrase: Subjective Graphics (!), to emphasize
>> the human orientation of the subject.

How about "good-enough" synthesis?

>I prefer "Psycho Graphics". Are there any other suggestions? 
>(It may be better to discuss the terminology well in advance
> than to suffer from bad mistakes, such as "distributed ray
> tracing", in the future :-) )

How about just plain "human factors?"
Why does this remind me of Tony Perkins?.......  Or Mel Brooks?

>Anyway, criticism of physics as the sole basis for realistic 
>visualization sounds justified.

Er? A?  Pray tell, what other bases would you suggest?

Simple texture mapping is fine for static images.

I found an interesting book the other day while visiting a friend (a
physicist) in Santa Cruz.  Since Bill is giving a talk locally on
"Exploratory Computer Graphics" I figured he could borrow it.  The book
is on the measurement and characterization of images and pictures.
At least a third of the book was devoted to color, spatial relations,
FFTs, and the like.  I'll post a reference after he gives his Thursday talk.

I think we have to develop a group of people with a particular set of
critical eyes.  In the remote sensing community, they are called
photo-interpreters.  An unusual group of people.  For some classes of
user, I call them "computational test pilots."  Like being a plane builder,
you might not be the expert to fly it.  There can also be naive audiences,
and then the in between people, the "regular pilots."

Consider something I pointed out to Charles Harris (Hi Charles, yes, I owe
you some text).  Look at the cover of Pixel.  That's supposed to be the
universe there.  That's the higher luminal light source? 8^)  Do you see
the picture?  I suspect a few of your do, and many don't.  Similar
problems occur on the subatomic end of the scale.  But in the middle,
you get "good-enough graphics."

I'll post the book's reference when I get it back after Friday.

Timmy Leary was also mentioned in the same breath as being at SIGGRAPH;
I think he attends the Hackers Conference as well.

--e. nobuo miya, NASA Ames Research Center, eugene@orville.nas.nasa.gov
  {uunet,mailrus,other gateways}!ames!eugene
  AMERICA: CHANGE IT OR LOSE IT.

honig@ics.uci.edu (David Honig) (02/14/91)

In article <1991Feb7.093627.3734@santra.uucp> raisa@hila.hut.fi (Olli Räisä) writes:
>In article <2899@charon.cwi.nl> edwin@cwi.nl (Edwin Blake) writes:
>
>>   I think we need a new catch phrase: Subjective Graphics (!), to emphasize
>> the human orientation of the subject.
>
>I prefer "Psycho Graphics". Are there any other suggestions? 
>(It may be better to discuss the terminology well in advance
> than to suffer from bad mistakes, such as "distributed ray
> tracing", in the future :-) )

The term "psychophysics" refers to the objective study of "subjective"
perception ---e.g., the sensitivity of the eye to luminance, color, motion,
etc.  The term is about a century old, and it can easily accommodate
the measurement of more complex perceptual functions.

Indeed, established psychophysics (from the sensory psychology area) has
been used by computer graphicists et al. ---e.g., the MTF of the human
eye tells you that you don't need as many bits for higher or lower spatial
frequencies.  (Of course, the MTF shifts as luminance varies; it's not
linear, you know....)
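The MTF trick can be sketched concretely. The contrast-sensitivity fit below uses the published Mannos-Sakrison (1974) constants, but the bit-allocation rule is purely an invented illustration of the idea, not anyone's production scheme:

```python
import math

def csf(f):
    # Mannos-Sakrison (1974) fit to human contrast sensitivity,
    # f in cycles/degree.  Constants are from the published fit.
    return 2.6 * (0.0192 + 0.114 * f) * math.exp(-(0.114 * f) ** 1.1)

def bits_needed(f, max_bits=8):
    # Toy allocation rule (illustrative only): scale bit depth by
    # sensitivity relative to the CSF's peak, never below 1 bit.
    peak = max(csf(x / 10.0) for x in range(1, 600))
    return max(1, round(max_bits * csf(f) / peak))

# Sensitivity peaks at a few cycles/degree and falls off at both
# ends, so fewer bits suffice at very low and very high frequencies.
for f in (0.5, 4.0, 30.0):
    print(f, bits_needed(f))
```

This is exactly the "fewer bits at higher and lower spatial frequencies" argument in numeric form; the mid-band around the CSF peak keeps nearly the full bit budget.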

-- 
David Honig
"Tyranny and despotism can be exercised by many, more rigorously,
more vigorously, and more severely than by one." ---Andrew Jackson

raisa@hila.hut.fi (Olli Räisä) (02/14/91)

In article <1991Feb12.013754.5320@nas.nasa.gov> eugene@wilbur.nas.nasa.gov
(Eugene N. Miya) writes:
> Why does this remind me of Tony Perkins?.......  Or Mel Brooks?
> ... Er? A?  Pray tell, what other bases would you suggest?

I wonder if I dare to mention psychophysics. With this logic it is
probably associated with Donald Duck.

well...here is a *very* simple example called the moon illusion:
The moon (or the sun) appears to be larger when it is near the
horizon. It is a perceptual effect that has nothing to do with
physics. If you photograph the sunset with a normal 50mm lens,
the result is guaranteed to be bad. You really have to magnify
a lot, and to do that the foreground of the scene must be empty.
I am just wondering if there is a reasonable way to take into
account this effect (among others) in 3D-computer graphics.
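Some back-of-the-envelope arithmetic (figures approximate) shows why the 50mm shot fails:

```python
import math

# The full moon subtends about 0.52 degrees.  A 50mm lens on 35mm
# film (36mm-wide frame) covers 2*atan(18/50) degrees horizontally.
moon_deg = 0.52
fov_deg = 2 * math.degrees(math.atan(18.0 / 50.0))

# Fraction of the frame width the moon occupies, and its size on
# the negative: roughly 1.3% of the frame, about half a millimetre.
fraction = moon_deg / fov_deg
image_mm = 50.0 * math.tan(math.radians(moon_deg))

print(round(fov_deg, 1), round(100 * fraction, 1), round(image_mm, 2))
```

The perceived moon near the horizon looks far larger than that half-millimetre dot, which is the gap between the physical image and the percept.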

Olli Raisa

mgreen@cs.toronto.edu (Marc Green) (02/15/91)

All this talk of "Psycho Graphics" shows that many computer graphics
people badly need a basic perception course. Why? Because there is
very little correlation between physics and perception.

Perception is psychological while images are physical. The
relationship between the two is complicated and not deterministic. For
example, it is easy to make physically identical lights look different
and physically different lights look identical. The number of quanta
in a light and the wavelength play a surprisingly small role in the
way a light is perceived. The perception of motion, depth, size, etc.
likewise cannot be readily predicted from the physical properties of
an image alone. The mapping from images to perception cannot be
predicted without knowing a great deal about the visual system.

The basic fact to remember is that our perceptions are manufactured in
our heads and depend at least as much on the way our visual systems are
wired together as on the retinal image. It is wasted effort to spend
time worrying about complicated camera models and the like. For
example, fancy models of motion blur simply are unnecessary. The real
question is not how to make computer graphics look like cameras, but
why the blur improves motion perception in the first place. In fact,
quite a bit of psychophysical work has been done in this area if
anybody took the time to look for it in Vision Research, Perception,
or any of the other journals concerned with human vision.

Marc Green

eugene@nas.nasa.gov (Eugene N. Miya) (02/15/91)

In article <1991Feb12.013754.5320@nas.nasa.gov> eugene@wilbur.nas.nasa.gov,
I asked (regarding bases other than physics):
>> Pray tell, what other bases would you suggest?

>I wonder if I dare to mention psychophysics. With this logic it is
>probably associated with Donald Duck.
>
>well...here is a *very* simple example called the moon illusion:

Sure, this is acceptable, but it's still got physics. (not just the word.)
I think here my bottom line is:
	It's dependent on your application of imagery.
I acknowledge that generating synthetic images has entertaining uses.
That's why I gave the texture mapping example.  For fun, you only need to
mimic a cloud.  But for non-entertainment uses of graphics (mapping,
simulation, analysis) we have to go beyond that.  I asked for
clarification: what other models?  You have to be careful about
illusions; this is why magic is relegated to entertainment these days.
You can't build a lot of work on illusions (generalization).

In article <91Feb14.110238est.7256@neat.cs.toronto.edu>
mgreen@cs.toronto.edu (Marc Green) writes:
>All this talk of "Psycho Graphics" shows that many computer graphics
>people badly need a basic perception course. Why? Because there is
>very little correlation between physics and perception.

Agreed that graphics people need courses in perception.  This is
why I gave the example "human factors."  Some of our best attended
past local SIGGRAPH meetings have been on topics like color.

>Perception is psychological while images are physical. The
>relationship between the two is complicated and not deterministic. For

Music, to use another entertainment example, is the same way.
Not to dump too hard, but there is an argument for Renaissance imaging
teams to include artists.  I think this is wrong.  You get one artist's
perceptions.  That artist really needs to be a perceptual psychologist,
and even that's not enough.  Do not misinterpret me.  I do like art, and
I do like abstract art which makes use of lots of illusions, but its
application, use, and value are limited.

>The basic fact to remember is that our perceptions are manufactured in
>our heads and depend at least as much on the way our visual systems are
>wired together as on the retinal image.

True.

> It is wasted effort to spend
>time worrying about complicated camera models and the like. For
>example, fancy models of motion blur simply are unnecessary. The real
>question is not how to make computer graphics look like cameras, but

To come to Olli's defense (this is why I suggested removing the "very
little"): people (humans) are involved in systems like flying airplanes
and driving cars, where it is useful and important to know these things.
The effort is not completely wasted.  If we are only going to make
cartoons, this is acceptable.  I suggest visiting an airline sometime
and asking whether you can sit in a flight simulator cockpit.

Remember: I only asked for other models.  I still get physics.

--e. nobuo miya, NASA Ames Research Center, eugene@orville.nas.nasa.gov
  {uunet,mailrus,other gateways}!ames!eugene
  AMERICA: CHANGE IT OR LOSE IT.
  In the book about Land's Polaroid: I think Sam Goldwyn was shown a movie
  shot in 3-D.  Everyone who saw it "had their socks knocked off."  Goldwyn
  walked out; it was no big deal to him.  Months later Land, America's
  second most patented inventor, learned that Goldwyn had a glass eye.

  Blinn points out the need to occasionally look at his quality graphics
  using a B&W screen.  This is a good idea.

  We are developing sophisticated color graphics systems for scientific
  visualization.  For better or worse, most of the scientific community is
  male, and 1/6 of males have some degree of color blindness.

uselton@nas.nasa.gov (Samuel P. Uselton) (02/15/91)

In article <91Feb14.110238est.7256@neat.cs.toronto.edu> mgreen@cs.toronto.edu (Marc Green) writes:
>All this talk of "Psycho Graphics" shows that many computer graphics
>people badly need a basic perception course. Why? Because there is
>very little correlation between physics and perception.
>
I agree that studying perception is useful for people making images,
including computer generated images.  "Little correlation" seems a little
too strongly worded.  Yes, (perception of) colors shift based on the
background.  Yes, sizes seem to vary based on color.  Yes, yes, yes.  But
it seems to me that understanding the physics of making the images, in
order to make images without artifacts of the creation method, is needed.

>Perception is psychological while images are physical. The
>relationship between the two is complicated and not deterministic. For
>example, it is easy to make physically identical lights look different
>and physically different lights look identical. The number of quanta
>in a light and the wavelength play a surprisingly small role in the
>way a light is perceived. The perception of motion, depth, size, etc.
>likewise cannot be readily predicted from the physical properties of
>an image alone. The mapping from images to perception cannot be
>predicted without knowing a great deal about the visual system.
>
Even worse, perception is also subjective (by which I mean it varies from
viewer to viewer even with the same stimulus).

>The basic fact to remember is that our perceptions are manufactured in
>our heads and depend at least as much on the way our visual systems are
>wired together as on the retinal image. 

Different images are more likely to produce different perceptions.  Identical
images at least have a CHANCE of producing the same perception.  Influencing
the CONTENT of the perception does indeed require a great understanding of
visual systems.  And psychology.  And viewing context.  And...
(That's why movie making is an art.  :-) ) 

>It is wasted effort to spend
>time worrying about complicated camera models and the like. 

This statement I take issue with.  If we can generate images from mathematical
models by use of a computer which are identical to images of real objects as
recorded by a camera and played back appropriately, we have accomplished
several things.  We have experimentally verified our understanding of the
physics producing the image.  We have a means for producing images of objects
that may be difficult or impossible to actually photograph.  We have a means
to present the stimulus to the observer in EXACTLY the same way.  Do you
claim that the perception of images will differ (for a particular observer)
even if the stimulus is the same?  If the observer has no information about
whether the image is "real"?

>For
>example, fancy models of motion blur simply are unnecessary. 

Whether something is unnecessary depends on the application.  They may be 
unnecessary FOR WHAT YOU DO and still be a reasonable thing for others to
work on.

>The real
>question is not how to make computer graphics look like cameras, but
>why the blur improves motion perception in the first place. 

There is not ONE real question.  Being able to make the blur, in various ways,
may in fact allow further study of the perception issues!
Please don't be so quick to condemn work, just because it is different than
what you choose to do.

>In fact,
>quite a bit of psychophysical work has been done in this area if
>anybody took the time to look for it in Vision Research, Perception,
>or any of the other journals concerned with human vision.
>
>Marc Green

I'm just beginning to nibble at some of this material.  I agree it is 
worthwhile.  I'm just uncomfortable seeing and hearing condemnations of
other areas (close to home).

Sam Uselton		uselton@nas.nasa.gov		ex-prof
employed by CSC		working for NASA (Ames)		speaking for myself

mgreen@cs.toronto.edu (Marc Green) (02/15/91)

>From: uselton@nas.nasa.gov (Samuel P. Uselton)

>This statement I take issue with.  If we can generate images from mathematical
>models by use of a computer which are identical to images of real objects as
>recorded by a camera and played back appropriately, we have accomplished
>several things.  We have experimentally verified our understanding of the
>physics producing the image.  

True. You can produce the desired image. But that says little about
how the image will be perceived. The question is not whether physics
can be used to create a particular image. The question is whether
physics can tell you what image you should create in the first place.
And the answer is that physics alone cannot. The eye is not a piece of
film which simply records images.

>We have a means for producing images of objects
>that may be difficult or impossible to actually photograph.  We have a means
>to present the stimulus to the observer in EXACTLY the same way.  Do you
>claim that the perception of images will differ (for a particular observer)
>even if the stimulus is the same?  If the observer has no information about
>whether the image is "real"?

It depends on what you mean by stimulus. A patch of light will
certainly look very different at different times depending on
background, adaptation, etc.; simultaneous brightness contrast, for
example. Then there are numerous reversible and ambiguous figures.
There are moving objects which sometimes look stationary and stationary
objects which sometimes appear to move. The list is endless.

>Whether something is unnecessary depends on the application.  They may be 
>unnecessary FOR WHAT YOU DO and still be a reasonable thing for others to
>work on.

I may not have made my point clear about camera models. The point of
camera models is to make images which look like they were generated by
a movie camera. Why? Because the blurred images in movies make motion
appear smoother. Instead of worrying about the rather complicated
calculations necessary to create the camera model, why not try to
_directly_ tailor the computer images to the visual system? Why have
the camera as a middleman in the model? If blurring images makes motion
smoother, then find out why and use that information in creating more
realistic images. The goal, after all, is more realistic images, not
the achievement of some mathematical fidelity among different image
media.

If you know the properties of the visual system, you create images
which are a good match. This results in both better and cheaper
computer graphics. For example, the psychophysical literature on
apparent motion reveals a lot of tricks that could be used to reduce
the rate at which animated scenes need be updated.

Marc Green

stam@dgp.toronto.edu (Jos Stam) (02/15/91)

Samuel P. Uselton writes:
>Marc Green writes:
>>
>>[...]
>>
>>It is wasted effort to spend
>>time worrying about complicated camera models and the like. 
>
>This statement I take issue with.  If we can generate images from mathematical
>models by use of a computer which are identical to images of real objects as
                                       ^^^^^^^^^
>recorded by a camera and played back appropriately, we have accomplished
>several things.  We have experimentally verified our understanding of the
>physics producing the image.  We have a means for producing images of objects
>that may be difficult or impossible to actually photograph.  We have a means
>to present the stimulus to the observer in EXACTLY the same way.  Do you
                                            ^^^^^^^
>claim that the perception of images will differ (for a particular observer)
>even if the stimulus is the same?  If the observer has no information about
>whether the image is "real"?
>

How can you ever be certain that the stimuli are the same? You would have
to check this at the quantum level. Quantum events are all influenced by
the observer, hence it isn't possible to objectively certify the identity
of the photograph and the simulation...

Jos

hr3@prism.gatech.edu (RUSHMEIER,HOLLY E) (02/15/91)

I don't have posting privileges, but the discussion you have
been carrying on interests me a lot.
My view is we need a model of the radiation (i.e. the visible
light, the "physics" part people refer to) and a model of the
sensor. I make some images for people to look at, some images
for a simulation of an infrared sensor to look at, and some
for the simulation of a camera in a robotics system to look
at. Having a sensor model, I can determine the accuracy I need
in my radiation model, and avoid chasing down every photon.
People have used psychophysics a lot, e.g. for color and anti-aliasing,
but we need to do more to tie a model of perception into the
radiosity or Monte Carlo solution we compute for the radiation, so we
don't have to compute forever to get little variations that no
one can see.
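A minimal sketch of that stopping rule, with a flat 1% relative tolerance standing in for a real perceptual model and made-up form factors; everything here is illustrative, not an actual radiosity system:

```python
def solve_until_invisible(F, E, rho, jnd=0.01, max_iters=1000):
    # Jacobi iteration for the radiosity system B = E + rho * (F @ B),
    # stopped once no patch changes by more than a relative tolerance.
    # The flat 1% "just noticeable difference" is a stand-in for a
    # real perception model; a proper one would vary with luminance.
    n = len(E)
    B = list(E)
    for it in range(1, max_iters + 1):
        newB = [E[i] + rho[i] * sum(F[i][j] * B[j] for j in range(n))
                for i in range(n)]
        worst = max(abs(newB[i] - B[i]) / max(newB[i], 1e-9)
                    for i in range(n))
        B = newB
        if worst < jnd:
            return B, it
    return B, max_iters

# Two facing patches (made-up form factors): one emitter, one reflector.
F = [[0.0, 0.5], [0.5, 0.0]]
B, iters = solve_until_invisible(F, E=[1.0, 0.0], rho=[0.5, 0.5])
print(iters, [round(b, 3) for b in B])
```

The solver quits as soon as further bounces would change no patch by a visible amount, rather than iterating to machine precision.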
-- Holly Rushmeier
hr3@hydra.gatech.edu

osmoviita@cc.helsinki.fi (02/17/91)

In article <27B9DF69.11406@ics.uci.edu>, honig@ics.uci.edu (David Honig) writes:
> 
> Indeed, established psychophysics (from the sensory psychology area) has
> been used by computer graphicists et al. ---e.g., the MTF of the human
                                                    ^^^^^^^^^^^^^^^^^^^^
> eye tells you that you don't need as many bits for higher or lower spatial
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> frequencies.  (Of course, the MTF shifts as luminance varies, its not
  ^^^^^^^^^^^^
> linear you know....) 

True for stationary images, not for moving or flickering images.
Also, if you change the viewing distance, spatial frequencies change. So it
is not wise to cut bits off if you need high quality images. At least 14
bits per gun are needed to match human vision. So there is already a big
cut in today's "photo-realistic" 24-bit graphics, which have only 8 bits
per gun. It is not yet clear where the limit is, although so many seem to
believe some old, misleading results obtained under simplified conditions.
Indeed, psychophysics has been misused by computer graphicists. So many
computer graphics books tell pure rubbish about human vision.
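The effect of bit depth can be illustrated with a linear-encoding worst case. The 1% Weber fraction used here is only a common rule of thumb; as noted above, the real threshold varies with luminance and motion:

```python
def weber_step(bits, rel_luminance):
    # Relative jump between adjacent codes at a given fraction of
    # full luminance, assuming a *linear* encoding (no gamma).
    code = rel_luminance * (2 ** bits - 1)
    return 1.0 / code

# At 1% of full brightness, adjacent 8-bit codes differ by roughly
# 39%: gross banding.  With 14 bits the step drops below a ~1%
# Weber fraction.  (Display gamma redistributes the codes, which is
# why 8 bits per gun is semi-workable in practice.)
print(round(weber_step(8, 0.01), 3), round(weber_step(14, 0.01), 4))
```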

There are no commercial computer graphics systems good enough to make
psychophysical measurements showing how much data a human being can see
in an image --- as far as I know. But somebody should build a
high-accuracy graphics card around TriQuint Semiconductor's 14-bit,
1 GHz DACs (Computer Design, Feb 1, 1991).

BTW, was it you, David Honig, who asked about people who study ray traced
images by psychophysics? I would be interested if somebody can give me
images calculated with enough accuracy.

Kari Osmoviita				osmoviita@cc.Helsinki.Fi

rbe@yrloc.ipsa.reuter.COM (Robert Bernecky) (02/19/91)

A few people have recently noted the presence of color blindness
(That's colour blindness for the canucks in the audience)
in males. Do you think that explains the existence of so many
ugly neckties?

Bob Bernecky
Snake Island Research Inc.

euaneg@eua.ericsson.se (Nils-Erik.Gustafsson) (02/20/91)

Eugene N. Miya writes:
 
  We are developing sophisticated color graphics systems for scientific
  visualization.  For better or worse, most of the scientific community is
  male, and 1/6 males have some degree of color blindness.

Not to be nit-picking, but it's not quite as bad as 1/6 (17%).
The literature seems to agree upon 6-8% (almost all having a
green/red-deficiency).

Nils-Erik (Gustafsson)
ELLEMTEL Telecom Systems Lab

mgreen@cs.toronto.edu (Marc Green) (02/21/91)

>From: euaneg@eua.ericsson.se (Nils-Erik.Gustafsson)

>>Eugene N. Miya writes:
>>
>>  We are developing sophisticated color graphics systems for scientific
>>  visualization.  For better or worse, most of the scientific community is
>>  male, and 1/6 males have some degree of color blindness.

>Not to be nit-picking, but it's not quite as bad as 1/6 (17%).
>The literature seems to agree upon 6-8% (almost all having a
>green/red-deficiency).
>
>Nils-Erik (Gustafsson)
>ELLEMTEL Telecom Systems Lab

It's not that simple. It depends on whether you are talking about the
truly "color blind" (dichromats and monochromats, who are missing a
pigment or two) or the merely color anomalous (trichromats who have all
three pigments, but in reduced amounts or with shifts in spectral
sensitivity). The 6-8% number includes the anomalous trichromats, I
believe. So the number of truly color blind people is really quite
small. To make life more complicated, the color blind fall into two
groups (and a rare third group).

However, it is also a mistake to think that females are never color
blind. The last I looked, there was a lot of research saying that
females who carry the gene for color blindness often have mixed
patches of normal and abnormal retina. They may pass crude color
screenings (like Ishihara) but still do not have normal color vision.

In short, attempts to take color blindness into account when creating
visual displays are probably hopeless. There are too few people and
they fall into too many different groups, each of which would require
different modifications.

Marc Green
Trent University

sg04@harvey.gte.com (Steven Gutfreund) (02/21/91)

In article <1991Feb20.123009.5269@eua.ericsson.se>, euaneg@eua.ericsson.se (Nils-Erik.Gustafsson) writes:
> Eugene N. Miya writes:
>   We are developing sophisticated color graphics systems for scientific
>   visualization.  For better or worse, most of the scientific community is
>   male, and 1/6 males have some degree of color blindness.
> 
> Not to be nit-picking, but it's not quite as bad as 1/6 (17%).
> The literature seems to agree upon 6-8% (almost all having a
> green/red-deficiency).

Now, as I understand it, color-blind people tend to have a better
sense of depth discrimination (e.g. in WW2 they were used
to detect camouflage; I do not know if they are used this
way in Desert Storm). Questions:

1. Is this something genetic to color-blind individuals?

2. Is it something that could be trained, and then exploited
   by scientific visualization packages?

3. What does it mean to have a heightened sense of depth perspective?
   Does it help in standard computer graphics output (line, shaded
   polygon, texture mapped) diagrams, or is it only exploitable
   when a higher level of visual realism is available?

4. How would you switch from color information to perspective
   information for a color blind person?

5. Should TekCMS do a color test on the user :-)


-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Yechezkal Shimon Gutfreund		 		  sgutfreund@gte.com
GTE Laboratories, Waltham MA			    harvard!bunny!sgutfreund
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=


urlichs@smurf.sub.org (Matthias Urlichs) (02/25/91)

In comp.graphics, article <91Feb20.131305est.6899@neat.cs.toronto.edu>,
  mgreen@cs.toronto.edu (Marc Green) writes:

< In short, attempts to take color blindness into account when creating
< visual displays are probably hopeless. There are too few people and
< they fall into too many different groups, each of which would require
< different modifications.
< 
The guideline most relevant here is from the Apple Human Interface
Guidelines: not everyone has a color screen (some have monochrome), so
you'll have to design your system in such a way that color either isn't
the only source of the information, or can be replaced with other cues.

-- 
Matthias Urlichs -- urlichs@smurf.sub.org -- urlichs@smurf.ira.uka.de     /(o\
Humboldtstrasse 7 - 7500 Karlsruhe 1 - FRG -- +49+721+621127(0700-2330)   \o)/

heffron@falstaff.css.beckman.com (Matt Heffron) (03/01/91)

In <91Feb20.131305est.6899@neat.cs.toronto.edu> mgreen@cs.toronto.edu (Marc Green) writes:

>>From: euaneg@eua.ericsson.se (Nils-Erik.Gustafsson)

>>Not to be nit-picking, but it's not quite as bad as 1/6 (17%).
>>The literature seems to agree upon 6-8% (almost all having a
>>green/red-deficiency).
>>
>>Nils-Erik (Gustafsson)
>>ELLEMTEL Telecom Systems Lab

>It's not that simple. It depends on whether you are talking about the
>truly "color blind" (dichromats and monochromats, who are missing a
>pigment or two) or the merely color anomalous (trichromats who have all
>three pigments, but in reduced amounts or with shifts in spectral
>sensitivity). The 6-8% number includes the anomalous trichromats, I
>believe. So the real number of color blind people is really quite
>small. To make life more complicated, the color blind fall into two
>groups (and a rare third group).

>However, it is also a mistake to think that females are never color
>blind. The last I looked, there was a lot of research saying that
>females who carry the gene for color blindness often have mixed
>patches of normal and abnormal retina. They may pass crude color
>screenings (like Ishihara) but still do not have normal color vision.

>In short, attempts to take color blindness into account when creating
>visual displays are probably hopeless. There are too few people and
>they fall into too many different groups, each of which would require
>different modifications.

However, you CAN try to be aware of the common cases and deal with them,
instead of just assuming it's hopeless!  For example, as Nils-Erik Gustafsson
pointed out, most of the color anomalous cases are red/green deficient (me
for one).  It isn't hard to make sure that, when you have a choice of colors
for something, you avoid putting similar-intensity reds, greens, and browns
together, or putting low-intensity reds, greens, and browns next to black
(or even very dark blue).  I just saw a marketing program (developed for
another division of Beckman under contract) which "highlighted" the letter
of a command by displaying it in dark red instead of black!  It took me
several looks to figure out the color discrimination required to see which
key to press!  (I'm going to inform the division that had the program done
that they have a problem!)

In general, it's not too hard to cover the common cases.  Here are a couple of
guidelines that I use for color selection.
- When possible, use contrast instead of color for differentiation.
- Avoid dark reds, greens, and browns.  Lighter shades seem to be easier to
  discriminate (at least for me).
- Try not to use a set of the same (or similar) colors in multiple combinations.
  (We had a programmer here whose color vision was ENTIRELY differential.  I
  once showed him an orange object in front of a beige wall and he correctly
  said it was orange; when I moved it in front of a medium red chair, he said
  that it now appeared GREEN!)
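Cases like the dark-red-on-black highlight can be screened for mechanically. This sketch uses Rec. 601 luma weights as a crude stand-in for perceived lightness; the threshold of 50 is an arbitrary illustrative choice, not a standard:

```python
def luma(rgb):
    # Rec. 601 luma weights: a rough proxy for perceived lightness,
    # not a real colorimetric model.
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def safe_without_hue(c1, c2, min_delta=50):
    # Accept a color pair only if lightness alone separates it, so
    # red/green-deficient viewers need not rely on hue at all.
    return abs(luma(c1) - luma(c2)) >= min_delta

dark_red  = (139, 0, 0)
black     = (0, 0, 0)
light_red = (255, 120, 120)

print(safe_without_hue(dark_red, black))    # the problem case above
print(safe_without_hue(light_red, black))   # a lighter shade passes
```

This follows the guidelines above: it never asks what hue a viewer sees, only whether contrast carries the distinction on its own.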


--
Matt Heffron                      heffron@falstaff.css.beckman.com
Beckman Instruments, Inc.         voice: (714) 961-3128
2500 N. Harbor Blvd. MS X-11, Fullerton, CA 92634-3100
Cute saying/disclaimer in development.