andrew@ee.su.oz.au (Andrew Ho) (11/19/90)
Hi, I am trying to convert some full color images (RGB) into gray level images (because I hope to get some non-colorful laser printouts after doing the conversion). Are there any algorithms/lookup tables to do the "RGB to gray level" matching? Thanks! andrew@ee.su.oz.au
twdorr01@ulkyvx.BITNET (ThomasD) (11/20/90)
In some article andrew@ee.su.oz.au (Andrew Ho) writes:
> I am trying to convert some full color images (RGB) into
> gray level images (because I hope to get some non-colorful
> laser printouts after doing the conversion).
>
> Are there any algorithms/lookup tables to do the "RGB to
> gray level" matching ?

I'm sure there are better algorithms than the one I am going to suggest, but this is the way I've done it and the results are as good as could be expected, I suppose. Besides, I've been very anxious to post to this newsgroup and this seems a good time.

I write almost exclusively in assembly, but I'll try to explain in a more intelligible language. I think the best way to go about this is with an example. Let's assume your system is set up like mine (for convenience) and each color is defined by a 12-bit RGB value (4 bits per component). If you take each pixel, add up the components of that pixel, divide by three and round up, you'll have a single value that can be mapped back into the Red, Green, and Blue components. I'm sure you realize that if the Red, Green, and Blue components of a pixel all take on a common value, then the color of that pixel will be a grey.

Or (you're trying to print the RGB image on a laser printer, right?), what I did in my laser routine was to use the common value as an index (in conjunction with the X and Y coordinates of the pixel) into a look-up table of bitmap pixels (on and off) that matched the greyscale value as closely as possible. The results were very nice, but I suspect there's a better way to do it. Anybody know?

ThomasD
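The averaging scheme ThomasD describes can be sketched as follows (a minimal sketch, assuming 4-bit components as in his example; the function name is illustrative, not from the post):

```python
def rgb_to_grey(r, g, b):
    # Add up the three 4-bit components (0..15 each), divide
    # by three and round up, as described above.
    total = r + g + b
    return -(-total // 3)   # ceiling division

# Mapping the common value back into all three components
# gives a grey pixel: (grey, grey, grey).
grey = rgb_to_grey(1, 1, 2)
```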
zap@lysator.liu.se (Zap Andersson) (11/20/90)
twdorr01@ulkyvx.BITNET (ThomasD) writes:
>In some article andrew@ee.su.oz.au (Andrew Ho) writes:
>> I am trying to convert some full color images (RGB) into
>>gray level images (because I hope to get some non-colorful
>>laser printouts after doing the convertion).
>>
>> Are there any algorithms/lookup tables to do the "RGB to
>>gray level" matching ?

If I understood your question correctly:

    Gray = 0.39 * Red + 0.50 * Green + 0.11 * Blue...

...says the NTSC spec. I _*NEVER*_ got this to look good, coz to my mind the blue turns out WAY to dark. Perhaps my eyes aren't NTSC standard, right? I use a beefed 0.30 Red + 0.50 Green + 0.20 Blue coz I like it better...

>Or, (you're trying to print the RGB image to a laser printer, right?),
>what I did in my laser routine was to use the common value as an index (in
>conjunction with the X and Y coordinate of the pixel) into a look-up table
>of bit map pixels (on and off) that matched the greyscale value as closely
>as possible. The results were very nice, but I suspect there's a better way
>to do it. Anybody know?
>ThomasD

Converting RGB to Gray is easy, as in the above... converting GRAY to halftone (i.e. black OR white dots like from a laser) ain't so easy... Dithering of sorts can be used; whoa, this is a whole field of science here. You could use patterned dithering, either rastered like in a newspaper, where small dots grow larger as they grow darker, or you could use a dither matrix, where a grey pattern is

      50%          75%          25%

    01010101     01010101     00000000
    10101010     11111111     10101010
    01010101     01010101     00000000
    10101010     11111111     10101010

But a pattern like this often introduces incredibly ugly stripes and moire effects... Well, like I said, it's a science. My best shot at it is to convert the durn thing to PostScript, and thump it in to the printer. That'll leave the problem to the printer (or at least the printer manufacturer ;-)

/Z
--
* * * * * * * * * * * * * * * * * *
*  My signature is smaller than   *
*  yours! - zap@lysator.liu.se    *
* * * * * * * * * * * * * * * * * *
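The "position-keyed dots" idea behind patterned dithering can be sketched with an ordered (Bayer-style) dither; the 2x2 matrix here is a standard choice, not taken from the post, but the 4x4 patterns above work the same way:

```python
# 2x2 Bayer threshold matrix; each cell fires at a different
# grey level, so mid-greys come out as the checkerboard above.
BAYER2 = [[0, 2],
          [3, 1]]

def dither_pixel(grey, x, y):
    # Return 1 for a black dot, 0 for white, given grey 0..255
    # (0 = black). The threshold depends only on pixel position.
    threshold = (BAYER2[y % 2][x % 2] + 0.5) * 255.0 / 4.0
    return 1 if grey < threshold else 0
```

A 50% grey (128) yields the alternating 01/10 pattern shown in the post's middle column.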
marks@galadriel.bt.co.uk (Mark Shackleton) (11/20/90)
From article <1990Nov19.050822.12771@metro.ucc.su.OZ.AU>, by andrew@ee.su.oz.au (Andrew Ho):
> I am trying to convert some full color images (RGB) into
> gray level images (because I hope to get some non-colorful
> laser printouts after doing the conversion).
>
> Are there any algorithms/lookup tables to do the "RGB to
> gray level" matching ?

Use:

    Y = .299 R + .587 G + .114 B

This will give the luminance (grey level) 'Y' for the given RGB values.

Regards,
Mark Shackleton
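A direct sketch of the weighted sum above, assuming 8-bit components (the function name is illustrative):

```python
def luminance(r, g, b):
    # Weighted sum per the formula above; components 0..255.
    # The weights sum to 1.0, so white maps to white.
    return int(round(0.299 * r + 0.587 * g + 0.114 * b))
```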
hutch@fps.com (Jim Hutchison) (11/21/90)
In some article andrew@ee.su.oz.au (Andrew Ho) writes:
> I am trying to convert some full color images (RGB) into
>gray level images (because I hope to get some non-colorful
>laser printouts after doing the convertion).
>
> Are there any algorithms/lookup tables to do the "RGB to
>gray level" matching ?

Assuming that you are not asking for

    Intensity = .59 G + .11 B + .30 R

from the "Frequently Asked Questions" posting, I'll move directly on to another interpretation of your question. That being: why did the whites get so washed out, and the shadow detail get shrouded in darkness?

The process of correcting for anomalies in the grey ramp is a seriously difficult problem to do "right". Of course, sleazy cheats abound, and produce results which may be good enough for you. A description of the implementation of the easiest of these sleazy hacks is as follows.

An easy way to do the luminance calculation is by adding components from 3 tables, one each for R, G, and B. Start with a 4th table of size 256, with entries 0..255 containing 0..255. Snap all the middle grey numbers into the range in which your printer performs well (e.g. for 20..230, take entries 4..250, multiply them by 209, divide by 255, and add 20). This snaps the range while making sure that blacks are black and whites are white. Then you just index the table with your luminance value and get a "new" and "improved" value:

    Lcorrected = Lum[ Rgrey[red] + Ggrey[green] + Bgrey[blue] ]

Clearly you could get out sampling equipment and tune the output by using different tables and/or dithering algorithms. Certain folks at Xerox and elsewhere have pursued this problem that way to good (if exhausting) results.
--
- Jim Hutchison {dcdwest,ucbvax}!ucsd!fps!hutch
Disclaimer: I am not an official spokesman for FPS computing
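Hutchison's table-based correction can be sketched like so (a minimal sketch using his example numbers, 20..230 and entries 4..250; the component tables use the usual NTSC weights and are my choice of implementation, not specified in the post):

```python
# Component tables: one each for R, G, and B, so the weighted
# sum becomes three lookups and two adds.
RGREY = [int(0.299 * i) for i in range(256)]
GGREY = [int(0.587 * i) for i in range(256)]
BGREY = [int(0.114 * i) for i in range(256)]

# 4th table: starts as identity (blacks stay black, whites stay
# white), then mid-greys 4..250 are snapped into ~20..230.
LUM = list(range(256))
for i in range(4, 251):
    LUM[i] = i * 209 // 255 + 20

def corrected_lum(r, g, b):
    # Lcorrected = Lum[ Rgrey[red] + Ggrey[green] + Bgrey[blue] ]
    return LUM[RGREY[r] + GGREY[g] + BGREY[b]]
```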
pierce@radius.com (Pierce T. Wetter III) (11/22/90)
>Use: Y = .299 R + .587 G + .114 B
That will work for the NTSC phosphor set. If you want to be completely
anal about it you need to recalculate the formula for each phosphor set.
Or at least use the formulas for the SMPTE phosphor set.
Pierce
--
My postings are my opinions, and my opinions are my own not that of my employer.
You can get me at radius!pierce@apple.com.
(Wha'ja want? Some cute signature file? Hah! I have real work to do.)
jb@falstaff.mae.cwru.edu (Jim Berilla) (11/23/90)
In article <1375@radius.com> pierce@radius.com (Pierce T. Wetter III) writes:
>>Use: Y = .299 R + .587 G + .114 B
>
> That will work for the NTSC phosphor set. If you want to be completely
>anal about it you need to recalculate the formula for each phosphor set.
>
> Or at least use the formulas for the SMPTE phosphor set.
>
>Pierce

Something's not quite right here. I have no doubt that the above famous formula is correct, but aren't the scale factors built into the monitor? I mean, if you apply zero volts to all three inputs of an RGB monitor, then you get black. (Hopefully no disagreement about that.) But if you put in 1 volt (or .707 or whatever) on each of the three inputs, then you get white, not the sick cyan-ish white that the above formula would imply. Or am I missing something?

When doing greyscale work on a PC with a VGA card (no flames please, the Sparc's on order) I use the following algorithm to set the palette (note that the VGA has 256 colors, but only 6 bits per DAC):

    0 <= icolor <= 252
    red(icolor)   = icolor/4
    green(icolor) = icolor/4 + iand(icolor,2)/2
    blue(icolor)  = icolor/4 + iand(icolor,1)

    red(253) = green(253) = blue(253) = 63   ! the neat formula
    red(254) = green(254) = blue(254) = 63   ! screws up for
    red(255) = green(255) = blue(255) = 63   ! these values.

This gives a palette table that looks like this:

    color  r  g  b     color  r  g  b     color  r  g  b      color  r  g  b
      0    0  0  0       8    2  2  2      16    4  4  4       248  62 62 62
      1    0  0  1       9    2  2  3      17    4  4  5       249  62 62 63
      2    0  1  0      10    2  3  2      18    4  5  4       250  62 63 62
      3    0  1  1      11    2  3  3      19    4  5  5  ...  251  62 63 63
      4    1  1  1      12    3  3  3      20    5  5  5       252  63 63 63
      5    1  1  2      13    3  3  4      21    5  5  6       253  63 63 63
      6    1  2  1      14    3  4  3      22    5  6  5       254  63 63 63
      7    1  2  2      15    3  4  4      23    5  6  6       255  63 63 63

Adding in a bit of blue and/or green smooths out the steps that would occur in a 6-bit (64-level) greyscale. It seems to work well for me, with no strange color tinges in the picture.
-- Jim Berilla / jb@falstaff.cwru.edu / 216-368-6776 "My opinions are my own, except on Wednesday mornings at 9 AM, when my opinions are those of my boss."
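Berilla's palette formula transcribes directly (a sketch; `iand` becomes Python's `&`, and integer division matches the Fortran-style arithmetic in the post):

```python
def vga_grey_palette():
    # Build the 256-entry greyscale palette described above:
    # 6-bit DACs, with extra green/blue bits smoothing the
    # steps of a plain 64-level ramp.
    pal = []
    for icolor in range(253):
        r = icolor // 4
        g = icolor // 4 + (icolor & 2) // 2
        b = icolor // 4 + (icolor & 1)
        pal.append((r, g, b))
    for icolor in range(253, 256):
        pal.append((63, 63, 63))   # the neat formula screws up here
    return pal
```

The entries reproduce the table in the post, e.g. color 10 gives (2, 3, 2).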
hue@island.uu.net (Colonel Panic) (11/29/90)
In article <406@lysator.liu.se>, zap@lysator.liu.se (Zap Andersson) writes:
>
> Gray = 0.39 * Red + 0.50 * Green + 0.11 * Blue...
>
> ...says the NTSC spec. I _*NEVER*_ got this to look goot, coz to my mind the
> blue turns out WAY to dark. Perhaps my eyes aren't NTSC standard, right? I use
> a beefed 0.30 Red + 0.50 Green + 0.20 Blue coz i like it better...

The reason it didn't look right is that you probably just took your R, G, and B pixel values and plugged them into the formula. When you calculate Y, it's a weighted sum of the intensities of R, G, and B. However, your pixel value probably does not represent a linearly encoded intensity, but something more like intensity^(1/gamma), where gamma is some number usually between 2.2 and 2.7. So the formula should really be something like:

    Y = (.299 R^gamma + .587 G^gamma + .114 B^gamma)^(1/gamma)

Obviously, when your pixel value is linearly encoded intensity, gamma is 1 and it simplifies to the formula everyone uses. The results you describe ("blue turns out WAY to [sic] dark") are exactly what you get when you treat pixel values as linearly encoded intensity when in reality they are exponentially encoded (to the 1/gamma power). I don't think this is a bad thing, as it allows you to get the most out of 8-bit/component pixel data, but you do need to be aware of what you're doing.
--
Only in Marin: Sign next to Sony waterproof cordless phone in department store calls it "Marin Hot Tub model"

Jonathan Hue    Island Graphics Corp.    uunet!island!hue    hue@island.com
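Hue's gamma-aware formula can be sketched as (a minimal sketch; gamma = 2.2 is an assumed value from the 2.2..2.7 range he gives, and 8-bit components are assumed):

```python
GAMMA = 2.2   # assumed; anywhere in 2.2..2.7 per the post

def luma_gamma(r, g, b):
    # Linearize the gamma-encoded pixel values, take the
    # weighted sum of intensities, then re-encode the result:
    # Y = (.299 R^g + .587 G^g + .114 B^g)^(1/g)
    lin = (0.299 * (r / 255.0) ** GAMMA
         + 0.587 * (g / 255.0) ** GAMMA
         + 0.114 * (b / 255.0) ** GAMMA)
    return int(round(255.0 * lin ** (1.0 / GAMMA)))
```

Note that pure blue comes out noticeably brighter than the naive weighted sum would give (~95 vs. 29), which is exactly the "blue turns out way too dark" symptom being corrected.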