rpeglar@csinc.UUCP (Rob Peglar) (11/11/90)
Lots of people (too numerous) have posted about resolution and such, esp. including bits and pieces of one of my postings. Almost everyone who responded missed the point of the original posting, but that's ok. This is the net, after all.

Point was - resolution of 1kx768 is not a big win for *imaging*. Yes, the subject was images, not text, not multiple 80x25 windows, not see-how-many-characters-we-can-cram-onto-one-screen stuff. I fully agree that 1kx768 is necessary for other things. In fact, you're just playing around unless you have at least 1600x1200x4 for text and 2D line stuff. Our sister company (Artist) has been around the block a time or two in that camp.

As for television, I used it as an analogy to prove that *color* was the important aspect of imaging, not *resolution*. I received a piece of e-mail which compared TV to "a crinkled piece of sh*t" as far as resolution goes. Again, I agree. The original analogy is still valid. (I think)

Rob
--
Rob Peglar                    Comtrol Corp.  2675 Patton Rd., St. Paul MN 55113
A Control Systems Company     (800) 926-6876    ...uunet!csinc!rpeglar
dave@imax.com (Dave Martindale) (11/15/90)
In article <240@csinc.UUCP> rpeglar@csinc.UUCP (Rob Peglar) writes:
>
>Point was - resolution of 1kx768 is not a big win for *imaging*. Yes,
>the subject was images, not text, not multiple 80x25 windows, not see-how-
>many-characters-we-can-cram-onto-one-screen stuff.
>As for television, I used it as an analogy to prove that *color* was the
>important aspect of imaging, not *resolution*. I received a piece of e-mail
>which compared TV to "a crinkled piece of sh*t" as far as resolution goes.
>Again, I agree. The original analogy is still valid. (I think)

What exactly do you mean by "imaging"? I take it to mean looking at photographic-quality images. For that, you need decent resolution in *both* colour and space.

Television does manage to encode three colours with pretty good colour resolution over large areas. You need a 24-bit frame buffer to match it in the digital world if you're going to store the information as RGB. But television's spatial resolution is simply awful. It's capable of resolving only about 50 "pixels" horizontally for certain combinations of alternating colours.

Television works amazingly well, considering the tiny bandwidth it has to work with, when viewed from a distance of 5 or 10 times the picture height, as it was intended to be. But I sit a lot closer to my workstation screen than that - 1-2 picture heights away. And at that distance, television is simply awful for "imaging", unless I specifically want to see what my images will look like on television.

I suspect that what Rob really means is that, given you can only afford 1 megabyte of video RAM, you're better off arranging it as 640x480x24 than as 1024x768x8 if you're doing imaging - and he's right. But 640x480x24 is, in fact, much better than television. And if you can afford more video RAM, much higher resolution is useful, provided you retain 24 bits for colour.

Extra pixels stop being useful when you reach the point where a single pixel is about one arc minute in size as seen from the viewer's eye. For a CRT viewed from one screen width away, that's about 3500 pixels of horizontal resolution. For a viewing distance of two screen widths, the limit is about 1800.
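[A quick sanity check on those numbers, as a minimal C sketch. Assumptions are mine: the small-angle approximation, and exactly one pixel per arc minute as the cutoff. One radian is about 3438 arc minutes.]

    #include <stdio.h>

    /* Horizontal pixel count at which each pixel of a screen of unit
     * width, viewed from `dist` screen widths away, subtends one arc
     * minute.  Small-angle approximation: the screen subtends roughly
     * (1/dist) radians = 3438/dist arc minutes. */
    static double pixel_limit(double dist)
    {
        return 3438.0 / dist;
    }

    int main(void)
    {
        printf("1 screen width:  ~%.0f pixels\n", pixel_limit(1.0)); /* ~3400 */
        printf("2 screen widths: ~%.0f pixels\n", pixel_limit(2.0)); /* ~1700 */
        return 0;
    }

These come out near the rounded 3500 and 1800 figures above.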
davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (11/17/90)
I'm not convinced that you need 24 bits of color for the memory, either. Systems like the VGA which have a large palette and a limited number of selections work very well. If you look at the output of a 24 bit color scanner scanning quality photographs, you rarely find an image which doesn't map into 256 colors nicely. Very rarely.

Therefore I conclude that for human viewing of "real world" images (ie. things which physically exist) you can do 8/24 bit mapping with good results. Computer generated images can have more than 256 colors, so I am not making that argument about anything but real world data. I would not be surprised if many people can't tell 256 color mapping unless they see it side by side, but information might be lost.

Technical note: the VGA does 8/18 bit mapping, 6 bits each of RGB, and that is not quite enough, although it can produce some reasonable images in hires modes.
--
bill davidsen    (davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
    VMS is a text-only adventure game. If you win you can use unix.
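[To put the 8/18 note in concrete terms: an 8-bit pixel value indexes a 256-entry DAC table whose entries hold 6 bits per component. A minimal C sketch; the type and function names are illustrative, not from any particular driver.]

    /* VGA colour lookup: an 8-bit pixel indexes a 256-entry DAC table,
     * each entry 6 bits per R, G, B component (18 bits total). */
    typedef struct {
        unsigned char r, g, b;   /* only the low 6 bits are significant */
    } DacEntry;

    static DacEntry palette[256];

    /* Expand a 6-bit DAC component to 8 bits for comparison with
     * true-colour values: 0 maps to 0, 63 maps to 255. */
    static unsigned char expand6(unsigned char c)
    {
        return (unsigned char)((c << 2) | (c >> 4));
    }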
msp33327@uxa.cso.uiuc.edu (Michael S. Pereckas) (11/17/90)
In <2928@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) writes:

> I'm not convinced that you need 24 bits of color for the memory,
>either. Systems like the VGA which have a large palette and a limited
>number of selections work very well. If you look at the output of a 24
>bit color scanner scanning quality photographs, you rarely find an image
>which doesn't map into 256 colors nicely. Very rarely.

Note that this gets to be a mess in a hurry if you want to put two different images on the screen at once (perhaps in separate windows). Palettes can be a problem in multitasking, windowing environments.

You're right though---super VGA images (640x400 to 800x600 pixels in 256 colors) look great.
--
Michael Pereckas  *  InterNet: m-pereckas@uiuc.edu  *  just another student...
(CI$: 72311,3246)
*Jargon Dept.: Decoupled Architecture--sounds like the aftermath of a tornado*
dave@imax.com (Dave Martindale) (11/20/90)
In article <2928@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.com (bill davidsen) writes:
>
> I'm not convinced that you need 24 bits of color for the memory,
>either. Systems like the VGA which have a large palette and a limited
>number of selections work very well. If you look at the output of a 24
>bit color scanner scanning quality photographs, you rarely find an image
>which doesn't map into 256 colors nicely. Very rarely.

I guess it depends on what you're doing. This certainly isn't true when dealing with "photographic-quality" images. When digitizing transparencies or negatives, 24 bits is clearly not enough - I can show you images with ugly banding artifacts due to quantization in the dark areas of the image.

Even digitizing at 36 bits (12 bits/component) and then storing 8 bits of the logarithm of intensity is not enough in some circumstances.

> Therefore I conclude that for human viewing of "real world" images
>(ie. things which physically exist) you can do 8/24 bit mapping with
>good results.

I suspect we have different definitions of "acceptable". Mine is "you can't see any artifacts due to the transfer from film to digital". Yours may be "it looks pretty good". This may be adequate for most people dealing with images, but it certainly isn't good enough for everyone.

I currently use two monitors - 1600x1200x1 monochrome for editing, and 1024x768x30 colour for image display. This seems like a pretty good compromise for the moment - high resolution and fast drawing for text, while colour images appear much more slowly but with excellent quality.

	Dave
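[For what it's worth, the log-of-intensity encoding mentioned above looks roughly like this. A minimal sketch under my own assumptions about value ranges and scaling, not a description of any actual film scanner pipeline.]

    #include <math.h>

    /* Map a 12-bit linear intensity sample (1..4095) to an 8-bit code
     * spaced evenly in log (exposure-ratio) space, so dark tones get
     * proportionally more codes than a linear mapping would give them.
     * The ranges and scaling here are illustrative assumptions. */
    static unsigned char lin12_to_log8(unsigned sample)
    {
        double t;
        if (sample < 1)
            sample = 1;                         /* log(0) is undefined */
        t = log((double)sample) / log(4095.0);  /* 0..1 over the range */
        return (unsigned char)(t * 255.0 + 0.5);
    }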
davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (11/20/90)
In article <1990Nov19.195042.19240@imax.com> dave@imax.com (Dave Martindale) writes:

| I said:
| > Therefore I conclude that for human viewing of "real world" images
| >(ie. things which physically exist) you can do 8/24 bit mapping with
| >good results.
|
| I suspect we have different definitions of "acceptable". Mine is
| "you can't see any artifacts due to the transfer from film to digital".
| Yours may be "it looks pretty good". This may be adequate for
| most people dealing with images, but it certainly isn't good enough for
| everyone.

I think it's more a definition of "real world images." It sounds like you are working with electronic images, rather than things you can actually photograph. I'm talking about the realm of optical viewing of physical objects, not any of the other image-forming technologies.

These images don't tend to have the wide smooth sweeps of slowly changing color which show artifacts. Not that you can't find some images somewhere which produce this effect, but typical images do not lose information. Obviously you can say there is a world of information with a 36 bit scanner and I'm missing it all with my 24 bit scanner, so all I can say is that what I see works with 256 colors.

Redirected to comp.graphics...
--
bill davidsen    (davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
    VMS is a text-only adventure game. If you win you can use unix.
phil@brahms.amd.com (Phil Ngai) (11/21/90)
In article <2928@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.com (bill davidsen) writes:

| I'm not convinced that you need 24 bits of color for the memory,
|either. Systems like the VGA which have a large palette and a limited
|number of selections work very well. If you look at the output of a 24
|bit color scanner scanning quality photographs, you rarely find an image
|which doesn't map into 256 colors nicely. Very rarely.

I'm afraid I have to disagree, based on the VGA 256 color images I've seen. I've seen some impressive GIF images, but those tended to be all variations of one color. I've also seen images that bordered on showing banding. And I've seen images scanned by amateurs that clearly looked amateurish, so I assume there is considerable art associated with palette selection when you are limited to 256 colors.

--
KristallNacht: why every Jew should own an assault rifle.
jgk@osc.COM (Joe Keane) (11/21/90)
In <2928@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) writes:

> I'm not convinced that you need 24 bits of color for the memory,
>either. Systems like the VGA which have a large palette and a limited
>number of selections work very well. If you look at the output of a 24
>bit color scanner scanning quality photographs, you rarely find an image
>which doesn't map into 256 colors nicely. Very rarely.

In article <1990Nov16.190248.20437@ux1.cso.uiuc.edu> msp33327@uxa.cso.uiuc.edu (Michael S. Pereckas) writes:

>Note that this gets to be a mess in a hurry if you want to put two
>different images on the screen at once (perhaps in separate windows).
>Palettes can be a problem in multitasking, windowing environments.

I agree, palettes are a big pain. A true-color display is a lot easier to deal with, since there's no global interaction between pixels. Or you can agree once and for all on a single palette, which all windows on the screen have to use.

Let's assume we have graphics software that understands the concept of dithering, not like some other software we could mention. So when you say to fill a rectangle with a given color, it actually fills it with some neat pattern alternating between a few colors, such that the overall color is very close to what you asked for. Assume that this software is actually competent at what it does, so that there are no nasty artifacts and the boundaries between colors work out right. Now also assume we have a display with good resolution, so the individual pixels are quite small.

So then, how many colors do we need? In other words, what is the largest color difference between neighboring pixels such that it won't be obvious that the pixels are different colors? It seems to me that it's somewhere around 4 bits per component, for a total of 4K colors. Maybe this is a little low, though. What do you all think?
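[To make the rectangle-fill idea concrete, here is a minimal sketch of a 2x2 ordered-dither fill at 4 bits per component. The function names and the 8-bit request format are my own assumptions, not anyone's actual software.]

    /* Fill using 2x2 ordered dithering: quantize an 8-bit target
     * component to 4 bits per pixel, rounding up or down by screen
     * position so the average over each 2x2 block approaches the
     * target.  A real fill would handle all three components. */
    static const int bayer2[2][2] = { {0, 2}, {3, 1} };

    static unsigned char dither4(unsigned char target8, int x, int y)
    {
        /* 4-bit steps are 17 apart when expanded to 8 bits (0,17,...,255);
         * add a position-dependent fraction of one step, then truncate. */
        int bias = (bayer2[y & 1][x & 1] * 17) / 4;   /* 0, 4, 8, 12 */
        int v = (target8 + bias) / 17;
        return (unsigned char)(v > 15 ? 15 : v);
    }

    void fill_rect(unsigned char *plane, int stride,
                   int x0, int y0, int w, int h, unsigned char target8)
    {
        int x, y;
        for (y = y0; y < y0 + h; y++)
            for (x = x0; x < x0 + w; x++)
                plane[y * stride + x] = dither4(target8, x, y);
    }

For a mid-grey request of 128, the four block positions come out 7, 7, 8, 8, whose expanded average is 127.5 - about as close as 4 bits can get.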
lalibert@bcarh188.bnr.ca (Luc Laliberte) (11/21/90)
Back before I upgraded from EGA to VGA I wrote a GIF viewer in C to display large (up to 800x600) GIFs in dithered form. (Other programs had restricted themselves to 320 pixels wide.) I got exceptional (near-VGA) quality with a two-pass algorithm.

The process goes like this: reduce each 8-bit color component to one of 13 levels, rendered as 2x2 pattern blocks of 2-bit pixels (four pixels at 0-3 each sum to 0-12, giving 13 distinct block averages). This reduces the number of colors per pixel from 16 million to 64 (2 bits per component). Count the number of times each color is used, and then perform a priority best fit to reduce that to 16 colors. Then redecode the GIF, substituting the best-fit 16 colors for the 64 possible. The program FastGIF does a similar operation; CSHOW uses a single-pass method that is vastly inferior.

What does this mean? It is possible to represent 16 million colors adequately with only 16 colors (out of a possible 64). Therefore, it is conceivable to display 24-bit color using only 6-bit color and dithering. (The best-fit portion is removed and it can be done in a single pass.) However, your resolution drops by a factor of 2 in both directions. Considering that the best resolution of common displays is 1024x768, this yields a dithered display of 512x384, far below the common 800x600 8-bit-from-18-bit displays.

This signature intentionally left blank
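[A rough sketch of how I read the second pass: count usage of the 64 candidate colors, keep the 16 most used, and remap every candidate to the nearest kept color. The details below are guesses at the scheme described above, not the actual viewer code.]

    #include <string.h>

    static unsigned long count[64];   /* filled by a first pass over the image */
    static int keep[16];              /* the 16 most-used candidates */
    static unsigned char remap[64];   /* candidate -> palette slot */

    static int dist2(int a, int b)    /* squared distance in 2-bit RGB */
    {
        int dr = ((a >> 4) & 3) - ((b >> 4) & 3);
        int dg = ((a >> 2) & 3) - ((b >> 2) & 3);
        int db = (a & 3) - (b & 3);
        return dr * dr + dg * dg + db * db;
    }

    void build_remap(void)
    {
        int i, j;
        unsigned char taken[64];
        memset(taken, 0, sizeof taken);

        for (i = 0; i < 16; i++) {        /* pick the 16 most-used colors */
            int best = -1;
            for (j = 0; j < 64; j++)
                if (!taken[j] && (best < 0 || count[j] > count[best]))
                    best = j;
            taken[best] = 1;
            keep[i] = best;
        }
        for (j = 0; j < 64; j++) {        /* nearest kept color for each */
            int best = 0;
            for (i = 1; i < 16; i++)
                if (dist2(j, keep[i]) < dist2(j, keep[best]))
                    best = i;
            remap[j] = (unsigned char)best;
        }
    }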
jgk@osc.COM (Joe Keane) (11/22/90)
In article <2928@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.com (bill davidsen) writes:

> I'm not convinced that you need 24 bits of color for the memory,
>either. Systems like the VGA which have a large palette and a limited
>number of selections work very well. If you look at the output of a 24
>bit color scanner scanning quality photographs, you rarely find an image
>which doesn't map into 256 colors nicely. Very rarely.

In article <1990Nov19.195042.19240@imax.com> dave@imax.com (Dave Martindale) writes:

>I guess it depends on what you're doing. This certainly isn't true when
>dealing with "photographic-quality" images. When digitizing transparencies
>or negatives, 24 bits is clearly not enough - I can show you images with
>ugly banding artifacts due to quantization in the dark areas of the image.
>
>Even digitizing at 36 bits (12 bits/component) and then storing 8 bits
>of the logarithm of intensity is not enough in some circumstances.

This is true, but i think you're talking about a slightly different problem. Quantization of display colors is a small problem, since you can use dithering to get what you want. Quantization in scanning is a more serious problem, since once it's there you can't get rid of it. One way to avoid the problem is to add some noise to the sample values before they are quantized. I'm not happy about adding noise to my data, but it does work.

>I suspect we have different definitions of "acceptable". Mine is
>"you can't see any artifacts due to the transfer from film to digital".
>Yours may be "it looks pretty good". This may be adequate for
>most people dealing with images, but it certainly isn't good enough for
>everyone.

I'm sure most of you have been in stores and seen the various demo images they use to show off computers. The quality is so good they look almost like photographs. But if you look at the display specifications, they may use only 256 colors for the whole image. How do they do it? They certainly don't just take a 256-color scanner and copy the output pixel by pixel. They scan the picture at high resolution, and then spend lots of CPU time on a good dithering algorithm. They also carefully select the palette to minimize the errors in dithering.

>I currently use two monitors - 1600x1200x1 monochrome for editing, and
>1024x768x30 colour for image display. This seems like a pretty good
>compromise for the moment - high resolution and fast drawing for text,
>while colour images appear much more slowly but with excellent quality.

This is a good combination. I think that for a given technology, monochrome monitors will always be sharper than color monitors. Too bad you can't have one monitor which switches between the two.
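[The noise trick amounts to randomized rounding: add up to one quantization step of uniform noise before truncating, so smooth gradients break up into grain instead of bands. A minimal sketch; the scaling assumptions are mine.]

    #include <stdlib.h>

    /* Quantize a linear sample in [0,1] to `levels` steps, adding one
     * step of uniform noise before truncation.  On average the result
     * is unbiased, and banding contours dissolve into grain. */
    static int quantize_noisy(double sample, int levels)
    {
        double noise = (double)rand() / ((double)RAND_MAX + 1.0); /* [0,1) */
        int q = (int)(sample * (levels - 1) + noise);
        if (q > levels - 1)
            q = levels - 1;
        return q;
    }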
dave@imax.com (Dave Martindale) (11/23/90)
In article <2937@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.com (bill davidsen) writes:
>
> I think it's more a definition of "real world images." It sounds like
>you are working with electronic images, rather than things you can
>actually photograph. I'm talking about the realm of optical viewing of
>physical objects, not any of the other image-forming technologies.
>
> These images don't tend to have the wide smooth sweeps of slowly
>changing color which show artifacts. Not that you can't find some images
>somewhere which produce this effect, but typical images do not lose
>information. Obviously you can say there is a world of information with
>a 36 bit scanner and I'm missing it all with my 24 bit scanner, so all
>I can say is that what I see works with 256 colors.

No, I'm working primarily with digitized film images. It is true that the inherent scanner noise, plus film grain noise, makes these images less likely to show banding effects than images generated entirely by calculation. But there are still images that show banding when quantized to 8 bits.

In addition, your eye is much better at detecting moving edges than stationary ones. An image that shows almost-undetectable banding when you examine a single still frame may have very obvious banding when motion causes the bands to move.

I believe that the real difference between us is that you're saying that a 24-bit scanner and 8-bit display gives you screen displays that look fine for "typical" stationary images - that if 80% of the images you try show no problems, you're happy. While I'm saying that if I need a system that works with 100% of the images that might pass through it, including moving ones, 24 bits definitely isn't enough.

The two statements aren't incompatible. I just wished to point out that some of us require our images to look right 100% of the time, not 80%, so 8 bits or 24 bits is simply not always sufficient.

(I left this in comp.arch; comp.graphics people have already heard this stuff before.)
henry@zoo.toronto.edu (Henry Spencer) (11/24/90)
In article <4027@osc.COM> jgk@osc.COM (Joe Keane) writes:
>Quantization of display colors is a small problem, since you can use dithering
>to get what you want...

How well does dithering work for animation? (Serious question, I'm not up on the fine points of this.) Many display artifacts that don't look too serious in a still image become glaringly obvious when they change from frame to frame.

I'd also note that for some classes of images, like scientific visualization, it is not acceptable to mess with the pixels to make it look better.
--
"I'm not sure it's possible  | Henry Spencer at U of Toronto Zoology
to explain how X works."     | henry@zoo.toronto.edu   utzoo!henry
alan@cogswell.Jpl.Nasa.Gov (Alan S. Mazer) (11/24/90)
In article <1990Nov23.182147.26688@zoo.toronto.edu>, henry@zoo.toronto.edu (Henry Spencer) writes:
>In article <4027@osc.COM> jgk@osc.COM (Joe Keane) writes:
>>Quantization of display colors is a small problem, since you can use dithering
>>to get what you want...
>I'd also note that for some classes of images, like scientific visualization,
>it is not acceptable to mess with the pixels to make it look better.

Absolutely, which is why we don't use dithering around here. It's bad enough having to do the quantization, but if we dithered, individual pixels would be almost useless. Dithering is fine if you want to make pretty pictures, but if you are actually using the picture for something analytical you lose a lot of information.
--
-- Alan                       # My aptitude test in high school suggested that
..!ames!elroy!alan            # I should become a forest ranger.  Take my
alan@elroy.jpl.nasa.gov       # opinions on computers with a grain of salt.
lalibert@bcarh188.bnr.ca (Luc Laliberte) (11/26/90)
I spent the weekend converting a 24-bit image to GIF format, without dithering. Instead, I took the most significant 5 bits of each color component and ran them through a two-pass color compression algorithm to reduce the number of colors required to 256. The process is expensive in CPU time but the results are worth it. The final picture looks like it has been anti-aliased and lacks any banding. Note, I've never seen the picture in 24-bit form, but the final result is comparable to other high quality GIFs I have seen.

Eric
jgk@osc.COM (Joe Keane) (11/29/90)
In article <4027@osc.COM> jgk@osc.COM (Joe Keane) writes:
>Quantization of display colors is a small problem, since you can use dithering
>to get what you want...

In article <1990Nov23.182147.26688@zoo.toronto.edu> henry@zoo.toronto.edu (Henry Spencer) writes:
>How well does dithering work for animation? (Serious question, I'm not up
>on the fine points of this.) Many display artifacts that don't look too
>serious in a still image become glaringly obvious when they change from
>frame to frame.

It gets more complicated, but i figure if you're doing animation you're used to that. Dithering each individual frame doesn't work very well, since you get all sorts of annoying moving patterns. Basically you want to dither in three dimensions, although they're not weighted the same. As you can imagine, the error diffusion algorithms get pretty complex. But if it's done right, the errors in a given frame get compensated by those in surrounding frames.

>"I'm not sure it's possible  | Henry Spencer at U of Toronto Zoology
>to explain how X works."     | henry@zoo.toronto.edu   utzoo!henry

Heh heh, i don't know about that, but i'm not sure it's possible to explain _why_ X works the way it does.

>I'd also note that for some classes of images, like scientific visualization,
>it is not acceptable to mess with the pixels to make it look better.

In article <1990Nov23.204238.12597@elroy.jpl.nasa.gov> alan@cogswell.Jpl.Nasa.Gov (Alan S. Mazer) writes:
>Absolutely, which is why we don't use dithering around here. It's bad enough
>having to do the quantization, but if we dithered, individual pixels would be
>almost useless. Dithering is fine if you want to make pretty pictures, but
>if you are actually using the picture for something analytical you lose a lot
>of information.

Maybe i'm missing something, but i don't see why visualization is that much different from pretty pictures. Specifically, i'd say something is seriously wrong if a single pixel is really that critical. That's what zooming in is for, right? Personally i can't spend all day looking at individual pixels. And it's only going to get worse as displays get higher resolution.

Here is an example: i had a simple program to view grey-scale pictures on a 1-bit machine. This isn't really the same thing we're talking about, since dithering black vs. white is a lot more obvious than dithering small variations in color. Anyway, initially it would map one pixel in the picture to one pixel on the screen. Of course you lose individual pixels unless they have high contrast. But then i added zoom-in features, with some fairly simple interpolation. The results look quite good; you can see every last detail in the original picture at about 8x expansion. It was a bit on the slow side, but that's what i get for using a lowly Sun-3.

Let me propose a simple test. Suppose we have a choice between, say, a 1000x1000 pixel display with 48 bits of true color, and a 2000x2000 pixel display with a 4096-color palette. They're the same number of bits, but which is better? I'd take the high-resolution one in a second, assuming good software like i said before. You can do the large expanses of slowly changing colors almost as well as on the low-resolution one, but the sharp edges have four times the resolution. My main point is that you can always give up resolution for more accurate colors, but there's no way to get better resolution than what you have.

Don't get me wrong, dithering is not a trivial task, and sometimes i think people get it wrong more often than right. Taking a display from pretty good to photograph quality takes good software and often a good amount of CPU. But it can be done.
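[For the single-frame case, the classic error-diffusion scheme is Floyd-Steinberg; a minimal grey-scale version is sketched below. The three-dimensional weighting described above would extend this by also carrying part of the error across frames, which is not shown.]

    /* Floyd-Steinberg error diffusion for one grey-scale frame:
     * quantize each pixel to black or white, then push the rounding
     * error onto the unprocessed neighbours (7/16 right, 3/16
     * down-left, 5/16 down, 1/16 down-right). */
    void fs_dither(float *img, int w, int h)
    {
        int x, y;
        for (y = 0; y < h; y++) {
            for (x = 0; x < w; x++) {
                float old = img[y * w + x];
                float q   = old < 0.5f ? 0.0f : 1.0f;
                float err = old - q;
                img[y * w + x] = q;
                if (x + 1 < w)
                    img[y * w + x + 1]           += err * 7.0f / 16.0f;
                if (y + 1 < h) {
                    if (x > 0)
                        img[(y + 1) * w + x - 1] += err * 3.0f / 16.0f;
                    img[(y + 1) * w + x]         += err * 5.0f / 16.0f;
                    if (x + 1 < w)
                        img[(y + 1) * w + x + 1] += err * 1.0f / 16.0f;
                }
            }
        }
    }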