bill@hao.UUCP (01/28/87)
I was wondering if anyone on the net knows about a program or method to efficiently compress an image. For example, we have image files that are 1024x1024 x 16 bits and would like to reduce the storage requirements for such files.

By the way, does anyone know of any optical disk systems that work under 4.2/4.3BSD?

Bill Roberts
NCAR/HAO
Boulder, CO
UUCP: ...!hao!bill
williams@vu-vlsi.UUCP (01/30/87)
In article <505@hao.UCAR.EDU> bill@hao.UCAR.EDU (Bill Roberts) writes:
>I was wondering if anyone on the net knows about a program or method to
>efficiently compress an image. For example, we have image files
>that are 1024x1024 x 16 bits and would like to reduce the storage requirements

In the most recent SIGGRAPH Conference proceedings there is an excellent algorithm called CCC which seems to fill your needs. The only drawbacks to the process are the time needed to expand an image and the slight aliasing created by the process. I think this method has been discussed recently on the net, so I won't go into specifics.

-Thomas Williams
jdm@gssc.UUCP (01/30/87)
there are a *number* of ways to compress 1Kx1Kx16bit data. the best one requires you to think hard about the kind of data you are likely to have in these images. certain algorithms work better if the data is fairly homogeneous (i.e. large patches of the same color) and others work much better if the data is at the other extreme.

if the data is homogeneous, simple run-length encoding may be enough. using this method, data is stored as a record of the colors that appear on a scanline, paired with their run lengths. for example, a scanline that looks like this:

    r r r r r r r r r w w r r r r r b b b b b b b b b b b b b b

where r=red, w=white, b=blue, etc., could be run-length encoded like this:

    9r2w5r14b

indicating the number of pixels of each color index.

other, slightly newer and more sophisticated methods stem from digital signal processing. i am referring to the numerous pulse-code-modulation (PCM) techniques, each of which has its own strengths and weaknesses in the DSP field. for image processing, you might want to consider a form of PCM called PDM, or pulse-delta-modulation, where the delta is the difference from one pixel value to the next. this makes sense when you are dealing with shades of related colors AND these shades are close in proximity to each other in your color map.

some trickiness is involved here: you can usually specify the delta from one pixel to another in fewer than 16 bits (which saves you space over storing the pixel values), but you must choose this quantity carefully. if the spectral frequency content of your image is very high, or can be very high, you may easily saturate the delta quantity. the delta would then have to be "overloaded" - that is, multiple deltas may sometimes be needed to equal the true difference between two pixels - or else you suffer a high-frequency loss in the picture, effectively lowering the contrast.
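The run-length scheme described above can be sketched in a few lines. This is a minimal illustration, not from the original post: pixel values are single characters here for readability, where a real 16-bit image would use integers.

```python
# Minimal run-length encoder/decoder for one scanline.
# A run is stored as a [count, value] pair, mirroring the
# "9r2w5r14b" notation in the post.

def rle_encode(scanline):
    """Collapse a scanline into [count, value] runs."""
    runs = []
    for pixel in scanline:
        if runs and runs[-1][1] == pixel:
            runs[-1][0] += 1          # extend the current run
        else:
            runs.append([1, pixel])   # start a new run
    return runs

def rle_decode(runs):
    """Expand [count, value] runs back into a scanline."""
    return [value for count, value in runs for _ in range(count)]

# The example scanline from the post: 9 reds, 2 whites, 5 reds, 14 blues.
line = list("r" * 9 + "w" * 2 + "r" * 5 + "b" * 14)
assert rle_encode(line) == [[9, 'r'], [2, 'w'], [5, 'r'], [14, 'b']]
assert rle_decode(rle_encode(line)) == line
```

Note the worst case the post mentions: a scanline where every pixel differs from its neighbor produces one pair per pixel, which is larger than the raw data.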
the problem with the overloading technique is that image dumps may not all be the same size, but that is easily fixed with appropriate info blocks, etc., that allow you to find the start of each scanline.

in comparing the two techniques here:

run-length: good for homogeneous data; prefers simple, steep-edged (high freq. content) images, as shading produces non-homogeneous pixels. amount of compression can run from the best possible to the worst possible, depending on the number of colors on the scanline.

PDM: very good for shaded images with relatively low spectral frequency. amount of compression is tunable and should be set to make the best tradeoff between typical delta size and overloading. overloading reduces the compression slightly and requires a slightly smarter program, but leaves image data intact.

since you have 16 bits of resolution, my guess is that you have fairly detailed images, and would benefit most from PDM. do some investigation into your spectral frequency content (how fast does, say, black turn into white, and how often) and try to group frequently used colors close together in your color map. if you use an 8-bit modulation index, you compress 50% optimally, and may never overload. then try 4 bits and see how much more you compress - you may overload a lot more until you reorganize your color map, but you may find you really need 8 bits after all. still, i'll take a 500K file over a 1M file any time.

good luck, and if you need more info on PCM, try "The Sony Book of Audio Technology."

-- jdm
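The delta scheme jdm describes can be sketched as follows. This is a hypothetical minimal version, assuming 16-bit samples and a fixed signed 8-bit delta (-128..127): deltas outside that range are clamped (the "overload" case), and the encoder tracks the *reconstructed* value rather than the original, so the error from a clamped delta is made up over the following samples instead of accumulating.

```python
# Sketch of fixed-step delta (PDM-style) coding with overload clamping.
# Each 16-bit pixel is replaced by an 8-bit signed delta from the
# previously reconstructed pixel.

def pdm_encode(pixels, lo=-128, hi=127):
    deltas = []
    prev = 0                          # assume a known starting value of 0
    for p in pixels:
        d = max(lo, min(hi, p - prev))  # clamp: this is the "overload"
        deltas.append(d)
        prev += d                     # track what the decoder will see
    return deltas

def pdm_decode(deltas):
    pixels = []
    prev = 0
    for d in deltas:
        prev += d
        pixels.append(prev)
    return pixels

# Gentle shading: every delta fits in 8 bits, reconstruction is exact.
smooth = [0, 10, 25, 30, 28]
assert pdm_decode(pdm_encode(smooth)) == smooth

# A steep edge saturates the delta: the jump from 0 to 500 comes back
# as 127, and the edge is smeared across later samples.
assert pdm_decode(pdm_encode([0, 500])) == [0, 127]
```

The two asserts show exactly the tradeoff jdm describes: 8-bit deltas halve the storage, at the cost of slope overload on high-frequency content.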
michael@orcisi.UUCP (02/01/87)
>By the way, does anyone know of any optical disk systems that work under
>4.2/4.3BSD?
>
>Bill Roberts
>NCAR/HAO
>Boulder, CO
>UUCP: ...!hao!bill

The de facto standard interface to most 5-1/4" and 12" optical disk systems is the Small Computer Systems Interface (SCSI). SCSI host adapter cards are common for the PC bus, and I also know they exist for VMEbus machines. SCSI is also being used for hard disk interfaces - I think Sun uses a SCSI bus in some of their recent machines.

Start by looking for a SCSI card and software driver for your machine, or talk to your local optical disk system supplier - they usually know who supports their drive on your configuration.

Michael Herman
Optical Recording Corporation
141 John Street, Toronto, Ontario, Canada M5V 2E4
UUCP: { cbosgd!utcs ihnp4!utzoo seismo!mnetor }!syntron!orcisi!michael
ALSO: mwherman@watcgl.waterloo.edu
brewster@watdcsu.UUCP (02/02/87)
>Reply-To: jdm@gssc.UUCP (John D. Miller)
>Organization: Graphic Software Systems, Beaverton Or
>
>for image processing, you might want to consider a form of PCM called PDM, or
>pulse-delta-modulation, where the delta is the difference from one pixel value
>to the next. [...] if the spectral
>frequency content of your image is very high, or can be very high, you may
>easily saturate the delta quantity and the delta would have to be "overloaded",
>that is, multiple deltas may sometimes equal the delta between two pixels, or
>suffer a high-frequency loss in the picture, effectively lowering the contrast.

no guarantees as to the feasibility of the following, but in conventional dsp it is generally recognized that adaptive delta modulation is far superior to regular delta modulation. regular delta modulation can suffer from two problems:

  a) delta is too small, so the output signal can't track rapid changes in the input, causing the loss of frequency content mentioned above.

  b) delta is too large, so the output signal has jaggies for small changes in the input, which is equivalent to the addition of spurious high-frequency noise.

to avoid these problems, adaptive delta modulation allows the size of delta to vary as a function of the input signal. obviously the demodulation problem is more complicated, but for a given compression factor (i.e. 16-bit samples down to 8-bit delta values) adaptive delta modulation will be able to represent your signal more accurately.

i'm sure this approach to image compression has been studied extensively and is written up in the literature, but don't ask me where exactly. the thing to remember with images is that when introducing extraneous noise or artifacts (as any compression/decompression algorithm must do), it is better to introduce the noise in areas of high spectral content, as the human visual system is least sensitive to noise in these regions.

Dave Brewer, (519) 886-6657
UUCP : {decvax|ihnp4}!watmath!watdcsu!brewster

"Try not to become a man of success but rather try to become a man of value." - Albert Einstein
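The adaptation idea above can be illustrated with a toy 1-bit-per-sample modulator. This sketch is an assumption-laden illustration, not any standard scheme: the step size doubles while successive output bits agree (the estimate is lagging a fast-moving input, problem (a)) and halves when they alternate (the estimate is hunting around a flat input, problem (b)).

```python
# Toy adaptive delta modulation: one bit per sample says "step up" or
# "step down"; the step size itself adapts from the bit history, and
# the decoder replays the identical adaptation rule.

def adm_encode(samples, step0=1, grow=2.0, shrink=0.5, min_step=1):
    bits = []
    est, step, last = 0.0, float(step0), None
    for s in samples:
        bit = 1 if s >= est else 0
        bits.append(bit)
        if last is not None:
            # same bit twice: we are lagging, grow the step;
            # alternating bits: we are hunting, shrink the step.
            step = step * grow if bit == last else max(min_step, step * shrink)
        est += step if bit else -step
        last = bit
    return bits

def adm_decode(bits, step0=1, grow=2.0, shrink=0.5, min_step=1):
    out = []
    est, step, last = 0.0, float(step0), None
    for bit in bits:
        if last is not None:
            step = step * grow if bit == last else max(min_step, step * shrink)
        est += step if bit else -step
        out.append(est)
        last = bit
    return out

# A steeply rising input produces a run of 1-bits, and the growing
# step lets the decoder's estimate accelerate after it: 1, 3, 7, 15...
adm_encode([1, 10, 100, 1000])  # -> [1, 1, 1, 1]
```

A fixed-step modulator's estimate could only climb by one per sample here; the adaptive step closes the gap exponentially, which is exactly the slope-overload advantage the post claims.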