[comp.graphics] Compression of multi-bit images

dan@b11.ingr.com (Dan Webb) (06/03/90)

I'm looking for a space-efficient (not necessarily speed-efficient)
algorithm for data compression of multi-bit (color) images.
I would prefer that the technique be fully reversible, but I will
consider anything.

Thanks in advance.

Dan Webb
Intergraph Corp.

spencer@eecs.umich.edu (Spencer W. Thomas) (06/05/90)

Lossless methods:

A popular method is LZW (Lempel-Ziv-Welch) as found in the Unix 'compress'
program.  This doesn't work worth a hoot on scanned images, as they
have too much noise, and LZW relies on patterns.  For 24-bit data, you
need a code size of at least 16 bits.  It probably helps to store your
data RRRR... GGGG... BBBB... instead of RGBRGB (at least, this seems
to be true for scanned images, the opposite may be true for
computer-generated images).
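
A rough, untested sketch of that reordering (packed 8-bit-per-channel
RGB assumed; the function name is made up):

#include <stddef.h>

/* Reorder interleaved RGBRGB... pixels into planar RRR...GGG...BBB...
 * so that each byte stream handed to the compressor comes from a
 * single channel.  'n' is the pixel count; 'out' must hold 3*n bytes.
 */
void interleaved_to_planar(const unsigned char *rgb, unsigned char *out,
                           size_t n)
{
    size_t i;
    for (i = 0; i < n; i++) {
        out[i]       = rgb[3*i];        /* R plane */
        out[n + i]   = rgb[3*i + 1];    /* G plane */
        out[2*n + i] = rgb[3*i + 2];    /* B plane */
    }
}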

Simple Huffman (adaptive or otherwise; the Unix 'pack' program) may do
well if the pixel intensity (consider R,G,B values separately)
histogram is not flat, even on scanned images.  Arithmetic coding will
do better, but is harder to do.
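
A quick way to check whether a channel's histogram is flat enough to
bother: compute its zeroth-order entropy.  Untested sketch:

#include <math.h>
#include <stddef.h>

/* Zeroth-order entropy (bits/sample) of one 8-bit channel.  Well under
 * 8 bits means a Huffman or arithmetic coder has something to gain;
 * close to 8 means the histogram is nearly flat and it won't help much.
 */
double channel_entropy(const unsigned char *samples, size_t n)
{
    unsigned long hist[256] = {0};
    double h = 0.0;
    size_t i;

    for (i = 0; i < n; i++)
        hist[samples[i]]++;
    for (i = 0; i < 256; i++)
        if (hist[i]) {
            double p = (double)hist[i] / (double)n;
            h -= p * log(p) / log(2.0);
        }
    return h;
}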

Typical computer generated images (with no texture mapping, anyway)
compress reasonably well with run-length encoding.
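
A run-length encoder doesn't have to be fancy; a (count, value) scheme
like the sketch below is typical (no particular file format implied):

#include <stddef.h>

/* Trivial run-length encoder: each run of up to 255 equal bytes becomes
 * a (count, value) pair.  Returns the number of bytes written to 'out',
 * which must be able to hold 2*n bytes in the worst case.
 */
size_t rle_encode(const unsigned char *in, size_t n, unsigned char *out)
{
    size_t i = 0, o = 0;
    while (i < n) {
        unsigned char v = in[i];
        size_t run = 1;
        while (i + run < n && in[i + run] == v && run < 255)
            run++;
        out[o++] = (unsigned char)run;
        out[o++] = v;
        i += run;
    }
    return o;
}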

Lossy methods:

The method currently getting all the noise is DCT (discrete cosine
transform) encoding.  The idea is to take a "Fourier transform"
(discrete cosine transform, actually, since the values are all real)
of little (typically 8x8, I think) blocks of the image.  Only the
first few coefficients for each block are saved.  (If only the first
coefficient were saved, you would get back an image with little (8x8)
blocks, each a constant color.)
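
The transform itself is easy to write down, if slow when done naively.
A sketch of the forward 8x8 DCT (just the textbook formula, not the
standards-committee code; quantizing or dropping the high-frequency
coefficients is where the loss comes in):

#include <math.h>

#define N  8
#define PI 3.14159265358979323846

/* Naive 8x8 forward DCT (DCT-II).  'in' is one image block, 'out' the
 * coefficients; out[0][0] is the DC term (the block average, scaled).
 */
void dct8x8(const double in[N][N], double out[N][N])
{
    int u, v, x, y;
    for (u = 0; u < N; u++)
        for (v = 0; v < N; v++) {
            double sum = 0.0;
            double cu = (u == 0) ? sqrt(0.5) : 1.0;
            double cv = (v == 0) ? sqrt(0.5) : 1.0;
            for (x = 0; x < N; x++)
                for (y = 0; y < N; y++)
                    sum += in[x][y]
                         * cos((2*x + 1) * u * PI / (2.0 * N))
                         * cos((2*y + 1) * v * PI / (2.0 * N));
            out[u][v] = 0.25 * cu * cv * sum;   /* 2/N = 1/4 for N = 8 */
        }
}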

Potentially more efficient, but definitely more expensive and
complicated, is vector quantization (I suppose DCT is an instance of
this, really).  In this method, you pick a set of representative
"cells" (little blocks of pixels), and then, for each block in the
input image, you find the cell in the set that "most closely" matches
the input cell.  You then output a code representing the
representative cell.  Depending on the size of your "code book", very
large compression ratios can be achieved.  (For example, if you had an
8-bit input image and two 4x4 cells, one all black and one all white, in
the code book, then your compression ratio would be 512:1, because 512
(4x4x8) bits in the input give you one bit in the output (either black
or white).  Of course, the reconstructed image would look like s**t.)
(Digression: under quite reasonable assumptions, you can "prove" that
200 bits are sufficient to encode all images.  Maybe I'll give the
details in another message sometime.)  The real trick in VQ is picking
the code book, of course.  I have seen 100:1 compression on scanned
images that ended up looking about as good (or bad, depending on your
point of view) as a VHS tape recorded on the slow speed.
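
The encoding step of VQ is just a nearest-neighbor search over the code
book.  A sketch for 4x4 blocks and squared-error distance (building the
code book, e.g. with a k-means style training pass, is the hard part
and isn't shown):

#include <limits.h>

#define BLK 16   /* a 4x4 block flattened to 16 samples */

/* Return the index of the code book entry closest (in squared error)
 * to one 4x4 block.  'codebook' holds 'ncells' entries of BLK bytes.
 */
int vq_encode_block(const unsigned char block[BLK],
                    const unsigned char *codebook, int ncells)
{
    int best = 0;
    long best_err = LONG_MAX;
    int c, i;

    for (c = 0; c < ncells; c++) {
        long err = 0;
        for (i = 0; i < BLK; i++) {
            long d = (long)block[i] - (long)codebook[c * BLK + i];
            err += d * d;
        }
        if (err < best_err) {
            best_err = err;
            best = c;
        }
    }
    return best;
}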

Do I have real code for any of these, beyond the sketches above?  Nope,
except run-length encoding.  If you're on Unix, try using 'compress' on
your images.

--
=Spencer (spencer@eecs.umich.edu)

tuna@athena.mit.edu (Kirk 'UhOh' Johnson) (06/05/90)

spencer@eecs.umich.edu (Spencer W. Thomas) writes:

    For example, if you had an 8-bit input image and two 4x4 cells,
    one all black and one all white, in the code book, then your
    compression ratio would be 512:1, because 512 (4x4x8) bits in
    the input give you one bit in the output (either black or white).

not to rain on your parade, but when i multiply 4x4x8, i get 128.
perhaps you meant 8x8 cells?
--

-------------------------------------------------------------------------------
kirk johnson                                                    `Eat blue dogs
tuna@masala.lcs.mit.edu                                          and dig life.'

dds@sunb.cc.ic.ac.uk (Diomidis Spinellis) (06/07/90)

In article <SPENCER.90Jun4170821@spline.eecs.umich.edu> spencer@eecs.umich.edu (Spencer W. Thomas) writes:
>Lossless methods:
>A popular method is LZW (Lempel-Ziv-Welch) as found in the Unix 'compress'
>program.  
[...]
>
>Simple Huffman (adaptive or otherwise; the Unix 'pack' program) may do
[...]

Some time ago I implemented a compress / uncompress set of programs for a
face server project that was going on here, based on delta modulation. The
compressed file contains the difference of each byte from the previous
one, coded in a single nibble.  I tried the delta modulation programs, 
compress and pack on a 256 * 256 * 8 gray scale picture of me.
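
In outline (a simplified sketch only, not the actual programs; clamping
large differences to the nibble range, as done here, is just one way to
handle them):

#include <stddef.h>

/* Nibble-per-sample delta coder: each pixel is coded as its difference
 * from the previous *reconstructed* pixel, clamped to the signed-nibble
 * range -8..7 and packed two to a byte.  Tracking the reconstruction on
 * the encoder side keeps the clamping error from accumulating.
 */
size_t delta_encode(const unsigned char *in, size_t n, unsigned char *out)
{
    int prev = 0;
    int have_high = 0;          /* 1 when a high nibble is pending */
    unsigned char pending = 0;
    size_t i, o = 0;

    for (i = 0; i < n; i++) {
        int d = (int)in[i] - prev;
        if (d > 7)  d = 7;
        if (d < -8) d = -8;
        prev += d;              /* the decoder's view of this pixel */
        if (!have_high) {
            pending = (unsigned char)((d & 0x0F) << 4);
            have_high = 1;
        } else {
            out[o++] = pending | (unsigned char)(d & 0x0F);
            have_high = 0;
        }
    }
    if (have_high)
        out[o++] = pending;
    return o;
}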

These are the results:

Original			65536
/bin/pack			59660
/usr/ucb/compress		53510
./deltac			40489

I can post the programs if there is demand.

Diomidis
--
Diomidis Spinellis                  Internet:                 dds@cc.ic.ac.uk
Department of Computing             UUCP:                    ...!ukc!iccc!dds
Imperial College                    JANET:                    dds@uk.ac.ic.cc
London SW7 2BZ                      #include "/dev/tty"

ashok@atrp.mit.edu (Ashok C. Popat) (06/07/90)

In article <SPENCER.90Jun4170821@spline.eecs.umich.edu> spencer@eecs.umich.edu (Spencer W. Thomas) writes:
>Lossless methods:
>
>Simple Huffman (adaptive or otherwise; the Unix 'pack' program) may do
>well if the pixel intensity (consider R,G,B values separately)
>histogram is not flat, even on scanned images.  Arithmetic coding will
>do better, but is harder to do.

Among arithmetic codes, those designed for a two-letter source
alphabet have received the most attention (for example, the Q-coder
has been talked about a lot recently).  Although it is possible to
encode an arbitrary source with a binary arithmetic code, it is not
the right thing to do for a bunch of reasons.  The right thing to do
in most cases is to use an arithmetic code that handles a multi-letter
source alphabet directly.  The basic principles of such a code are
simple but the details are a little complicated.  I describe the
details of a suitable arithmetic code in chapt. 3 of my SM thesis,
which I'll make available (in PostScript form) via anonymous ftp
somewhere if enough people are interested.  Let me
(ashok@atrp.mit.edu) know if you're interested.

Ashok Chhabedia Popat   Swiss Federal Institute of Technology,  Lausanne

gbrown@tybalt.caltech.edu (Glenn C. Brown) (06/08/90)

spencer@eecs.umich.edu (Spencer W. Thomas) writes:

>The method currently getting all the noise is DCT (discrete cosine
>transform) encoding.  The idea is to take a "Fourier transform"
>(discrete cosine transform, actually, since the values are all real)
>of little (typically 8x8, I think) blocks of the image.  Only the
>first few coefficients for each block are saved.   

In Scientific American, a short article claimed that some national
committee of something-or-other was trying to set up file standards for
such forms of compression.  They also were able to get reasonable
no-loss compression by Huffman encoding the coefficients.  (I believe
they got about 4:1 with no loss, and could get 11:1 with very little
visible loss of image quality in their 24-bit images.)

-Glenn

xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) (06/11/90)

In article <8099@b11.ingr.com> dan@b11.ingr.com (Dan Webb) writes:
>I'm looking for a space-efficient (not necessarily speed-efficient)
>algorithm for data compression of multi-bit (color) images.
>I would prefer that the technique be fully reversible, but I will
>consider anything.
>
>Thanks in advance.
>
>Dan Webb
>Intergraph Corp.

As later articles mentioned, arithmetic coding is one possibility.
Recent experiments I've performed show compressions as high as 50:1
for sparse, "line drawing" type images.  This technique is useful in
general for any dataset which is large, and makes sparse use of the
total range of possible symbols or bit patterns.  Since it accomplishes
its compression at the pixel/symbol level, rather than by run lengths,
or repeated sequences, it is somewhat less sensitive to "speckly" input
images than are other methods.

An article in Communications of the ACM a couple of years back gives
working C code for such a compression algorithm.  The code is written
to be tutorial rather than efficient, so it runs rather slowly (about
30 times slower than LZ compression, for example), but can be very space
efficient given the right sort of data.  Cleaned up (as written, the code
makes a subroutine call for every bit of the compressed file), it could
probably run a lot faster, and there are better and fuller implementations
of arithmetic compression around, as another article notes.
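
(The CACM article is Witten, Neal and Cleary, "Arithmetic Coding for
Data Compression", Communications of the ACM, June 1987.)  To show just
the interval-narrowing idea, here is a toy floating-point version with
a fixed three-letter model -- nothing like a real integer
implementation, but it round-trips short messages:

#include <stdio.h>
#include <string.h>

/* Toy arithmetic coder: fixed 3-letter model, floating point, and a
 * message short enough that double precision doesn't run out.  A real
 * coder uses integer arithmetic, incremental bit output and an
 * adaptive model; this only shows the principle.
 */
static const char   sym[3]  = { 'a', 'b', 'c' };
static const double prob[3] = { 0.7, 0.2, 0.1 };        /* the model  */
static const double cum[4]  = { 0.0, 0.7, 0.9, 1.0 };   /* cumulative */

static int index_of(char c)
{
    return c == 'a' ? 0 : c == 'b' ? 1 : 2;
}

/* Encode: narrow [low, high) once per symbol, return a value inside. */
static double encode(const char *msg)
{
    double low = 0.0, high = 1.0;
    size_t i;
    for (i = 0; msg[i]; i++) {
        int s = index_of(msg[i]);
        double range = high - low;
        high = low + range * cum[s + 1];
        low  = low + range * cum[s];
    }
    return (low + high) / 2.0;
}

/* Decode: find the subinterval the value falls in, then rescale. */
static void decode(double v, size_t n, char *out)
{
    size_t i;
    for (i = 0; i < n; i++) {
        int s = (v < cum[1]) ? 0 : (v < cum[2]) ? 1 : 2;
        out[i] = sym[s];
        v = (v - cum[s]) / prob[s];
    }
    out[n] = '\0';
}

int main(void)
{
    const char *msg = "aabacaba";
    char back[16];
    double v = encode(msg);
    decode(v, strlen(msg), back);
    printf("%s -> %.15f -> %s\n", msg, v, back);
    return 0;
}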

Kent, the man from xanth.
<xanthian@Zorch.SF-Bay.ORG> <xanthian@well.sf.ca.us>

-- 
in the distance a roasted cave newt screamed in agony -- Andrew Palfreyman

krukar@pprg.unm.edu (Richard Krukar [CHTM]) (06/12/90)

	I worked with various image compression schemes a few years ago
and I found a very disgusting method.  Two steps:
    1) Write up your favorite predictor,
	example: err(row,col) = image(row,col) - image(row, col-1),
       and run it on your image to obtain an image of prediction errors.
       Don't pack the bits.  Store each number in bytes, shorts, or words.

    2) Run compress (LZW compression) on the error image.

	This method is fast, lossless and very easy to implement.  It will
also give you a way to measure your own method's performance.  This technique
shamelessly trashed a number of multi-resolution based, tree based, fractal
based, DCT based, ... methods.
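
	Step 1 in code, for the horizontal predictor above (a sketch;
predicting the first column from 0 is an arbitrary choice):

/* Horizontal predictor: each sample is predicted by its left neighbour
 * (the first column is predicted by 0) and the prediction error is
 * stored unpacked, one short per sample, ready to feed to compress.
 * Exactly reversible: image(row,col) = err(row,col) + left neighbour.
 */
void predict_rows(const unsigned char *img, int rows, int cols, short *err)
{
    int r, c;
    for (r = 0; r < rows; r++)
        for (c = 0; c < cols; c++) {
            int left = (c == 0) ? 0 : img[r * cols + c - 1];
            err[r * cols + c] = (short)(img[r * cols + c] - left);
        }
}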

	Regards
	Richard Krukar  (krukar@pprg.unm.edu)

hammondp@ingr.com (Paul Hammond) (06/13/90)

In article <8099@b11.ingr.com> dan@b11.ingr.com (Dan Webb) writes:
>I'm looking for a space-efficient (not necessarily speed-efficient)
>algorithm for data compression of multi-bit (color) images.
>I would prefer that the technique be fully reversible, but I will
>consider anything.
>
>Thanks in advance.
>
>Dan Webb
>Intergraph Corp.

in article <1990Jun7.220852.5994@laguna.ccsf.caltech.edu>, gbrown@tybalt.caltech.edu (Glenn C. Brown) says:
> 
> spencer@eecs.umich.edu (Spencer W. Thomas) writes:
> 
>>The method currently getting all the noise is DCT (discrete cosine
>>transform) encoding.  The idea is to take a "Fourier transform"
>>(discrete cosine transform, actually, since the values are all real)
>>of little (typically 8x8, I think) blocks of the image.  Only the
>>first few coefficients for each block are saved.   
> 
> In Scientific American, a short article claimed that some national
> committee of something-or-other was trying to set up file standards for
> such forms of compression.  They also were able to get reasonable
> no-loss compression by Huffman encoding the coefficients.  (I believe
> they got about 4:1 with no loss, and could get 11:1 with very little
> visible loss of image quality in their 24-bit images.)

About a year ago, I read something about the DCT scheme in a "journal"
called "DSP Review", dated "winter, 1989".  This was published by the
manufacturer of some DSP chip (AT&T?).  The author claimed that the
scheme described was able to give 32:1 compression.

The author of that article was :

 David M Blaker
 Member of Technical Staff
 AT&T European Technology Center

A summary,

 "To summarize, the image is successively subsampled to a one-quarter
  size image, and to a one-sixteenth size image.  Each image is converted
  from RGB to one luminance and two chrominance components.  Since it
  is well known that the human eye is much less sensitive to high
  spatial-frequency variations in color than in brightness, each
  chrominance component is only one-quarter the size of the luminance
  component.  Each component of the lowest resolution image is broken
  into 8X8 blocks of data that are transformed by the DCT. ..."

This then leads to some (unspecified) "quantizing" of the frequency
coefficients followed by Huffman encoding, difference images and...

Oh well, you'll have to read it yourself.
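
As I understand it, the colour-space step could look something like the
sketch below.  The article doesn't give its exact conversion matrix, so
the BT.601-style weights here are my own assumption, as is the 2x2
averaging used to shrink the chrominance planes:

/* RGB -> one luminance plane plus two quarter-size chrominance planes.
 * The weights are the usual BT.601-style ones (an assumption, not the
 * article's matrix); 'w' and 'h' are assumed even.
 */
void rgb_to_ycc_subsampled(const unsigned char *rgb, int w, int h,
                           double *Y, double *Cb, double *Cr)
{
    int x, y, dx, dy;

    for (y = 0; y < h; y++)
        for (x = 0; x < w; x++) {
            const unsigned char *p = rgb + 3 * (y * w + x);
            Y[y * w + x] = 0.299 * p[0] + 0.587 * p[1] + 0.114 * p[2];
        }

    for (y = 0; y < h; y += 2)
        for (x = 0; x < w; x += 2) {
            double cb = 0.0, cr = 0.0;
            for (dy = 0; dy < 2; dy++)
                for (dx = 0; dx < 2; dx++) {
                    const unsigned char *p =
                        rgb + 3 * ((y + dy) * w + (x + dx));
                    cb += -0.169 * p[0] - 0.331 * p[1] + 0.500 * p[2];
                    cr +=  0.500 * p[0] - 0.419 * p[1] - 0.081 * p[2];
                }
            Cb[(y / 2) * (w / 2) + x / 2] = cb / 4.0;
            Cr[(y / 2) * (w / 2) + x / 2] = cr / 4.0;
        }
}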

There were several inconsistencies in the equations in the article,
which I attempted to correct.  Experimentation failed to produce the
specified compression ratio.  Perhaps I misunderstood the algorithm.

The article mentioned that an ISO standard for DCT image compression
was being prepared.  A document, 

	ISO/JTC1/SC2/WG8 N800
 	titled, "Initial Draft for Adaptive Discrete Cosine Transform
		 Technique for Still Picture Data Compression Standard"

was also mentioned.  Our "Information Center" (library) was unable to
find any information about this from ISO except two names of members of
the standards committee.

        Donald Thelen, AT&T (office location unknown by me)

        Charles Touchtone (sp?) 
        IBM / Tampa
        (813) 878-3084

Perhaps this standard has proceeded to the point where a draft is actually
available from ISO.  Anyone know ?


-----------------------
Paul L. Hammond
Intergraph Corp.
ingr!b17b!ocelot!paul

wes@maui.lbl.gov (Wes Bethel) (06/13/90)

[stuff deleted]
>  (I believe
>they got about 4:1 with no loss, and could got 11:1 with very little
>visible loss of image quality in their 24-bit images.
>

I'm wondering if anyone has a feel for whether the 'loss' is
cumulative or whether it ceases after the initial 'hit'.  I can envision
image compression losses occurring in the first application of the
cosine transformation, but not occurring in subsequent applications.

Can the same be said of other compression techniques (where loss
can occur) as well?

Enquiring minds....

wes

jgk@demo.COM (Joe Keane) (06/13/90)

In article <27663@pprg.unm.edu> krukar@pprg.unm.edu (Richard Krukar [CHTM])
writes:
>	I worked with various image compression schemes a few years ago
>and I found a very disgusting method.  Two steps:
>    1) Write up your favorite predictor.
>    2) Run compress (LZW compression) on the error image.

I don't think this is disgusting at all.  Compress is a useful, general
utility, so there's no point in re-implementing it.  Actually, squeeze is a
bit better, but it's the same idea.

I've had good success with making pre-processors on bitmap files to go before
compress or squeeze.  You make up a new file format, and convert your images
to this format before compression.  The important thing to keep in mind is
that compress uses bytes as input tokens, so you want to make these
meaningful.

One simple algorithm is just to express the bits in hex rather than binary.
This makes the uncompressed file twice as big, but usually reduces the size
of the compressed file.  A better way is to have the hex digits represent 4
bits stacked vertically, and then run these in the normal order.  I wrote
filters to convert to and from this HV (hex vertical) format.  Simple as it
is, this compression method beats any other I've seen overall.
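
Here's roughly what the HV filter does, for a bitmap held as one 0/1
byte per pixel (a sketch, not the actual filter; the height is assumed
to be a multiple of 4):

#include <stdio.h>

/* "Hex vertical": take the bitmap 4 rows at a time, turn each column of
 * 4 bits into one nibble, and write that nibble as an ASCII hex digit.
 * The file doubles in size, but compress then sees one meaningful byte
 * per 4-bit column instead of packed, unaligned bits.
 */
void hv_encode(const unsigned char *bits, int w, int h, FILE *out)
{
    int x, y, k;
    for (y = 0; y < h; y += 4)
        for (x = 0; x < w; x++) {
            int nib = 0;
            for (k = 0; k < 4; k++)
                nib = (nib << 1) | (bits[(y + k) * w + x] & 1);
            fputc("0123456789abcdef"[nib], out);
        }
}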

Of course, if you're dealing with gray-scale or color images, you want to do
some arithmetic coding.  If you do, you're better off using characters for
tokens than writing your output in binary.

jrg@otter.hpl.hp.com (John Grinham) (06/13/90)

>In Scientific American, a short article claimed that some national
>committee of something-or-other was trying to set up file standards for
>such forms of compression.

JPEG - Joint Photographic Experts Group - ISO/IEC JTC1/SC2/WG8 
					  CCITT SGVIII
					Status: Draft 5 Jan 1990.
			
	JPEG is a sub-working group of ISO and CCITT. It is in the process
	of developing an international standard for continuous tone (gray-scale
	or colour) still image compression.
 
	DCT based but with DPCM frills etc...
	
	The JPEG chair is Gregory Wallace at DEC; Pennebaker (IBM) and
	Mitchell (IBM) are also involved.  Ccubed Microsystems (San Jose,
	408-944-6300) are sampling a chip which they say will be JPEG
	compatible and will also be capable of doing JPEG-type compression
	on full video at 30 f/s.

MPEG - Moving Pictures Experts Group. - ISO/IEC JTC1/SC2/WG8 
					Status: Draft March 1990.
	Aim: to develop standards for storage and retrieval of moving
	picture images and sound for digital storage media.

Hope this clears things up ..!

Best regards,
John.

-----------------------------------------------------------------------------
John Grinham.  HP Labs ISC, Filton Road, Stoke Gifford, Bristol BS12 6QZ, UK.
Tel: (0)272 799910	Fax: (0)272 790554 	Telex: 449206
jrg@hplb.uucp		jrg@hplb.hp.co.uk 	jrg@hplb.hpl.hp.com
-----------------------------------------------------------------------------

raveling@isi.edu (Paul Raveling) (06/14/90)

In article <2926@demo.COM>, jgk@demo.COM (Joe Keane) writes:
> In article <27663@pprg.unm.edu> krukar@pprg.unm.edu (Richard Krukar [CHTM])
> writes:
> >	I worked with various image compression schemes a few years ago
> >and I found a very disgusting method.  Two steps:
> >    1) Write up your favorite predictor.
> >    2) Run compress (LZW compression) on the error image.
> 
> I don't think this is disgusting at all.  Compress is a useful, general
> utility, so there's no point in re-implementing it.

	I agree, though it seems to me that its implementation should
	run faster.  Someday I'll probably try to reimplement a fast
	version of the same algorithm.

	You might want to look at module fileutil.c in the imglib
	portion of the Img Software Set.  Two functions, open_yafile
	and close_yafile, encapsulate a simple way to do this:

	If the name of the file to be opened ends in .Z, open_yafile
	invokes uncompress or compress and returns a pipe from/to
	that program, using popen(); otherwise it simply opens the file.
	The application can then read or write files as if they're
	simple uncompressed data, even if they're actually compressed.
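
	A self-contained version of the same trick (not the actual
	imglib source, and with error checking omitted) looks roughly
	like this:

#include <stdio.h>
#include <string.h>

/* Open 'name' for reading or writing; if the name ends in ".Z", go
 * through compress/uncompress via popen() instead of opening the file
 * directly.  The caller passes the returned flag back to zclose() so
 * it knows whether to pclose() or fclose().
 */
FILE *zopen(const char *name, const char *mode, int *is_pipe)
{
    size_t len = strlen(name);

    if (len > 2 && strcmp(name + len - 2, ".Z") == 0) {
        char cmd[1024];
        *is_pipe = 1;
        if (mode[0] == 'r')
            sprintf(cmd, "uncompress < %s", name);
        else
            sprintf(cmd, "compress > %s", name);
        return popen(cmd, mode[0] == 'r' ? "r" : "w");
    }
    *is_pipe = 0;
    return fopen(name, mode);
}

void zclose(FILE *fp, int is_pipe)
{
    if (is_pipe)
        pclose(fp);
    else
        fclose(fp);
}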


----------------
Paul Raveling
Raveling@isi.edu

zmact61@doc.ic.ac.uk (D Spinellis) (06/17/90)

In article <1990Jun6.215749.21647@cc.ic.ac.uk> dds@cc.ic.ac.uk (Diomidis Spinellis) writes:
[...]
>Some time ago I implemented a compress / uncompress set of programs for a
[...]
>These are the results:
>
>Original			65536
>/bin/pack			59660
>/usr/ucb/compress		53510
>./deltac			40489

Source has been posted in the newsgroup comp.sources.misc:

	From: dds@cc.ic.ac.uk (Diomidis Spinellis)
	Newsgroups: comp.sources.misc
	Subject: v13i048: Image compression using delta modulation
	Message-ID: <93335@uunet.UU.NET>
	Date: 15 Jun 90 22:59:49 GMT

Diomidis

ashok@atrp.mit.edu (Ashok C. Popat) (07/03/90)

In article <1990Jun7.114905.1714@athena.mit.edu> ashok@atrp.mit.edu (Ashok C. Popat) writes:
>In article <SPENCER.90Jun4170821@spline.eecs.umich.edu> spencer@eecs.umich.edu (Spencer W. Thomas) writes:
>>Lossless methods:
>>
>>Simple Huffman (adaptive or otherwise; the Unix 'pack' program) may do
>>well if the pixel intensity (consider R,G,B values separately)
>>histogram is not flat, even on scanned images.  Arithmetic coding will
>>do better, but is harder to do.
>
>Among arithmetic codes, those designed for a two-letter source
>alphabet have received the most attention (for example, the Q-coder
>has been talked about a lot recently).  Although it is possible to
>encode an arbitrary source with a binary arithmetic code, it is not
>the right thing to do for a bunch of reasons.  The right thing to do
>in most cases is to use an arithmetic code that handles a multi-letter
>source alphabet directly.  The basic principles of such a code are
>simple but the details are a little complicated.  I describe the
>details of a suitable arithmetic code in chapt. 3 of my SM thesis,
>which I'll make available (in PostScript form) via anonymous ftp
>somewhere if enough people are interested.  Let me
>(ashok@atrp.mit.edu) know if you're interested.

The thesis and some C code to do arithmetic coding are now available
for anonymous ftp on cod.nosc.mil (internet number = ?) in /pub under
arith_coding.tar.Z.  Thanks much to herman@marlin.nosc.mil for kindly
putting it there.

Ashok Chhabedia Popat   Swiss Federal Institute of Technology,  Lausanne