[comp.compression] JPEG obsolete

csu@alembic.acs.com (Dave Mack) (06/01/91)

In article <1991May30.203723.21550@ns.network.com> logajan@ns.network.com (John Logajan) writes:
>I'm reporting this without understanding it, but Don
>Lancaster claims that JPEG (which he says uses a
>DCT -- discrete cosine transform) is obsolete because
>the new "wavelet" compression methods make it look slow,
>much less compressive, and lower in fidelity by comparison.

I can only base my comments on very limited experimentation with
the Yale wavelet package, my understanding of which is poor at best,
but it appears that wavelet image compression is drastically slower
than JPEG, the fidelity is dreadful, and the compression is not
nearly as good.

On a Sun-4/110: (times are for compression only)

c.pgm	266014 bytes	256x256 grayscale, 120/256 grayshades, ASCII format
c.gif	 46180 bytes	(ppmtogif - 6.5 seconds real time)
c.jpeg    7249 bytes	(ppmtojpeg - 9 seconds real time)
c.c2p    21859 bytes	(c2p/std filters/20% threshold ~6 minutes real time)
c.c2a  1429977 bytes	(c2a/std filters/20% threshold ~12 minutes real time;
			decompression failed with a segmentation fault)

c2p - Yale compress with 2-dimensional periodic wavelet packets
c2a - Yale compress with 2-dimensional aperiodic wavelet packets
The times associated with these compressors are approximate because
they require interactive input (unless you know how to build their
parameter files - I don't.)

The decompressed c2p image is recognizable, but looks smeared, as if
seen through a camera lens covered with water droplets.

We can safely assume that the output of c2a is incorrect for
some reason (just as well!) given the core dump. This is probably
my fault - perhaps selecting a margin-width of zero is bad.

The decompressed JPEG image is visually indistinguishable from the
input image. Decompression took 2.7 seconds. The fact that this
was a grayscale image aids JPEG, which tends to retain image
structure while distorting color values.

None of this should be taken as condemnation of either wavelet
compression in general or the Yale package in particular. The
Yale package has quite a few adjustable parameters, none of which
are clearly documented in the HELP file, which is the only documentation
I have for it. It is possible that I managed to pick really bad
parameter values (likely, even). I would be delighted to repeat
this experiment with a more intelligent choice of filter, filter
range, threshold, et al., if anyone has any recommendations.

The Yale wavelet package is available for anonymous ftp from
yale.edu. I believe that the directory is /wavelet/binaries;
you may have to hunt around. It contains the binaries for Sun 3/4
and a couple of other architectures. Source code may or may not be 
available if you sign a nondisclosure agreement with Yale - see the
README file in /wavelet.

The PD JPEG implementation is still under development and will
probably be ready for release sometime this summer. Contact
jpeg-request@think.com if you are interested in being added to
the PD JPEG mailing list.

-- 
Dave Mack

ulrich@cs.tu-berlin.de (Ulrich Wittenberg) (06/03/91)

Hello.

I tried to obtain the wavelet package from yale.edu, but their ftp
server didn't like me :-(

Well, I'm very interested in this kind of stuff, so would the people
at Yale please tell their ftp server to like me or could some kind
soul provide another ftp address?

Greeting from Berlin    Ulrich

P.S.: I think we should agree on a set of standard images to test the
various compression schemes.

Ulrich Wittenberg, FG Computer Graphics, Institut fuer technische Informatik
Fachbereich 20 (Informatik), TU Berlin, Franklinstrasse 28/29, 
1000 Berlin 10, Germany, Europe, Terra

campbell@sol.cs.wmich.edu (Paul Campbell) (06/04/91)

Wavelet compression is a funny beast: from what I understand, the amount
of compression is related linearly to the quality. If you set the
compression too high, the image looks like it was run through a hard
rain, which almost has merits on its own as a really neat image
transformation.

eero@media-lab.media.mit.edu (Eero Simoncelli) (06/15/91)

For those of you interested in Wavelet compression schemes, I have a
very fast (on the decompression end) pyramid wavelet coder.  It works
reasonably well (in informal comparisons it looked noticeably better
than JPEG), but there is certainly room for improvement.  It is available
via anonymous ftp -- here is the README file:


--------------------------------------------------------------------- 
---		 EPIC (Efficient Pyramid Image Coder)             ---
---	 Designed by Eero P. Simoncelli and Edward H. Adelson     ---
---		    Written by Eero P. Simoncelli                 ---
---  Developed at the Vision Science Group, The Media Laboratory  ---
---	Copyright 1989, Massachusetts Institute of Technology     ---
---			 All rights reserved.                     ---
---------------------------------------------------------------------

Permission to use, copy, or modify this software and its documentation
for educational and research purposes only and without fee is hereby
granted, provided that this copyright notice appear on all copies and
supporting documentation.  For any other uses of this software, in
original or modified form, including but not limited to distribution
in whole or in part, specific prior permission must be obtained from
M.I.T. and the authors.  These programs shall not be used, rewritten,
or adapted as the basis of a commercial software or hardware product
without first obtaining appropriate licenses from M.I.T.  M.I.T. makes
no representations about the suitability of this software for any
purpose.  It is provided "as is" without express or implied warranty.

---------------------------------------------------------------------

EPIC (Efficient Pyramid Image Coder) is an experimental image data
compression utility written in the C programming language.  The
compression algorithms are based on a hierarchical subband
decomposition using asymmetric filters (described in the references
given below) and a combined run-length/Huffman entropy coder.  The
filters have been designed to allow extremely fast decoding on
conventional (i.e., non-floating-point) hardware, at the expense of
slower encoding.
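The hierarchical subband structure can be illustrated with the simplest possible filter pair, the Haar filters; EPIC's actual asymmetric 3-tap filters differ (see the references below), but the recursion is the same: filter, downsample by two into a low band and a high band, then recurse on the low band. A minimal one-level sketch:

```c
/* One level of a 1-D two-band split, using Haar filters for
   illustration only (EPIC uses asymmetric 3-tap filters).
   Each level halves the signal into a coarse lowpass band
   and a detail highpass band. */
void analyze_haar(const double *x, int n, double *lo, double *hi)
{
    int i;
    for (i = 0; i < n / 2; i++) {
        lo[i] = (x[2*i] + x[2*i + 1]) / 2.0;  /* coarse (lowpass)   */
        hi[i] = (x[2*i] - x[2*i + 1]) / 2.0;  /* detail (highpass)  */
    }
}

/* Exact inverse of the split above: perfect reconstruction. */
void synthesize_haar(const double *lo, const double *hi, int n, double *x)
{
    int i;
    for (i = 0; i < n / 2; i++) {
        x[2*i]     = lo[i] + hi[i];
        x[2*i + 1] = lo[i] - hi[i];
    }
}
```

A full pyramid simply calls the analysis step again on the lo band until the desired number of levels is reached, which is what the -l option below controls.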

We are making this code available to interested researchers who wish
to experiment with a subband pyramid coder.  We have attempted to
optimize the speed of pyramid reconstruction, but the code has not
been otherwise optimized, either for speed or compression efficiency.
In particular, the pyramid construction process is unnecessarily slow,
quantization binsizes are chosen to be the same for each subband, and
we have used a very simple scalar entropy coding scheme to compress
the quantized subbands.  Although this coding technique provides good
coding performance, a more sophisticated coding scheme (such as vector
quantization) using the same pyramid decomposition could result in
substantial coding gains.  EPIC is currently limited to 8-bit
monochrome square images, and does not explicitly provide a
progressive transmission capability.

Epic is available via anonymous ftp from whitechapel.media.mit.edu (IP
number 18.85.0.125) in the file pub/epic.tar.Z.  Comments,
suggestions, or questions should be sent to:

  Eero P. Simoncelli
  Vision Science Group
  MIT Media Laboratory, E15-385
  Cambridge, MA  02139

  Phone:  (617) 253-3891,    E-mail: eero@media-lab.media.mit.edu

References:

Edward H. Adelson, Eero P. Simoncelli and Rajesh Hingorani.  Orthogonal
   pyramid transforms for image coding.  In Proceedings of SPIE,
   October 1987, Volume 845.

Eero P. Simoncelli.  Orthogonal Sub-band Image Transforms.  Master's Thesis,
   EECS Department, Massachusetts Institute of Technology. May, 1988.

Edward H. Adelson, Eero P. Simoncelli.  Subband Image Coding with
   Three-tap Pyramids.  Picture Coding Symposium, 1990.  Cambridge, MA.

USAGE:
------

Typing "epic" gives a description of the usage of the command:

 epic infile [-o outfile] [-x xdim] [-y ydim] [-l levels] [-b binsize]

An example call might look like this:
 
 epic /images/lenna/data -o test.E -x 512 -y 512 -l 5 -b 33.45

Note that:

	1) the infile argument must be a file containing raw image data
	2) this file must be an 8 bit file (not 32 bit or float)
	3) if the size of the image is different from 256x256, you
	   must specify it on the command line.  Currently, the code is
	   limited to square images only.
	4) the binsize can be any floating point number.  Larger
	   numbers give higher compression rates, smaller numbers
	   give better image quality.  Using a binsize of zero should
	   give perfect reconstruction.
	5) Color images can be compressed best by converting from rgb
	   to yiq and compressing each of the components separately.
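The RGB-to-YIQ conversion mentioned in item 5 is just the standard NTSC matrix (the coefficients below are the usual published values, not anything EPIC-specific):

```c
/* RGB -> YIQ using the standard NTSC matrix.  Y carries the
   luminance, so it can be compressed with a smaller binsize
   than the I and Q chroma planes, where the eye is less
   sensitive.  Inputs are assumed to be in [0, 1]. */
void rgb_to_yiq(double r, double g, double b,
                double *y, double *i, double *q)
{
    *y = 0.299 * r + 0.587 * g + 0.114 * b;
    *i = 0.596 * r - 0.274 * g - 0.322 * b;
    *q = 0.211 * r - 0.523 * g + 0.312 * b;
}
```

Each plane would then be written out as a raw 8-bit file and run through epic separately.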

The decompression command "unepic" is much easier to use.  Typing
"unepic test.E" will create a raw 8-bit data file called "test.E.U".
If you don't like that name, you can specify a different name as an
optional second argument.

-- 
Eero Simoncelli
Vision Science Group
MIT Media Laboratory, E15-385
Cambridge, MA  02139