[comp.compression] RE JPEG info

venn@iobj.uucp (Gary Venn) (05/30/91)

Ping Huang writes:
>I was wondering if anyone here knew of any articles in journals/magazines or
>would be able to give a brief but lucid explanation of the compression
>algorithm being used by the JPEG standard.

Check out the April 1991 issue of Communications of the ACM.

The basic JPEG algorithm, called the baseline system, consists of three parts:
a DCT (discrete cosine transform), a quantizer, and a Huffman encoder. This
system is lossy and gives about 20 to 1 compression on a straight 24-bit color
image. The standard also includes several other compression paths, among which
are a lossless algorithm and a path with an arithmetic coder replacing the
Huffman coder.
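The DCT and quantizer stages can be sketched in a few lines of numpy. This is
an illustrative sketch, not the reference implementation; Q_LUM is the example
luminance table from the standard's informative annex, and the level shift of
128 follows the baseline convention for 8-bit samples.

```python
import numpy as np

N = 8

# Example luminance quantization table from the JPEG draft's
# informative annex (encoders may supply their own tables).
Q_LUM = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]], dtype=float)

def dct_matrix(n=N):
    """Orthonormal DCT-II basis matrix: row k is the k-th cosine basis."""
    c = np.zeros((n, n))
    for k in range(n):
        a = np.sqrt(1.0 / n) if k == 0 else np.sqrt(2.0 / n)
        for x in range(n):
            c[k, x] = a * np.cos((2 * x + 1) * k * np.pi / (2 * n))
    return c

C = dct_matrix()

def fdct(block):
    """Forward 2D DCT of one 8x8 spatial block (separable: rows, then columns)."""
    return C @ block @ C.T

def quantize(coeffs, qtable=Q_LUM):
    """Divide each coefficient by its table entry and round to an integer --
    this is the lossy step."""
    return np.round(coeffs / qtable).astype(int)
```

For a flat block (every sample 52, level-shifted by 128), all the energy lands
in the single DC coefficient and every AC coefficient quantizes to zero.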

The baseline system basically works as follows. The image is first carved into
contiguous 8x8 data blocks and sampled according to the user's wishes; i.e.,
skipping blocks of certain color components relative to other components. Each
block is then run through the DCT, which transforms an 8x8 2D spatial block
into an 8x8 block of cosine frequency coefficients. As one traces the resulting
transformed matrix from the upper left toward the lower right (in practice, a
zigzag order), one runs through coefficients corresponding to higher and higher
frequencies. Because the human eye is relatively insensitive to these higher
frequencies, the quantizer can represent them coarsely or discard them
entirely. This quantization, together with the sampling ratios above, is the
key to the JPEG baseline compression ratios; it is also where the most
significant loss takes place. The Huffman coder then achieves high compression
ratios by taking advantage of the long runs of zero values among the
high-frequency coefficients.
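To see why the trace order matters to the entropy coder, here is a small
sketch: after quantization the nonzero coefficients cluster in the upper-left
(low-frequency) corner, so a zigzag scan ends in one long run of zeros.
Illustrative only; real baseline JPEG codes (run, size) pairs and replaces the
final run with a single end-of-block symbol.

```python
def zigzag(block):
    """Read an 8x8 block in JPEG zigzag order, DC coefficient first.
    Entries on each anti-diagonal alternate direction."""
    key = lambda p: (p[0] + p[1],
                     p[0] if (p[0] + p[1]) % 2 else -p[0])
    idx = sorted(((r, c) for r in range(8) for c in range(8)), key=key)
    return [block[r][c] for r, c in idx]

def trailing_zero_run(seq):
    """Length of the run of zeros at the end of the scan -- the part an
    end-of-block code collapses to almost nothing."""
    run = 0
    for v in reversed(seq):
        if v != 0:
            break
        run += 1
    return run
```

A typical quantized block with only three nonzero low-frequency coefficients
leaves a 61-coefficient tail of zeros for the coder to sweep up.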
___________________________________________________________________________
Garrison M. Venn
	Interactive Objects Software GmbH (iO), Elzach, Germany
	Email: venn@iobj.uucp 
	Internet: venn%iobj.uucp@unido.informatik.uni-dortmund.de
___________________________________________________________________________