cn@allgfx.agi.oz (Con Neri) (01/04/91)
About a month ago I asked the users of this group for certain
information about the JPEG compression algorithm. As promised, here
are the results of the query. Should anyone have any further
information, please post it to this group or e-mail it to me. Many
thanks go to Tom and Julien for their replies.

From: Tom.Lane%G.GP.CS.CMU.EDU@munnari

Documents are a bit hard to come by. The Sept. 90 Comm. ACM has a
sample photo (page 32); the same article references a "technical
overview" article in the proceedings of the 1990 SPIE symposium on
electronic imaging science and technology. (I haven't seen the
overview.) The January 1991 MacWorld also has a useful article. You
can order a copy of the draft standard from the X3 subcommittee of
your local ISO/CCITT organization. (If you can't find who that is
down in Oz, I have the US address.)

The executive summary is:

1. Transform the image into YUV color space and work on each color
   component separately.
2. Divide the picture into 8x8 pixel blocks. Optionally subsample
   the U and V components, relying on the eye's lower sensitivity to
   chrominance changes.
3. Transform each 8x8 block through a discrete cosine transform
   (DCT); this is a relative of the FFT and likewise gives a
   frequency map (with 8x8 components).
4. Quantize *in the frequency domain*, by dividing each of the 64
   frequency components by a separate quantization coefficient and
   rounding the results to integers. This is a "lossy" step in the
   sense that it is not 100% reversible. Typically the higher
   frequencies are reduced more than the lower ones.
5. Huffman- or arithmetic-code the results.

(If that didn't make sense to you, don't bother trying to read the
draft standard.)

I'm heading up a group that intends to produce a freely available
JPEG implementation. We have only limited experience so far, but it
seems that images can be represented in about one-fifth the space
needed by GIF.
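For the curious, steps 3 and 4 of Tom's summary can be sketched in a few
lines of Python. This is only an illustration of the math, not JPEG-conformant
code (the quantization table here is a made-up uniform one, and real codecs
use fast DCT algorithms rather than the direct formula):

```python
import math

def dct_2d(block):
    """Direct 8x8 forward DCT-II (step 3). `block` is an 8x8 list of
    pixel values; output is an 8x8 frequency map."""
    def c(k):
        # Normalization factor: 1/sqrt(2) for the zero-frequency term.
        return 1.0 / math.sqrt(2.0) if k == 0 else 1.0
    out = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            s = 0.0
            for x in range(8):
                for y in range(8):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            out[u][v] = 0.25 * c(u) * c(v) * s
    return out

def quantize(coeffs, qtable):
    """Step 4, the lossy step: divide each frequency component by its
    quantization coefficient and round to an integer."""
    return [[round(coeffs[u][v] / qtable[u][v]) for v in range(8)]
            for u in range(8)]

# A uniform block has only a DC term: every AC coefficient comes out zero,
# which is exactly why flat image areas compress so well after step 5.
flat = [[10] * 8 for _ in range(8)]
freq = dct_2d(flat)
qtable = [[16] * 8 for _ in range(8)]   # illustrative uniform table
quantized = quantize(freq, qtable)
```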
If you have access to internet FTP you can retrieve our discussions
and prototype code by anonymous FTP to think.com; look under
directory jpeg.

--
tom lane
Internet:   tgl@cs.cmu.edu
UUCP:       <your favorite internet/arpanet gateway>!cs.cmu.edu!tgl
BITNET:     tgl%cs.cmu.edu@cmuccvma
CompuServe: >internet:tgl@cs.cmu.edu

------------------------------------------------------------------------

From: <jnicolas%ATHENA.MIT.EDU@munnari>

You can obtain the JPEG draft from several places. It is still a
draft, although some manufacturers have gone ahead and implemented
the algorithm in hardware (e.g. C-Cube Microsystems). One such place
is:

JPEG Draft Technical Specification
X3 Secretariat: Computer and Business Equipment Manufacturers Association
311 First Street NW, Suite 500
Washington, DC 20001-2178

The JPEG (Joint Photographic Experts Group) draft standard specifies
a DCT-based image compression algorithm. The general ideas are very
standard and can be found in almost any digital image processing
textbook. However, a lot of effort went into optimizing the algorithm
both for speed and for subjective quality. Typical compression ratios
are around 30:1 for 24-bit RGB images of reasonably high resolution.
Higher compression ratios are of course selectable. The image quality
degrades quite gracefully as compression ratios are increased.

Hope this helps.
Julien

----------------------------------------------------------------------
CON NERI            All Graphic R+D
e-mail: cn@allgfx.agi.oz.au
49-53 Barry ST      tele: +61-3-3471722
Carlton             fax : +61-3-3472175
Vic 3053  AUSTRALIA
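To put the 30:1 figure in perspective, here is the back-of-the-envelope
arithmetic (the 640x480 frame size is just an illustrative choice, not
anything from the draft):

```python
# Rough size arithmetic for Julien's "typical 30:1" claim.
width, height = 640, 480              # illustrative frame size
raw_bytes = width * height * 3        # 24-bit RGB: 3 bytes per pixel
ratio = 30                            # typical ratio quoted above
compressed_bytes = raw_bytes // ratio

# raw_bytes is 921,600 (about 900 KB of raw RGB);
# at 30:1 the compressed image is about 30 KB.
```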
flint@gistdev.gist.com (Flint Pellett) (01/11/91)
cn@allgfx.agi.oz (Con Neri) writes:

>From: <jnicolas%ATHENA.MIT.EDU@munnari>
>
>You can obtain the JPEG draft from several places. It is still a draft
>although some manufacturers have gone ahead and implemented the
>algorithm in hardware (e.g. C-Cube Microsystems). One such place is
>
>JPEG Draft Technical Specification
>X3 Secretariat: Computer and Business Equipment Manufacturers Association
>311 First Street NW, Suite 500
>Washington, DC 20001-2178
>
>The JPEG (Joint Photographic Experts Group) draft standard implements a
>DCT based image compression algorithm. The general ideas are very
>standard and can be found in almost any digital image processing
>textbook. However, a lot of effort went into optimizing the algorithm
>both for speed and subjective quality. Typical compression ratios are
>around 30:1 for 24 bit RGB images with reasonably high resolution.
>Higher compression ratios are of course selectable. The image quality
>degrades quite gracefully as compression ratios are increased.

This doesn't say how much time is required to do the decompression.
(I couldn't care less if it takes 20 minutes to compress an image, as
long as it can be decompressed rapidly.) Before JPEG can displace
something like GIF, it's going to have to be able to decompress images
in something under 5 seconds on a typical PC (and obviously, if you
want it to handle animation, you're going to have to get down into the
fractional-second range). I've not seen any concrete data on
decompression speed other than one article that quoted times around 2
minutes (yeech!) from some package whose name I don't remember. Can
anyone who knows tell us what speed is available right now, both from
software and from hardware solutions? Thanks.

--
Flint Pellett, Global Information Systems Technology, Inc.
1800 Woodfield Drive, Savoy, IL 61874  (217) 352-1165
uunet!gistdev!flint or flint@gistdev.gist.com
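One reason decompression speed varies so wildly between packages is the
inverse DCT. A rough operation count (just counting multiplies from the
DCT formula itself; entropy decoding and color conversion are extra)
shows why a naive implementation is painful and why the separable
row-column trick is the bare minimum for a usable decoder:

```python
# Multiply counts for the 8x8 inverse DCT, per block and per pixel.

# Naive: each of the 64 output pixels sums over all 64 coefficients.
naive_per_block = 64 * 64            # 4096 multiplies per block
naive_per_pixel = naive_per_block // 64   # 64 per pixel

# Separable (row-column): 8 one-dimensional 8-point transforms on the
# rows (8 outputs x 8 multiplies each), then the same on the columns.
one_d_per_line = 8 * 8               # 64 multiplies per 8-point 1-D DCT
separable_per_block = 2 * 8 * one_d_per_line   # 1024 per block
separable_per_pixel = separable_per_block // 64  # 16 per pixel

# Fast 1-D DCT algorithms cut this further still, which is where the
# real-time software and hardware decoders get their speed.
```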