[comp.graphics] Request info on DCT based compression/decompression

ksakotai@hawk.ulowell.edu (Krishnan "krish" Sakotai) (08/16/90)

I have been following the discussions on the net regarding the JPEG standard
for data compression/decompression.  I wonder if anyone on the net has come
across code (maybe even quite basic) to do the DCT (discrete cosine
transform).  If so, could I get a copy of it?  Any pointers in that direction
are welcome.  Please email.  If needed, I'll post a summary.

Thanks.

Krishnan C. Sakotai
Interactive Media Lab
University of Lowell.
ksakotai@hawk.ulowell.edu

djs@nimbus3.uucp (Doug) (08/18/90)

In article <1233@swan.ulowell.edu> ksakotai@hawk.ulowell.edu (Krishnan "krish" Sakotai) writes:
>
>I have been following the discussions on the net regarding the JPEG standard
>for data compression/decompression. 

I have also been following the discussion and have a few questions (I did
try to look up the article in IEEE without success).

My understanding is that the JPEG compression algorithm is for still pictures
and that the motion-picture compression algorithm was being developed by
MPEG but would not be available for a year or two.  Yet C-Cube claims that
its chip achieves 40-fold compression from frame to frame.  If this is
true, what algorithm are they using for the frame-to-frame compression?

Second, if much of the compression comes from frame-to-frame redundancy, what
happens in multi-media applications that want to jump around to various
points in time?  Does that mean that after the player first jumps to a new
time point the display will be blurry because it doesn't have the previous
information, or does it jump to some time before the desired point to
get the previous information?

-- 
Doug Scofea   Email: nimbus3!djs@cis.ohio-state.edu    Phone:+1 614 459-1889

ouij@xurilka.UUCP (exhausted jazz surfer) (08/19/90)

In article <1990Aug17.210410.28762@nimbus3.uucp> djs@nimbus3.UUCP (Doug) writes:
>In article <1233@swan.ulowell.edu> ksakotai@hawk.ulowell.edu (Krishnan "krish" Sakotai) writes:
>>
>>I have been following the discussions on the net regarding the JPEG standard
>>for data compression/decompression. 
>
>I have also been following the discussion and have a few questions (I did
>try to look up the article in IEEE without success).
>

Hi, I am rather new to this group and have missed the discussions about
JPEG.

Can someone email me information on where I can obtain documentation
describing JPEG in detail, or any articles that discuss it?  I am curious
about JPEG since Adobe uses it as one of their compression schemes in the
new Level II PostScript.


Thanks

Ouij
			xurilka!ouij@larry.mcrcim.mcgill.edu

marcos@netcom.UUCP (Marcos H. Woehrmann) (08/21/90)

In some old article, djs@nimbus3.uucp (Doug) writes:
> 
> My understanding is that the JPEG compression algorithm is for still pictures
> and that the motion-picture compression algorithm was being developed by
> MPEG but would not be available for a year or two.  Yet C-Cube claims that
> its chip achieves 40-fold compression from frame to frame.  If this is
> true, what algorithm are they using for the frame-to-frame compression?
> 
> Second, if much of the compression comes from frame-to-frame redundancy, what
> happens in multi-media applications that want to jump around to various
> points in time?  Does that mean that after the player first jumps to a new
> time point the display will be blurry because it doesn't have the previous 
> information, or does it jump to some time before the desired point to
> get the previous information?
> 

JPEG and MPEG are both currently being drafted.  JPEG is much further
along; draft Release 5 has been out for 3 months and Release 8 is due out
at the end of October (there was no Release 6 or 7, as far as I can tell).
The final draft will be out early in 1991 (I suppose it shouldn't really
be called a draft if it's final).  The first MPEG draft is supposed to be
released near that time too.
 
C-Cube's current silicon implementation is JPEG R5 and is indeed a
still-frame image compression technique.
 
However, what the C-Cube chip can do is decompress an image in less
than a 30th of a second, so they can simulate video by continuously
decompressing and displaying frames.  But because JPEG does not rely on
previous frames, they only get 20- to 40-fold compression (I say "only"
because MPEG will do much better than that).
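
Very roughly, the playback loop amounts to the sketch below; the helper
functions and buffer sizes are just made-up placeholders, not C-Cube's
actual interface:

/* "Motion JPEG" playback: every frame is an independent JPEG still,
 * so video is simulated by decoding and showing one still every
 * 1/30 of a second.  The three helpers are hypothetical. */

extern void read_compressed_frame(long n, unsigned char *buf);
extern void decode_jpeg_frame(unsigned char *buf, unsigned char *pixels);
extern void display_frame(unsigned char *pixels);

void play_motion_jpeg(long num_frames)
{
    static unsigned char buf[64 * 1024];         /* one compressed frame */
    static unsigned char pixels[640L * 480];     /* one decoded frame    */
    long frame;

    for (frame = 0; frame < num_frames; frame++) {
        read_compressed_frame(frame, buf);   /* each frame stands alone   */
        decode_jpeg_frame(buf, pixels);      /* no inter-frame prediction */
        display_frame(pixels);               /* must keep up with 30 Hz   */
    }
}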
 
What I've heard is that, to allow interactivity, MPEG will have the ability
to designate certain frames as "entry" frames.  The software can
then jump to these frames from anywhere and not have to rely on previous
frames.  Yes, the scene will be "fuzzy" (chunky?) for a few frames if you
are using a severely bandwidth-limited device such as a CD-ROM (otherwise the
system will temporarily require more data).  But remember, this is the
same as having a scene change (well, not quite the same, since the system
has pre-knowledge of the scene change and could pre-initialize the new
scene, though I don't know if MPEG will allow that).
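
In code, my guess is that a player's seek would look something like this
(all names here are made up for illustration; nothing is from the MPEG
draft):

/* Seek using "entry" (intra-coded) frames: back up to the nearest entry
 * frame at or before the target, then decode forward without displaying
 * until the target frame is reached. */

extern long nearest_entry_frame(long target);  /* e.g. from a frame index  */
extern void decode_frame(long n);              /* needs prior frames unless
                                                  n is an entry frame      */
extern void display_current_frame(void);

void seek_to(long target)
{
    long f = nearest_entry_frame(target);  /* entry frames need no history */

    while (f < target)
        decode_frame(f++);                 /* rebuild state, don't display */

    decode_frame(target);
    display_current_frame();               /* first frame actually shown   */
}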

For those of you who have seen the Philips CD-I demo of Batman, I suspect
the artifacts will be similar to the chunkiness exhibited during scene
changes: not particularly noticeable unless you're looking for it (remember,
in the U.S. most people think VHS looks good).

marcos

(Oh, yes, I don't speak for JPEG, MPEG, or C-Cube, nor anyone else
even mildly important, for that matter.)

-- 
Marcos H. Woehrmann    {claris|apple}!netcom!marcos  or  marcos@netcom

  "These are but a few examples of what can happen when the human mind is
   employed to learn, to probe, to question as opposed to merely keeping
   the ears from touching." -- rec.humor.funny 90.07.16