[comp.compression] Astronomical Data Compression

bglenden@colobus.cv.nrao.edu (Brian Glendenning) (03/25/91)

[I have cross-posted to sci.astro]

In article <1991Mar23.013557.28151@nntp-server.caltech.edu> sns@lo-fan.caltech.edu (Sam Southard Jr.) writes:

>I have a topic/question that should be suitable for this newsgroup.  I am a
>software engineer working on the software for the Keck Telescope (the new 10m
>in Hawaii).  One of the things I am doing is taking the data from the CCD to
>a VMEbus controller crate (Sun 1E running VxWorks) and from there to a
>Sun 4/470 over the ethernet.
>
>Each image can be up to 4096x4096 pixels (a 2x2 mosaic of 2048x2048 CCDs), each
>pixel being 16 bits.  Obviously, if this kind of data is going over the
>Ethernet, we want to compress it as much as possible.
> [...]
>Sam Southard, Jr.
>{sns@deimos.caltech.edu|{backbone}!cit-vax!deimos!sns}

Whatever you do, you don't want a lossy technique for transferring
actual data (as opposed to guiding images or something similar).

Since most of the sky is "blank" (bias +/- a few noise bits), and
since those parts that aren't blank have redundant information (i.e.
the psf covers several to many pixels), I have often thought you
could do well by sending a model (bias plus source list plus psf
approximation) and then encoding the image as (actual - model).
Hopefully most pixels could be represented by a few bits (obviously
you'd have to come up with an efficient scheme for encoding
variable-width bit patterns). Of course your model could also
include things
like saturated columns, extended emission surfaces, cosmic ray hits
etc., although if your model gets too complicated you start to lose in
transmitting it. Another advantage of a scheme like this is that you
can transmit the model first and send the residuals later, to let the
image "build up" for the impatient "observer." Anyway, this is a
pretty simple concept, better probably exist. Let us know what you
find out!
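
Here's a rough C sketch of the residual coder I have in mind.  The
4-bit-code-with-escape scheme is just one arbitrary choice of
variable-width encoding, and all of the names are invented:

    #include <stdio.h>

    #define ESCAPE 0x8          /* 4-bit code never produced below */

    static unsigned long bitbuf = 0;   /* bit-packing state */
    static int bitcnt = 0;

    static void put_bits(FILE *out, unsigned long val, int nbits)
    {
        bitbuf = (bitbuf << nbits) | (val & ((1UL << nbits) - 1));
        bitcnt += nbits;
        while (bitcnt >= 8) {
            bitcnt -= 8;
            putc((int)((bitbuf >> bitcnt) & 0xff), out);
        }
    }

    static void flush_bits(FILE *out)
    {
        if (bitcnt > 0) {       /* pad the final byte with zeros */
            putc((int)((bitbuf << (8 - bitcnt)) & 0xff), out);
            bitcnt = 0;
        }
    }

    /* Residuals in [-7,+7] get a 4-bit two's-complement code
     * (which never equals ESCAPE); anything larger gets the escape
     * code followed by the raw 16-bit difference. */
    void encode_residuals(FILE *out, const short *actual,
                          const short *model, long npix)
    {
        long i;
        for (i = 0; i < npix; i++) {
            long r = (long)actual[i] - (long)model[i];
            if (r >= -7 && r <= 7)
                put_bits(out, (unsigned long)r & 0xf, 4);
            else {
                put_bits(out, ESCAPE, 4);
                put_bits(out, (unsigned long)r & 0xffff, 16);
            }
        }
        flush_bits(out);
    }

If the model is any good, almost every pixel takes the 4-bit path,
so you'd approach 4:1 over raw 16-bit pixels before any further
entropy coding.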

Brian
--
       Brian Glendenning - National Radio Astronomy Observatory
bglenden@nrao.edu          bglenden@nrao.bitnet          (804) 296-0286

U5F97@wvnvm.wvnet.edu (Jeff Brooks) (03/26/91)

The topic of astronomical image data compression is near and dear
to my heart.  I did an internship at NRAO (Socorro) in 1985 looking
at techniques for transmitting image data from a Cray in LA to
the VLA over a low speed phone line.  We looked at iterative
schemes for sending the images in progressive detail based on
an initial "model".  The initial model could be flat sky, a fitted
model from image deconvolution or maximum entropy methods, whatever.
You could either transmit the image at low spatial resolution and
double the spatial resolution at each step, or increase the datum
resolution.  We implemented the former under AIPS (Astronomical Image
Processing System); there were a lot of improvements to make, but it
gave a very quick look at the overall image while allowing the image
to be sent eventually at up to full spatial resolution.
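
The core of one doubling step is simple.  Here's a C sketch (this is
not the AIPS code we used; the pixel-replication prediction is just
the simplest choice, and the names are invented):

    /* Average 2x2 blocks of an n-by-n image down to (n/2)-by-(n/2).
     * The sender builds a pyramid of these and transmits the
     * coarsest level first. */
    void halve(const short *fine, short *coarse, int n)
    {
        int x, y;
        for (y = 0; y < n / 2; y++)
            for (x = 0; x < n / 2; x++) {
                long s = (long)fine[(2*y)*n + 2*x]
                       + fine[(2*y)*n + 2*x + 1]
                       + fine[(2*y+1)*n + 2*x]
                       + fine[(2*y+1)*n + 2*x + 1];
                coarse[y*(n/2) + x] = (short)(s / 4);
            }
    }

    /* Residuals of a fine level against its prediction from the
     * coarse level (pixel replication).  One refinement step
     * transmits these; they are small wherever the image is
     * smooth. */
    void refine_residuals(const short *fine, const short *coarse,
                          short *resid, int n)
    {
        int x, y;
        for (y = 0; y < n; y++)
            for (x = 0; x < n; x++)
                resid[y*n + x] = fine[y*n + x]
                               - coarse[(y/2)*(n/2) + x/2];
    }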

   Jeffrey A. Brooks                  (304) 293-5192
   IBM Systems Group                  u5f97@wvnvm.wvnet.edu
   West Virginia Network for Educational Telecomputing
   Morgantown, West Virginia  26505

ahenden@magnus.ircc.ohio-state.edu (Arne A Henden) (04/03/91)

Don Wells indicates he achieved 40 percent compression with
even/odd byte splitting.  For clarification, I assume that
Don was referring to high-order/low-order byte splitting
for the HST file in question.

You can achieve that kind of compression as long as the
noise level is contained in the low-order byte (i.e., less
than 255 ADU).  If your mean level is higher than 255, you
will start getting a lower compression rate.  It also depends on
whether the image is of a stellar field or of an extended object,
where more pixels are above the sky level.
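
For reference, the splitting itself is trivial; the gain comes from
compressing the two byte planes separately, since the high plane is
smooth and the low plane has limited range.  A C sketch (the
function name is mine):

    /* Split 16-bit pixels into separate high- and low-order byte
     * planes, each handed to a byte-oriented compressor on its
     * own. */
    void split_bytes(const unsigned short *pix, long npix,
                     unsigned char *hi, unsigned char *lo)
    {
        long i;
        for (i = 0; i < npix; i++) {
            hi[i] = (unsigned char)(pix[i] >> 8);    /* high byte */
            lo[i] = (unsigned char)(pix[i] & 0xff);  /* low byte  */
        }
    }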

I like Don's scheme of splitting a floating-point image
into exponent and mantissa for separate encoding.  Has
anyone attempted such a compression?  With the current
round of CCDs, the dynamic range is really more like 18 bits
and I anticipate having to use f.p. for storage of raw data
as well as reduced images.  On the surface, I'd bet that the
compression efficiency would be less than bit-plane since the 24
mantissa bits would be random and only the 8-bit exponent would
give high efficiency.  Also, bit-plane storage of 18-bit images
would certainly be more storage-efficient than 24 bits per pixel,
but at higher CPU cost.
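
If anyone wants to try it, the split itself would look roughly like
this for 32-bit IEEE pixels (this assumes IEEE format and a 32-bit
unsigned long, true on the Suns; the names are invented):

    /* Split IEEE single-precision pixels into an 8-bit exponent
     * stream and a 24-bit sign+mantissa stream for separate
     * encoding. */
    void split_float(const float *pix, long npix,
                     unsigned char *expo, unsigned char *mant)
    {
        long i;
        for (i = 0; i < npix; i++) {
            union { float f; unsigned long u; } v;
            v.f = pix[i];
            expo[i] = (unsigned char)((v.u >> 23) & 0xff);
            /* sign bit plus mantissa bits 22..16 */
            mant[3*i] = (unsigned char)(((v.u >> 31) << 7)
                                      | ((v.u >> 16) & 0x7f));
            mant[3*i + 1] = (unsigned char)((v.u >> 8) & 0xff);
            mant[3*i + 2] = (unsigned char)(v.u & 0xff);
        }
    }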

Arne Henden

warnock@stars.gsfc.nasa.gov (Archie Warnock) (04/04/91)

In article <1991Apr3.150357.20825@magnus.acs.ohio-state.edu>,
ahenden@magnus.ircc.ohio-state.edu (Arne A Henden) writes... 
>than 255 ADU).  If your mean level is higher than 255, you
>will start getting a lower compression rate.  It also depends on
>whether the image is of a stellar field or of an extended object,
>where more pixels are above the sky level.

Yes, but (depending on the angular size of the pixels), the compression 
rate goes down only slowly, because the high-order bytes are still 
relatively "smooth".  The low order bytes should always be noisier than 
the high order bytes.

>anyone attempted such a compression?  With the current
>round of CCDs, the dynamic range is really more like 18 bits
>and I anticipate having to use f.p. for storage of raw data
>as well as reduced images.  On the surface, I'd bet that the
>compression efficiency would be less than bit-plane since
>the 24 mantissa bits would be random and only the 8-bit exponent

Same argument - looking at the mantissa 8 bits at a time should give
you reasonable compression except on the lowest-order byte.  Note
that in all of this, if you can't get reasonable compression
directly, you should try running the respective bytes through some
sort of run-length or running-differences scheme first.  That's
where you really take advantage of the smoothness.
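
The running-differences pass is only a few lines, and since the
differences are taken modulo 256 it is exactly invertible, so
nothing is lost (the function names are mine):

    /* Replace each byte with its difference from the previous one,
     * modulo 256.  Smooth planes turn into long runs of bytes near
     * zero, which run-length or Huffman coding then eats up. */
    void delta_encode(unsigned char *buf, long n)
    {
        long i;
        unsigned char prev = 0, cur;
        for (i = 0; i < n; i++) {
            cur = buf[i];
            buf[i] = (unsigned char)(cur - prev);
            prev = cur;
        }
    }

    /* Exact inverse: running sum modulo 256. */
    void delta_decode(unsigned char *buf, long n)
    {
        long i;
        unsigned char prev = 0;
        for (i = 0; i < n; i++) {
            prev = (unsigned char)(prev + buf[i]);
            buf[i] = prev;
        }
    }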

----------------------------------------------------------------------------
-- Archie Warnock                     Internet:  warnock@stars.gsfc.nasa.gov
-- ST Systems Corp.                   SPAN:      STARS::WARNOCK
-- NASA/GSFC                          "Unix - JCL for the 90s"