The Uncertainty/Noise/Error Created by JPEG Compression in Astronomical Data

With special emphasis on HINODE/XRT

Last updated October 11, 2011

A citable discussion can be found here or here


Brief Overview

JPEG compression is a lossy image compression algorithm and is thus used mainly for non-astronomical purposes, such as consumer digital photography. Wikipedia is a great resource for this use of JPEG.

Unfortunately, in some astronomical circumstances, lossless compression cannot support the needs of the users. This is an all too common situation for satellite telescopes, such as TRACE and Hinode, where communication time between the satellite and ground stations is a precious commodity. In the case of Hinode, compression became necessary when the satellite's main antenna failed.

JPEG compression is more than acceptable for many casual uses, as it was specifically designed to minimize the differences the human eye can perceive. This is done by limiting color resolution and removing high frequency signal. Astronomical images do not contain color information, so JPEG cannot save space there; the size savings come from removing high frequency signal from the data.

The compression begins by splitting the image into subsections, typically 8 pixels wide by 8 pixels tall. The data are first centered around 0 by subtracting a set value from every pixel; the value is based on the bit depth of the pixel. For example, for 8 bit pixels, 128 is subtracted from each value (half of the 2^8 = 256 possible values), centering the pixel values around 0. A Discrete Cosine Transform (DCT) is then performed on each centered subsection, translating the image into frequency space. The algorithm then divides the subimage's DCT coefficients by a "quantization table" (see example here) of values, which effectively removes high frequency signal from the data. The higher frequencies are divided by larger factors, which causes many of those values to round to 0 when the results are kept as integers. This means a large chunk of the DCT of the subimage becomes zeroes. It is the quantization table that defines the compression level, and quantization tables are not necessarily standard between programs performing compression. The zeroes are not stored, thus saving space and creating a compressed image. Decompression is done by reversing these steps.
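To make the steps concrete, here is a minimal Python sketch of the round trip for a single 8x8 block. The quantization table below is purely illustrative (it simply grows with spatial frequency); the tables actually used on board will differ.

import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    # 2-D type-II DCT with orthonormal scaling, applied to an 8x8 block
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    # inverse of dct2
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

# Purely illustrative quantization table: the divisors grow with spatial
# frequency, so high frequency coefficients are rounded toward zero.
# The tables actually flown on Hinode/XRT will differ.
k = np.arange(8)
Q_TABLE = 1 + 2 * (k[:, None] + k[None, :])

def jpeg_roundtrip(block, q_table=Q_TABLE, level=128):
    # compress and decompress one 8x8 block, returning the reconstruction
    shifted = block.astype(float) - level        # center the values around 0
    coeffs = dct2(shifted)                       # translate to frequency space
    quantized = np.rint(coeffs / q_table)        # integer rounding zeroes the high frequencies
    restored = idct2(quantized * q_table)        # reverse the steps
    return restored + level

block = np.random.default_rng(0).integers(0, 256, size=(8, 8))
print(np.abs(jpeg_roundtrip(block) - block).mean())   # mean absolute discrepancy in DN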

Compressed File Losses

Due to the nature of the compression algorithm, the size of a compressed image is not easily forecast. Since the compression works by reducing high frequency signal, an image with a large amount of high frequency signal will not compress as much as a very simple image. These high frequency images will also suffer the most compression artifacts. An easy way to see these artifacts is to compress an image containing a lot of text: the hard edges of the lettering become blurred, and the stored size of the image is not as small as for other images.

This suggests that the compression will in some ways "blur" the image, spreading the total intensity of an 8 by 8 macropixel throughout the subimage. This is seen within the DCT. The first element (0,0) (k1 = k2 = 0, as shown here) is essentially the sum of the intensities within the whole subimage (up to a normalization factor); it is the element least affected by the quantization table and is more or less conserved under compression. Removing the high frequency parts of the signal then causes this total flux to spread back throughout the subimage.
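A quick numerical check of this statement: with the orthonormal DCT scaling used in the sketch above, the (0,0) coefficient is the block sum divided by 8.

import numpy as np
from scipy.fftpack import dct

block = np.random.default_rng(1).integers(0, 4096, size=(8, 8)).astype(float)
coeffs = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

# With orthonormal scaling the (0,0) "DC" coefficient carries the total flux:
# it equals the block sum divided by 8, and is barely touched by quantization.
print(coeffs[0, 0], block.sum() / 8.0)   # the two numbers agree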

Results

The results here are based on tests with the JPEG compression used on the HINODE spacecraft. Q100, Q98, Q95, and Q92 compressions are examined. The data compressed for testing were 512x512 images from the XRT instrument onboard HINODE, taken from two days of observations, 11 December 2006 and 11 February 2007. These data were chosen because they had only been compressed with a lossless DPCM compression, and because they contain compact active regions, allowing us to compress a large range of data values (bright and dark). Well over 1000 images were compressed. We will focus on the quantitative differences between the original images and their compressed counterparts.

If we simply plot a histogram of the discrepancy, we get a curve that is not well described by a single Gaussian, but by at least two: a broad Gaussian for the outer wings of the distribution and a narrower Gaussian for the central core. This is illustrated in figure 0. Since this distribution is not easily described by a single Gaussian, it is not accurate to use it as the uncertainty. We need to look deeper.

Figure 0: A raw histogram of the compressed image discrepancy. Two separate Gaussians are used to create the overall fit to the data. The equations of the Gaussians are shown in the legend.
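For illustration, a fit like the one in figure 0 could be produced along these lines; the discrepancy array, bin choices, and starting guesses here are stand-ins, not the values used for the actual figure.

import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, s1, a2, s2):
    # sum of two zero-centered Gaussians: a broad one for the wings,
    # a narrow one for the core of the discrepancy distribution
    return a1 * np.exp(-0.5 * (x / s1) ** 2) + a2 * np.exp(-0.5 * (x / s2) ** 2)

# stand-in for the per-pixel (original - compressed) values
rng = np.random.default_rng(2)
discrepancy = np.concatenate([rng.normal(0, 0.3, 80_000), rng.normal(0, 1.2, 20_000)])

counts, edges = np.histogram(discrepancy, bins=201, range=(-5, 5))
centers = 0.5 * (edges[:-1] + edges[1:])
popt, _ = curve_fit(two_gaussians, centers, counts, p0=(counts.max(), 1.0, counts.max(), 0.2))
print("wing sigma = %.2f DN, core sigma = %.2f DN"
      % (max(abs(popt[1]), abs(popt[3])), min(abs(popt[1]), abs(popt[3]))))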

A "Lossless" JPEG compression is performed by using a quantization table made up solely of 1's (or similar). Figure 1 shows a typical XRT image from our data set and its Q100 counterpart. This compression still creates some numerical roundoff error because the DCT coefficients are kept as integers. This effect is seen in figure 2 below.

Figure 1: On the left is a typical XRT image; on the right is its Q100 compressed counterpart. The two are identical on visual inspection.


Figure 2: Error in Q100, a near lossless JPEG compression. Shown is a histogram of the average absolute error (abs(orig - compressed)) within individual 8x8 subimages of XRT data, as compressed by an algorithm written to mimic the on-board compression. The peak absolute error is approximately 0.3 DN, indicating that on average most pixels differ from their uncompressed counterparts by approximately 0.3 DN.
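The per-macropixel error statistic histogrammed in figure 2 can be computed along these lines (a sketch assuming original and compressed are two 2-D arrays of the same shape, not the actual analysis code):

import numpy as np

def macropixel_abs_error(original, compressed, size=8):
    # mean absolute discrepancy within each size x size subimage
    diff = np.abs(original.astype(float) - compressed.astype(float))
    ny, nx = diff.shape
    blocks = diff[:ny - ny % size, :nx - nx % size].reshape(ny // size, size, nx // size, size)
    return blocks.mean(axis=(1, 3))     # one value per macropixel

# errors = macropixel_abs_error(original, compressed)
# counts, edges = np.histogram(errors.ravel(), bins=100)   # the histogram of figure 2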

The discrepancy is affected by the specific coding of the compression algorithm used; a different JPEG compression implementation can alter the discrepancy, as shown in figure 3 below. The compressed images were visually identical. The difference is primarily due to the different numerical precisions used in the two methods.

Figure 3: Error in Q100 via a different version of the compression, to illustrate how different rounding errors can affect the result. This compression method was written in IDL using higher-precision number types, i.e. double precision instead of single precision floating point. Note that the curve peaks at a lower value, 0.08 DN instead of 0.3 DN.

A higher level of compression using a different Q table (removing more high frequency signal) causes the discrepancy between the original image and the compressed image to grow. Figures 4 and 5 show the discrepancy in Q98 and Q92 compressed images, respectively.

Figure 4: Q98 Discrepancy

Figure 5: Q92 Discrepancy. The noise no longer appears to be a single simple Gaussian; Q95 compression also shows this double peaked feature.
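To give a sense of how a quality number such as Q98 or Q92 maps onto a quantization table, here is a sketch following the common libjpeg convention; the tables used by the on-board compressor are not necessarily generated this way, but the trend is the same: a lower quality number means larger divisors and more high frequency signal removed.

import numpy as np

def scale_quant_table(base_table, quality):
    # scale a base quantization table by a JPEG 'quality' number,
    # following the usual libjpeg convention (assumed here for illustration)
    quality = max(1, min(100, quality))
    scale = 5000 // quality if quality < 50 else 200 - 2 * quality
    table = (base_table * scale + 50) // 100
    return np.clip(table, 1, 255)

# toy base table; lower quality -> larger divisors -> more coefficients rounded to zero
k = np.arange(8)
base = 1 + 2 * (k[:, None] + k[None, :])
print(scale_quant_table(base, 100))   # a table of all 1's, the "lossless" Q100 case
print(scale_quant_table(base, 92))    # larger divisors, especially at high frequencies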

The JPEG noise seems to follow bright regions, but the results are not definitive. Figure 6 shows a side by side plot of an image and the (scaled) absolute value of the difference between the original image and the compressed image. Most images appear to have larger errors in bright areas; many show it more readily than the image in figure 6, some show it less. Quantitatively, however, this tendency does NOT hold: bright macropixels do not have larger discrepancies than dim ones, as illustrated in figure 7.

The discrepancies instead seem to be related to the difference between the minimum and maximum values in the subimage. This is suggested by the points in figure 7 near an average of 3700 DN, which have lower errors than average. These outliers disappear in figure 8, where the abscissa is the difference between the minimum and maximum values in a subimage.

Figure 6: The original uncompressed XRT image is on the left. The right shows the absolute difference between the image and its Q92 compressed version, scaled by a factor of 100. The larger discrepancies most commonly occur in regions with the most change within an 8x8 macropixel. See the text for discussion.

Figure 7: Average value in an 8 by 8 macropixel vs. average discrepancy for Q92 JPEG compression. It appears that for signals between 200 and 3600 DN, the brightest and dimmest regions of a compressed image have the same average absolute error. This trend appears with other compressions as well. The errors of ~1 DN for low average values are left unexplained. The text contains a potential explanation of the ~1 DN errors near 3700 DN.

Figure 8: Difference between a subimage's maximum and minimum values vs. average discrepancy for Q92 JPEG compression. A very stable and dense region of error is seen when plotted this way. The more sporadic (and often lower) errors occur when the range (max - min) of an 8x8 subimage is less than 100 DN.
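The quantities plotted in figures 7 and 8 might be computed along these lines (again a sketch assuming original and compressed arrays, not the actual analysis code):

import numpy as np

def block_view(image, size=8):
    # view an image as (n_blocks_y, n_blocks_x, size, size) macropixels
    ny, nx = image.shape
    trimmed = image[:ny - ny % size, :nx - nx % size]
    return trimmed.reshape(ny // size, size, nx // size, size).swapaxes(1, 2)

def macropixel_stats(original, compressed, size=8):
    # per-macropixel mean value, max-min range, and mean absolute discrepancy
    orig = block_view(original.astype(float), size)
    diff = block_view(np.abs(original.astype(float) - compressed.astype(float)), size)
    mean_value = orig.mean(axis=(2, 3))                          # abscissa of figure 7
    value_range = orig.max(axis=(2, 3)) - orig.min(axis=(2, 3))  # abscissa of figure 8
    mean_error = diff.mean(axis=(2, 3))                          # ordinate of both figures
    return mean_value, value_range, mean_error

# mean_value, value_range, mean_error = macropixel_stats(original, compressed)
# a scatter plot of value_range against mean_error reproduces a figure-8-style plot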

If we look at histograms of the average macropixel discrepancy for individual values of the max-min value within a macropixel (see here), we see that the histograms follow a fairly simple Gaussian. The only real deviation comes from values with low counts. This suggests the uncertainty is well behaved in this context. By plotting the centers of these Gaussians as a function of the max-min macropixel value, we can get very reasonable curves to estimate the uncertainty within a macropixel. Two examples are shown in Figures 9 and 10.

Figure 9: Fit showing how the uncertainty scales with the max-min value in a macropixel for Q95 compression. The text explains how the curve was made. The curve flattens out at a max-min value of ~35 DN.

Figure 10: Fit showing how the uncertainty scales with the max-min value in a macropixel for Q65 compression. The errors are much larger, and the function does not flatten out until a much larger value.
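A curve like those in figures 9 and 10 could be built up roughly as follows, using the value_range and mean_error arrays from the previous sketch; the binning, fit ranges, and final functional form used for the actual figures may differ.

import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def uncertainty_curve(value_range, mean_error, bin_width=5.0, min_counts=50):
    # for each max-min bin, fit a Gaussian to the histogram of macropixel
    # errors and keep its center as the uncertainty estimate for that bin
    bin_centers, fitted_centers = [], []
    edges = np.arange(0.0, value_range.max() + bin_width, bin_width)
    for lo, hi in zip(edges[:-1], edges[1:]):
        errs = mean_error[(value_range >= lo) & (value_range < hi)]
        if errs.size < min_counts:           # skip poorly populated bins
            continue
        counts, bedges = np.histogram(errs, bins=40)
        mids = 0.5 * (bedges[:-1] + bedges[1:])
        try:
            popt, _ = curve_fit(gaussian, mids, counts, p0=(counts.max(), errs.mean(), errs.std()))
        except RuntimeError:
            continue
        bin_centers.append(0.5 * (lo + hi))
        fitted_centers.append(popt[1])       # Gaussian center = uncertainty for this bin
    return np.array(bin_centers), np.array(fitted_centers)

# x, sigma_jpeg = uncertainty_curve(value_range, mean_error)
# fitting a smooth function to (x, sigma_jpeg) gives curves like figures 9 and 10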

Conclusions

While this analysis is neither complete nor theoretically grounded, the noise created by JPEG compression appears to be roughly Gaussian at low compression levels, which makes the associated uncertainties fairly simple to describe. Expected uncertainties can be estimated by using the max-min value of a macropixel as a proxy for its high frequency content. I have not included all of the results, as the numbers will differ for other JPEG compression implementations, though the same method of determination should apply.

If any of this is useful to anyone but me, please let me know, as I will put more effort into describing the noise. At the current time it is described almost accurately enough for the purpose I need, so prodding will be needed for further analysis. If you want to steal this, let me know; it will make me look better to my advisor if this work gets used in a variety of places. My email is at my personal site below. I also have a slew of compression programs if you so desire.

[Go to Adam's Home Page]