JPEG compression is a lossy image compression algorithm and thus is mainly used for non-astronomical purposes, such as consumer digital photography. Wikipedia is a great resource for this use of JPEG.
Unfortunately, in some astronomical circumstances, lossless compression cannot support the needs of the users. This is an all too common situation for satellite telescopes, such as the TRACE and Hinode satellites, where communication time between the satellite and ground stations is a precious commodity. In the case of Hinode, compression became necessary when the satellite's main antenna failed.
JPEG compression is more than acceptable for many casual uses, as it was specifically designed to minimize the differences the human eye can perceive. This is done by limiting color resolution and removing high frequency signals. Astronomical images do not contain color maps, so JPEG cannot save space there; the size savings come entirely from removing high frequency signals from the data.
The compression occurs by first splitting the image into subsections, typically 8 pixels wide by 8 pixels tall. The data are then centered around 0 by subtracting a set value from every pixel; the value is based on the bit depth of the pixel. For example, for 8 bit pixels, 128 would be subtracted from each value (half of 2^8 = 256, the number of values an 8-bit pixel can store), thus centering the pixel values around 0. A Discrete Cosine Transform (DCT) is then performed on each centered subimage, translating it into frequency space. The algorithm then divides the DCT coefficients by a "quantization table" (see example here) of values, which effectively removes high frequency signals from the data. The higher frequencies are divided by larger factors, which drives those values to 0 when they are kept as integers. This means a large chunk of the DCT of the subimage will be zeroes. It is the quantization table that defines the compression level, and the quantization tables are not necessarily standard between programs performing compression. The final zeroes are not stored, which saves space and produces the compressed image. Decompression is done by reversing these steps.
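To make these steps concrete, here is a minimal sketch of the per-block arithmetic in Python. The function name, the flat quantization table, and the use of scipy's DCT are my own illustrative choices, not the Hinode flight code:

    import numpy as np
    from scipy.fftpack import dct

    def compress_block(block, q_table, bit_depth=8):
        """Level-shift, DCT, and quantize a single 8x8 block of pixel values."""
        # Center the pixel values around 0 (e.g. subtract 128 for 8-bit data).
        shifted = block.astype(float) - 2 ** (bit_depth - 1)
        # 2-D type-II DCT along each axis moves the block into frequency space.
        coeffs = dct(dct(shifted, axis=0, norm='ortho'), axis=1, norm='ortho')
        # Divide by the quantization table and keep integers; the large divisors
        # at high frequencies drive most of those coefficients to zero.
        return np.round(coeffs / q_table).astype(int)

    # Purely illustrative: a flat 8x8 quantization table of 4's.
    q_table = np.full((8, 8), 4)
    block = np.random.randint(0, 256, (8, 8))
    print(compress_block(block, q_table))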
Due to the nature of the compression algorithm, the size of a compressed image is not easily predicted. Since the compression works by reducing the high frequency signals, an image with a large amount of high frequency signal will not compress as much as a very simple image. These high frequency images will also suffer the most compression artifacts. An easy way to see these artifacts is to compress an image containing a lot of text: the hard edges of the lettering become blurred, and the stored size of the image will not be as small as for other images.
This suggests that the compression will in some ways "blur" the image, spreading the total intensity of an 8 by 8 macropixel throughout the subimage. This can be seen within the DCT. The first element (0,0) (k1 = k2 = 0, as shown here) is proportional to the sum of the intensities within the whole subimage; it is the element least affected by the quantization table and is more or less conserved through compression. Removing the high frequency parts of the signal causes this total flux to spread back throughout the subimage.
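A quick numerical check of this (my own illustration, using the orthonormal DCT convention) is that the (0,0) coefficient of an 8 by 8 block is just the block's total intensity divided by 8:

    import numpy as np
    from scipy.fftpack import dct

    block = np.random.randint(0, 4096, (8, 8)).astype(float)
    coeffs = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

    # With the orthonormal normalization, the DC term is the block sum over 8.
    print(coeffs[0, 0], block.sum() / 8.0)   # the two values agree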
The results here are based on tests with the JPEG compression used on the Hinode spacecraft. Q100, Q98, Q95, and Q92 compressions are examined. The data compressed for testing were 512x512 images from the XRT instrument onboard Hinode. The data were taken from two days of observations, 11 December 2006 and 11 February 2007. These data were chosen because they had only been compressed using a lossless DPCM compression, and because they contain compact active regions, allowing us to compress a large range of data values (bright and dark). Well over 1000 images were compressed. We will focus on the quantitative differences between the original images and their compressed counterparts.
If we simply plot a histogram of the discrepancy, we get a curve that is not well described by a single Gaussian, but by at least two: one Gaussian is needed for the outer wings of the distribution, and a narrower Gaussian for the central core. This is illustrated in figure 0. Since this distribution is not easily described by a single Gaussian, it is not accurate to use a single Gaussian width as the uncertainty. We need to look deeper.
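For reference, a double-Gaussian fit of this kind can be sketched as follows, assuming the per-pixel discrepancy (original minus decompressed) is available in a 1-D array; the function names and initial guesses are my own illustrative choices:

    import numpy as np
    from scipy.optimize import curve_fit

    def two_gaussians(x, a1, s1, a2, s2):
        """Sum of two zero-centered Gaussians: a narrow core plus broad wings."""
        return (a1 * np.exp(-0.5 * (x / s1) ** 2) +
                a2 * np.exp(-0.5 * (x / s2) ** 2))

    def fit_discrepancy_histogram(discrepancy, nbins=101):
        counts, edges = np.histogram(discrepancy, bins=nbins)
        centers = 0.5 * (edges[:-1] + edges[1:])
        # Initial guess: a narrow core plus wings a few times wider.
        sigma = np.std(discrepancy)
        p0 = [counts.max(), sigma / 3.0, 0.1 * counts.max(), 3.0 * sigma]
        popt, _ = curve_fit(two_gaussians, centers, counts, p0=p0)
        return popt  # amplitudes and widths of the core and wing components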
A "lossless" JPEG compression is performed by using a quantization table made up solely of 1's (or similar). Figure 1 shows a typical XRT image from our data set and its Q100 counterpart. This compression will still create some numerical roundoff error, due to the DCT coefficients being kept as integers. This effect is seen in figure 2 below.
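The round trip can be sketched as follows (the bit depth, normalization, and function name are my own assumptions, not the flight software): even with a table of all 1's, rounding the coefficients to integers leaves a small residual.

    import numpy as np
    from scipy.fftpack import dct, idct

    def roundtrip_q100(block, bit_depth=12):
        """Compress and decompress one 8x8 block with a quantization table of 1's."""
        shift = 2 ** (bit_depth - 1)
        shifted = block.astype(float) - shift
        coeffs = dct(dct(shifted, axis=0, norm='ortho'), axis=1, norm='ortho')
        # Dividing by a table of 1's changes nothing, but rounding to integers
        # still throws away the fractional part of every coefficient.
        quantized = np.round(coeffs)
        restored = idct(idct(quantized, axis=0, norm='ortho'),
                        axis=1, norm='ortho') + shift
        return np.round(restored).astype(int)

    block = np.random.randint(0, 4096, (8, 8))
    print(np.abs(roundtrip_q100(block) - block).max())  # usually small but nonzero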
A higher level of compression using a different Q table (removing more high frequency signal) causes the discrepancy between the original image and the compressed image to grow. Figures 4 and 5 show the discrepancy in Q98 and Q92 compressed images, respectively.
The JPEG noise seems to follow bright regions, but the results are not definitive. Figure 6 shows a side-by-side plot of an image and the (scaled) absolute value of the difference between the original image and the compressed image. Most images appear to have larger errors in bright areas; many show it more readily than the image in figure 6, some show it less. Quantitatively, however, this tendency does not hold: bright macropixels do not have larger discrepancies than fainter ones, as illustrated in figure 7.
The discrepancies seem instead to be related to the difference between the minimum and maximum values in the subimage. This is suggested by the points in figure 7 near an average of 3700 DN that have lower errors than average. These outliers disappear in figure 8, where the abscissa is the difference between the minimum and maximum values in a subimage.
If we look at histograms of the average discrepancy in macropixels for individual values of the max-minus-min value within a macropixel (see here), we can see that the histograms follow a fairly simple Gaussian. The only real deviation comes from low counts at individual values. This suggests the uncertainty is well behaved in this context. By plotting the centers of these Gaussians as a function of the max-minus-min macropixel value, we can get very reasonable curves to estimate the uncertainty within a macropixel. Two examples are shown in Figures 9 and 10.
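A sketch of this bookkeeping, assuming the original and decompressed images are 2-D arrays whose dimensions are multiples of 8 (the helper name is my own), is:

    import numpy as np

    def macropixel_stats(original, compressed, block=8):
        """Return (max - min) and mean absolute discrepancy for each macropixel."""
        ny, nx = original.shape
        orig_blocks = original.reshape(ny // block, block, nx // block, block)
        diff_blocks = (original - compressed).reshape(ny // block, block,
                                                      nx // block, block)
        span = orig_blocks.max(axis=(1, 3)) - orig_blocks.min(axis=(1, 3))
        err = np.abs(diff_blocks).mean(axis=(1, 3))
        return span.ravel(), err.ravel()

    # For each value (or bin) of the span, histogram the errors, fit a Gaussian,
    # and plot the fitted centers against the span to get an uncertainty curve.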
While not complete or theoretically based, it appears that the noise created by JPEG compression is roughly Gaussian at low compression levels, which makes the uncertainties it introduces fairly simple to describe. Expected uncertainties can be estimated by looking for high frequency signals, using the max-minus-min value of the macropixel as a proxy. I have not included all of the results; for other JPEG compressions the results will be different, though the same method of determination should apply.
If any of this is useful to anyone but me, please let me know, and I will put more effort into describing the noise. At the current time it is described almost accurately enough for my purposes, so prodding will be needed for further analysis. If you want to steal this, let me know; it will make me look better to my advisor if this work gets used in a variety of places. My email is at my personal site below. I also have a slew of compression programs if you so desire.