In the CAWSES presentation, I calculated the irradiance from the latest SSC images (ssc16) for the 1-8 A wavelength band, and compared it with the variation of the GOES low-band flux throughout the mission. There I found that the SXT irradiance is systematically smaller than that of GOES, and that their difference increases toward the end of the mission (except during the solar minimum period, where the GOES flux is below the detection limit). See figure 1.
We assume that GOES/XRS provides the X-ray flux with stable sensitivity over the decades, because of its structural simplicity (ion chamber). From this viewpoint, the deviation of our irradiance from GOES may reflect instrumental deterioration or failure that is not covered by the current calibration method. Our work since CAWSES has therefore been devoted to understanding this difference and to improving our calibration and our irradiance curve, using GOES as a reference.
Loren generously provided a series of codes he wrote for his own study (with J. Lean? in 2003?). For the CAWSES presentation, I modified them to work with the latest data (ssc16 at the time of CAWSES) and calculated the irradiance in the 1-8 A band for comparison with the GOES low-band flux.
Later, I added studies of the effect of data taken during SAA passages and of the size of the field from which the signals are collected. Both turned out to have little effect on the irradiance study, as long as daily averages are taken (for the SAA data) and the signals are collected from a field larger than 1.2 Rs. I now have my own procedure to process images and derive the irradiance, in which the image-selection parameters are as follows.
The derived signals are stored in an IDL structure array, together with the GOES low-band flux value observed closest in time to each signal. At this stage they are neither daily averaged nor paired (Al.1 - AlMg); this set is used for the signal-based studies (i.e., the SXT signal vs. GOES comparison).
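As a concrete illustration of this bookkeeping, here is a minimal IDL sketch of a structure array holding one record per signal, with the GOES value observed closest in time attached to each record. The tag names and the dummy input arrays are hypothetical, not the actual code.

    ; Minimal sketch (hypothetical tag names, dummy inputs) of storing each
    ; derived SXT signal together with the GOES low-band flux closest in time.
    pro pair_sxt_goes_demo
      nsig = 5
      rec  = {time:0d, filt:'', signal:0.0, goes_lo:0.0}
      sig  = replicate(rec, nsig)
      sig.time   = dindgen(nsig) * 3600d            ; hourly samples (dummy times, sec)
      sig.filt   = 'Al.1'
      sig.signal = 1e5 + randomu(seed, nsig) * 1e4  ; dummy integrated signals

      ; dummy GOES low-band record: times (sec) and 1-8 A flux (W/m^2)
      goes_time = dindgen(100) * 300d               ; 5-min cadence (dummy)
      goes_flux = 1e-6 + randomu(seed, 100) * 1e-6

      ; for each signal, attach the GOES value observed closest in time
      for i = 0, nsig-1 do begin
        dt = min(abs(goes_time - sig[i].time), j)   ; dt = time offset of nearest GOES point
        sig[i].goes_lo = goes_flux[j]
      endfor
      help, sig, /structure
    end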
To convert signals to irradiance, we take the following steps.
As a first step toward understanding the difference of our irradiance from GOES, I compared the Al.1 and AlMg signals with the GOES flux. Both the Al.1 and AlMg signals show a similar trend, i.e., a deviation from GOES.
I also compared the signals from the thicker filters (Al12 and Be), since they are considered less affected by the change of instrument sensitivity caused by the increased X-ray flux through the failed entrance filter. The signals from the thick filters are confirmed to show an increasing deviation from GOES with time as well, but the deviation is milder than in the thin-filter cases.
As the next step, we looked closely at the scatter plots between the SXT thin-filter signals (Al.1 or AlMg) and the GOES low-band flux. We found many small shifts/jumps in the cluster of data points in the scatter plots, i.e., sudden changes of the SXT signal level relative to GOES (increased signals in most cases; see example). Because of these shifts, the shape of the scatter plot for the whole Yohkoh mission lacks symmetry between the descending (1991-1996) and rising (1996-2001) phases of the solar cycle (figure 2 and figure 3). (Memo for myself: ~takeda/idlpro/y_legacy/signal_study/scat_descend.pro, scat_ascend.pro, etc.)
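For illustration, a small IDL sketch of this kind of phase-separated scatter plot, using dummy arrays rather than the actual scat_descend.pro / scat_ascend.pro:

    ; Sketch (dummy arrays, not scat_descend.pro / scat_ascend.pro) of the
    ; scatter-plot comparison: SXT thin-filter signal vs. GOES 1-8 A flux,
    ; plotted separately for the descending (1991-1996) and rising (1996-2001) phases.
    pro scat_phase_demo
      n = 2000
      t = dindgen(n) / (n-1) * 10.                  ; years since 1991 (dummy)
      goes = 10.^(-7 + 2.*randomu(seed, n))         ; dummy GOES flux, W/m^2
      sxt  = 1e11 * goes * (1. + 0.05*t)            ; dummy SXT signal with a slow drift
      desc = where(t lt 5., ndesc)                  ; descending phase: 1991-1996
      rise = where(t ge 5., nrise)                  ; rising phase: 1996-2001
      plot,  goes[desc], sxt[desc], psym=3, /xlog, /ylog, $
             xtitle='GOES 1-8 A flux [W m!u-2!n]', ytitle='SXT thin-filter signal [DN/s]'
      oplot, goes[rise], sxt[rise], psym=1          ; overplot rising phase with crosses
    end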
I identified 26 such shifts by screening the scatter plots throughout the mission (Takeda's YLA report of 24-Sep). These shifts are considered to come both from changes of the instrument sensitivity and from true changes of the coronal condition (new growth of an active region, etc.). The 26 events include more cases than are reported in get_yo_dates, which are determined from the terminator images (i.e., the optical portion of the stray light). However, it turned out to be hard to determine the exact timing of the shifts from the scatter plots, or to distinguish the events due to changes of activity from those due to instrumental effects.
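One simple way to screen for such shifts, shown here only as a hedged illustration (this is not necessarily how the 26 events were identified), is to track the SXT/GOES ratio and flag jumps between adjacent time windows:

    ; Hedged sketch of one possible shift screening: compare the median of the
    ; SXT/GOES ratio in adjacent time windows and flag jumps above a threshold.
    ; Dummy data; in practice consecutive flags around one jump form one event.
    pro shift_screen_demo
      n = 1000
      ratio = 1e11 * (1. + 0.001*findgen(n))        ; dummy daily SXT/GOES ratio
      ratio[600:*] = ratio[600:*] * 1.3             ; inject a 30% jump at day 600
      win = 30                                      ; window length in days
      for i = win, n-win-1 do begin
        before = median(ratio[i-win:i-1])
        after  = median(ratio[i:i+win-1])
        if abs(after/before - 1.) gt 0.2 then $
          print, 'possible shift near day ', i, '  (step factor = ', after/before, ')'
      endfor
    end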
In our current calibration method, the change of instrument sensitivity is corrected for in the derivation of the filter-ratio temperature (sxt_teem), in the form of a correction to the response function of each filter. To evaluate the effect of the sensitivity change, I calculated the irradiance for two extreme cases, i.e., with response functions built for the no-failure case and for the totally failed case. Surprisingly, the two extreme response curves produce an appreciable difference in the derived temperature, but little difference in the calculated irradiance.
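To make explicit where the response correction enters, here is a generic filter-ratio inversion in IDL; the response curves are synthetic placeholders and this is not the actual sxt_teem:

    ; Generic illustration (not the actual sxt_teem) of a filter-ratio temperature:
    ; the AlMg/Al.1 ratio is a monotonic function of temperature for these synthetic
    ; responses, so the observed ratio is inverted by interpolation.
    pro filter_ratio_demo
      logT     = 5.5 + findgen(100)/99.*2.5         ; log T grid, 5.5 - 8.0
      resp_al1 = exp(-(logT-6.3)^2/0.5)             ; synthetic Al.1 response
      resp_mg  = exp(-(logT-6.8)^2/0.5)             ; synthetic AlMg response
      rcurve   = resp_mg / resp_al1                 ; ratio vs. temperature

      obs_al1 = 0.40  &  obs_mg = 0.25              ; dummy observed signals
      logT_fit = interpol(logT, rcurve, obs_mg/obs_al1)   ; invert the ratio
      print, 'filter-ratio log T = ', logT_fit
    end

A corrected (or uncorrected) response function changes rcurve, and hence the derived temperature; the finding above is that this temperature change propagates only weakly into the calculated irradiance.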
In an effort to find the major factor determining the irradiance curve, I looked into the 2nd-order leak correction, which is performed after the regular leak correction in sxt_prep. A study using Level_1 images showed that the 2nd-order leak correction generally adds or subtracts an intensity that amounts in total to 3 to 5% of the total signal of an image. However, applying or omitting the 2nd-order leak correction makes no significant difference in the temperature derived from the Al.1/AlMg filter ratio (cf. Takeda's YLA report of 22-Oct). (Memo for myself: ~takeda/idlpro/y_legacy/chk_lk2/chk_lk2.pro)
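A tiny sketch, with dummy images rather than the actual chk_lk2.pro, of how the size of the 2nd-order leak term can be expressed as a fraction of the total image signal (the values quoted above correspond to 3-5%):

    ; Hedged sketch (dummy images, not chk_lk2.pro): express the 2nd-order leak
    ; term as a fraction of the total signal of the corrected image.
    pro lk2_fraction_demo
      img  = randomu(seed, 512, 512) * 100.         ; image after regular leak corr. (dummy)
      lk2  = randomu(seed, 512, 512) * 4.           ; 2nd-order leak term (dummy)
      frac = total(abs(lk2)) / total(img - lk2)
      print, '2nd-order leak term = ', frac*100, ' % of total signal'
    end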
After some detective work, it turned out that the emission measure, which multiplies the irradiance calculated for unit emission measure, is the main factor shaping the irradiance curve. In our method, emission measures are derived from the intensities of a single filter (either Al.1 or AlMg in our case). The emission measures we derive are therefore modulated by instrumental effects on the signals, such as changes in the sensitivity of the telescope or CCD.
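The chain can be summarized schematically as follows (a hedged sketch with simplified, synthetic responses, not the actual pipeline): the temperature comes from the Al.1/AlMg ratio, the emission measure from a single-filter signal divided by its response at that temperature, and the irradiance is the per-unit-EM irradiance times that emission measure. A multiplicative sensitivity error that affects both filters equally leaves the ratio (and hence the temperature) unchanged, but passes straight into the emission measure, and therefore into the irradiance.

    ; Hedged, schematic version (not the actual pipeline code) of the chain:
    ;   T  <- Al.1/AlMg filter ratio
    ;   EM <- single-filter signal / response(T)
    ;   irradiance <- EM * (irradiance per unit EM at T)
    ; An uncorrected multiplicative change of the signals leaves the ratio (and T)
    ; unchanged, but scales the EM, and hence the irradiance, almost directly.
    pro irradiance_chain_demo, gain_error=gain_error
      if n_elements(gain_error) eq 0 then gain_error = 1.0
      logT     = 5.5 + findgen(100)/99.*2.5
      resp_al1 = exp(-(logT-6.3)^2/0.5)             ; synthetic Al.1 response
      resp_mg  = exp(-(logT-6.8)^2/0.5)             ; synthetic AlMg response
      f_unit   = 1e-27 * 10.^(logT-6.)              ; synthetic 1-8 A flux per unit EM

      obs_al1 = 0.40 * gain_error                   ; both filters scaled by the same
      obs_mg  = 0.25 * gain_error                   ; (uncorrected) sensitivity factor

      logT_fit = interpol(logT, resp_mg/resp_al1, obs_mg/obs_al1)
      em_fit   = obs_al1 / interpol(resp_al1, logT, logT_fit)
      print, 'irradiance (arb. units) = ', em_fit * interpol(f_unit, logT, logT_fit)
    end

    ; IDL> irradiance_chain_demo                    ; nominal case
    ; IDL> irradiance_chain_demo, gain_error=1.2    ; 20% uncorrected sensitivity shift
    ;                                               ;   -> irradiance comes out ~20% higher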
We therefore realize that a signal curve free from instrumental effects would settle the deviation from GOES. However, there is as yet no solid scheme to remove or correct the instrumental effects. I have some rough ideas, but their scientific validity needs to be checked.