pypeit.images.combineimage module

Class to generate an image from one or more files (and other pieces).

class pypeit.images.combineimage.CombineImage(rawImages, par)[source]

Bases: object

Process and combine detector images.

Parameters:
  • rawImages (list, PypeItImage) – Either a single PypeItImage object or a list of one or more of these objects to be combined into an image.

  • par (ProcessImagesPar) – Parameters that dictate the processing of the images.
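
For orientation, a minimal construction sketch (the variable names are hypothetical; the list of input images and the ProcessImagesPar instance are assumed to have been built elsewhere):

    from pypeit.images.combineimage import CombineImage

    # `raw_images` is assumed to be a list of already-loaded input image
    # objects (one per raw frame) and `proc_par` a configured
    # ProcessImagesPar instance, e.g., with proc_par['combine'] = 'mean'.
    combiner = CombineImage(raw_images, proc_par)
    print(combiner.nimgs)  # number of images that will be combined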

det

The 1-indexed detector number(s) to process.

Type:

int, tuple

par

Parameters that dictate the processing of the images.

Type:

ProcessImagesPar

rawImages

A list of one or more RawImage objects to be combined.

Type:

list

property nimgs

The number of images to combine (i.e., the length of rawImages).

run(ignore_saturation=False, maxiters=5)[source]

Process and combine all images.

All processing is performed by the RawImage class; see process().

If there is only one image (see rawImages), this simply processes that image and returns the result.

If there are multiple images, each is processed and the processed images are combined according to par['combine'], where the options are:

  • ‘mean’: If sigma_clip is True, this is a sigma-clipped mean; otherwise, this is a simple average. The combination is done using weighted_combine().

  • ‘median’: This is a simple masked median (using numpy.ma.median).
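
For intuition, a rough sketch of what these two modes amount to, using numpy and astropy directly (this is not the weighted_combine() implementation; weights and bitmask handling are omitted, and all array names are hypothetical):

    import numpy as np
    from astropy.stats import sigma_clip

    # Hypothetical stack of processed frames, shape (nframes, ny, nx), with a
    # matching boolean mask (True = bad pixel).
    rng = np.random.default_rng(0)
    stack = rng.normal(100.0, 5.0, size=(5, 64, 64))
    badpix = np.zeros(stack.shape, dtype=bool)

    # 'mean' with sigma_clip=True: iteratively reject outliers, then average.
    clipped = sigma_clip(np.ma.MaskedArray(stack, badpix), sigma=3.0, maxiters=5, axis=0)
    mean_comb = clipped.mean(axis=0)

    # 'median': a simple masked median along the stack axis.
    median_comb = np.ma.median(np.ma.MaskedArray(stack, badpix), axis=0)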

The errors in the image are also propagated through the stacking procedure; however, this isn’t a simple propagation of the inverse variance arrays. The image processing produces arrays with individual components used to construct the variance model for an individual frame. See Basic Image Processing and variance_model() for a description of these arrays. Briefly, the relevant arrays are the readnoise variance (\(V_{\rm rn}\)), the “processing” variance (\(V_{\rm proc}\)), and the image scaling (i.e., the flat-field correction) (\(s\)). The variance calculation for the stacked image directly propagates the error in these. For example, the propagated processing variance (modulo the masking) is:

\[V_{\rm proc,stack} = \frac{\sum_i s_i^2 V_{{\rm proc},i}}{s_{\rm stack}^2}\]

where \(s_{\rm stack}\) is the combined image scaling array, combined in the same way as the image data are combined. This ensures that the reconstruction of the uncertainty in the combined image calculated using variance_model() accurately includes, e.g., the processing uncertainty.
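
As a concrete (if simplified) illustration of that propagation, assuming per-frame variance and scale arrays stacked along the first axis and ignoring the masking:

    import numpy as np

    # Hypothetical per-frame processing variances V_{proc,i} and scale images
    # s_i (e.g., flat-field corrections), shape (nframes, ny, nx).
    v_proc = np.ones((3, 64, 64))
    scale = np.full((3, 64, 64), 1.1)

    # Combined scale image; here a simple mean stands in for however the
    # image data themselves are combined.
    scale_stack = scale.mean(axis=0)

    # V_{proc,stack} = sum_i(s_i^2 V_{proc,i}) / s_stack^2
    v_proc_stack = np.sum(scale**2 * v_proc, axis=0) / scale_stack**2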

The uncertainty in the combined image, however, recalculates the variance model, using the combined image (which should have less noise) to set the Poisson statistics. The same parameters used when processing the individual frames are applied to the combined frame; see build_ivar(). This calculation is then equivalent to the one performed when the observed counts are replaced by the model object and sky counts during sky subtraction and spectral extraction.

Bitmasks from individual frames in the stack are not propagated to the combined image, except to indicate when a pixel was masked in all images in the stack (cf. ignore_saturation). Additionally, the instrument-specific bad-pixel mask (see the bpm() method for each instrument subclass), the saturated-pixel mask, and other default mask bits (e.g., NaN and non-positive inverse variance values) are all propagated to the combined-image mask; see build_mask().
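
A conceptual sketch of the stacking-mask logic described above, assuming boolean per-frame masks (True = bad):

    import numpy as np

    # Hypothetical per-frame bad-pixel flags, shape (nframes, ny, nx).
    frame_bad = np.zeros((3, 64, 64), dtype=bool)

    # A pixel is flagged in the combined image only if it is bad in *every*
    # input frame; otherwise at least one frame contributes a valid value.
    combined_bad = np.all(frame_bad, axis=0)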

Warning

All image processing of the data in rawImages must result in images of the same shape.

Parameters:
  • ignore_saturation (bool, optional) – If True, turn off the saturation flag in the individual images before stacking. This avoids having such values set to 0, which for certain images (e.g. flat calibrations) can have unintended consequences.

  • maxiters (int, optional) – When par['combine']='mean' and sigma-clipping is performed (sigma_clip is True), this sets the maximum number of rejection iterations. If None, rejection iterations continue until no more data are rejected; see weighted_combine().

Returns:

The combination of all the processed images.

Return type:

PypeItImage
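
Assuming a CombineImage instance has been constructed as in the sketch above, a minimal usage example of this method might look like:

    # `combiner` is the CombineImage instance from the construction sketch.
    stacked = combiner.run(ignore_saturation=True, maxiters=5)

    # The result is a PypeItImage; its image and inverse-variance arrays can
    # then be used like any other processed frame.
    print(stacked.image.shape)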