pypeit.images.rawimage module

Object to load and process a single raw image

class pypeit.images.rawimage.RawImage(ifile, spectrograph, det)[source]

Bases: object

Class to load and process raw images.

Generally speaking, this class should only be used as follows:

# Load the raw data and prepare the object
rawImage = RawImage(file, spectrograph, det)
pypeitImage = rawImage.process(par)

modulo the details of the keyword arguments to process(). The class provides many methods that handle the individual processing steps, but the order of those steps matters: they are not guaranteed to succeed if executed out of order. This is most relevant when processing multiple detector images into an image mosaic; see process().

Parameters:
  • ifile (str) – File with the data.

  • spectrograph (Spectrograph) – The spectrograph from which the data was collected.

  • det (int, tuple) – 1-indexed detector(s) to read. An image mosaic is selected using a tuple with the detectors in the mosaic, which must be one of the allowed mosaics returned by allowed_mosaics().

filename

Original file name with the data.

Type:

str

spectrograph

Spectrograph instance with the instrument-specific properties and methods.

Type:

Spectrograph

det

1-indexed detector number(s); see class argument.

Type:

int, tuple

detector

Mosaic/Detector characteristics

Type:

DetectorContainer, Mosaic

rawimage

The raw, not trimmed or reoriented, image data for the detector(s).

Type:

numpy.ndarray

hdu

The full list of HDUs provided by filename.

Type:

astropy.io.fits.HDUList

exptime

Frame exposure time in seconds.

Type:

float

rawdatasec_img

The original, not trimmed or reoriented, image identifying which amplifier was used to read each section of the raw image.

Type:

numpy.ndarray

oscansec_img

The original, not trimmed or reoriented, image identifying the overscan regions in the raw image read by each amplifier.

Type:

numpy.ndarray

headarr

A list of astropy.io.fits.Header objects with the headers for all extensions in hdu.

Type:

list

image

The processed image. This starts identical to rawimage and is then altered by the processing steps; see process().

Type:

numpy.ndarray

ronoise

The readnoise (in e-/ADU) for each of the detector amplifiers.

Type:

list

par

Parameters that dictate the processing of the images.

Type:

ProcessImagesPar

ivar

The inverse variance of image, the processed image.

Type:

numpy.ndarray

rn2img

The readnoise variance image.

Type:

numpy.ndarray

proc_var

The sum of the variance components added by the image processing; i.e., the error in the overscan subtraction, bias subtraction, etc.

Type:

numpy.ndarray

base_var

The base-level variance in the processed image. See base_variance().

Type:

numpy.ndarray

var

The aggregate variance in the processed image. This is the primary array used during process() to track uncertainties; ivar is created by inverting this at the end of the processing method.

Type:

numpy.ndarray

steps

Dictionary containing a set of booleans that track the processing steps that have been performed.

Type:

dict
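The steps dictionary underpins the force keyword accepted by most of the processing methods below. A minimal sketch of that guard pattern (the class and method names here are illustrative, not PypeIt's internals):

```python
import warnings

class StepLog:
    """Illustrative step log mimicking the `steps` bookkeeping."""

    def __init__(self):
        # One boolean per processing step, all initially not done.
        self.steps = {'apply_gain': False, 'trim': False, 'orient': False}

    def apply_gain(self, force=False):
        # Skip the step if it was already performed, unless forced.
        if self.steps['apply_gain'] and not force:
            warnings.warn('Gain already applied; skipping.')
            return False
        self.steps['apply_gain'] = True
        return True
```

The same check-then-record pattern applies to trimming, orienting, bias subtraction, and the other steps that accept force.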

datasec_img

Image identifying which amplifier was used to read each section of the processed image.

Type:

numpy.ndarray

spat_flexure_shift

The spatial flexure shift in pixels, if calculated.

Type:

float

_squeeze()[source]

Convenience method for preparing attributes for construction of a PypeItImage.

The issue is that RawImage image arrays are always 3D, even if there’s only one image. This is acceptable because use of RawImage is relatively self-contained. It’s really a namespace used for the image processing that disappears as soon as the image processing is done.

PypeItImage, on the other hand, is a core class that is shared by many subclasses and used throughout the code base, meaning that it doesn’t make sense to keep single images in 3D arrays.

This method “squeezes” (see numpy.squeeze) the arrays used to construct a PypeItImage so that they are 3D only if they have to be.

Returns:

Returns the pypeit.images.detector_container.DetectorContainer or pypeit.images.mosaic.Mosaic instance, and the reshaped arrays with the image flux, inverse variance, amplifier number, detector number, readnoise-squared image, base-level variance, image scaling factor, and bad-pixel mask.

Return type:

tuple
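The reshaping described above can be illustrated with numpy.squeeze directly: a stack holding one image collapses to 2D, while a genuine multi-image stack stays 3D.

```python
import numpy as np

# A single image carried as a 3D stack, as RawImage does internally.
single = np.zeros((1, 4, 5))
# A two-detector stack that must stay 3D.
double = np.zeros((2, 4, 5))

# squeeze only drops axes of length 1, so the multi-image stack is untouched.
assert np.squeeze(single).shape == (4, 5)
assert np.squeeze(double).shape == (2, 4, 5)
```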

apply_gain(force=False)[source]

Use the gain to convert images from ADUs to electrons/counts.

Conversion applied to image, var, and rn2img.

Parameters:

force (bool, optional) – Force the gain to be applied to the image, even if the step log (steps) indicates that it already has been.
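A hedged sketch of the amp-by-amp conversion, assuming a datasec-style image whose integer values label the amplifier that read each pixel (the function name is illustrative, not PypeIt's implementation):

```python
import numpy as np

def apply_gain_sketch(image, datasec_img, gains):
    """Convert ADU to electrons amp by amp (illustrative only)."""
    out = image.astype(float).copy()
    for amp, gain in enumerate(gains, start=1):
        # Multiply only the pixels read by this amplifier.
        out[datasec_img == amp] *= gain
    return out

image = np.full((2, 4), 100.0)            # raw counts in ADU
datasec = np.array([[1, 1, 2, 2],
                    [1, 1, 2, 2]])        # two amplifiers, left and right
counts = apply_gain_sketch(image, datasec, gains=[1.5, 2.0])
```

In the real method the same multiplicative factor is also propagated to the variance arrays (squared, for var and rn2img).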

property bpm

Generate and return the bad pixel mask for this image.

Warning

BPMs are for processed (e.g. trimmed, rotated) images only!

Returns:

Bad pixel mask, with bad pixels set to 1.

Return type:

numpy.ndarray

build_dark(dark_image=None, expscale=False)[source]

Build the dark image data used for dark subtraction and error propagation.

If a dark image is not provided, the dark image is simply the tabulated value and the error is set to None. Otherwise, the dark is the combination of the tabulated dark-current for the detector and a dark image. For this to be appropriate, the dark image (if provided) must also have had the tabulated dark-current value subtracted from it.

Also, the processing of the dark image (if provided) should match the processing of the image being processed. For example, if this image has been bias subtracted, so too should the dark image.

If the dark_image object includes an inverse variance estimate, this is used to set the dark-current error.

Warning

Typically dark frames should have the same exposure time as the image being processed. However, beware if that’s not the case, and make sure any use of exposure time scaling of the counts (see expscale) is appropriate!

Parameters:
  • dark_image (PypeItImage, optional) – The observed dark image in counts (not counts/s). If None, only the tabulated dark current is used to construct the dark image(s).

  • expscale (bool, optional) – Scale the dark image (if provided) by the ratio of the exposure times so that the counts per second represented by the dark image are correct.
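The combination of tabulated rate, observed frame, and exposure-time scaling can be sketched as follows. This is illustrative only: the dark current is assumed here to be in counts/s, and the function name is hypothetical.

```python
import numpy as np

def build_dark_sketch(darkcurr, exptime, dark_image=None, dark_exptime=None,
                      expscale=False):
    """Illustrative dark construction: tabulated rate plus optional frame."""
    # Tabulated contribution over this frame's exposure time.
    dark = darkcurr * exptime
    if dark_image is not None:
        # Optionally rescale the observed frame by the exposure-time ratio.
        scale = exptime / dark_exptime if expscale else 1.0
        # The observed dark frame is assumed to have had the tabulated
        # component already subtracted, so the two contributions simply add.
        dark = dark + scale * dark_image
    return dark

frame = np.full((2, 2), 10.0)  # observed dark counts over a 100 s exposure
dark = build_dark_sketch(0.01, exptime=200.0, dark_image=frame,
                         dark_exptime=100.0, expscale=True)
```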

build_ivar()[source]

Generate the inverse variance in the image.

This is a simple wrapper for base_variance() and variance_model().

Returns:

The inverse variance in the image.

Return type:

numpy.ndarray
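The final inversion is the usual masked reciprocal; a minimal numpy sketch:

```python
import numpy as np

def invert_variance(var):
    """Return 1/var where var > 0, and 0 elsewhere (illustrative)."""
    # The inner where() avoids dividing by zero before masking.
    return np.where(var > 0, 1.0 / np.where(var > 0, var, 1.0), 0.0)

var = np.array([4.0, 0.0, 2.0])
ivar = invert_variance(var)
```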

build_mosaic()[source]

When processing multiple detectors, this remaps the detector data to a mosaic.

This is largely a wrapper for multiple calls to build_image_mosaic(). Resampling is currently restricted to nearest grid-point interpolation (order=0).

Construction of the mosaic(s) must be done after the images have been trimmed and oriented to follow the PypeIt convention.

This function remaps image, datasec_img, rn2img, dark, dark_var, proc_var, and base_var, all of which are originally calculated in the native detector frame. img_scale is not remapped because it is only relevant to the flat-field images, which are always processed in the mosaic frame.

build_rn2img(units='e-', digitization=False)[source]

Generate the model readnoise variance image (rn2img).

This is primarily a wrapper for rn2_frame().

Parameters:
  • units (str, optional) – Units for the output variance. Options are 'e-' for variance in square electrons (counts) or 'ADU' for square ADU.

  • digitization (bool, optional) – Include digitization error in the calculation.

Returns:

Readnoise variance image.

Return type:

numpy.ndarray
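Per amplifier, the readnoise variance is essentially the squared readnoise, optionally with a digitization term. A hedged sketch, assuming readnoise in electrons and the standard uniform-quantization variance gain²/12 (the function name is illustrative):

```python
import numpy as np

def rn2_sketch(shape, datasec_img, ronoise, gain, digitization=False):
    """Illustrative readnoise-variance image in electrons squared."""
    rn2 = np.zeros(shape, dtype=float)
    for amp, (rn, g) in enumerate(zip(ronoise, gain), start=1):
        var = rn**2
        if digitization:
            # ADC quantization noise, converted to electrons squared.
            var += g**2 / 12.0
        rn2[datasec_img == amp] = var
    return rn2

datasec = np.array([[1, 2]])
rn2 = rn2_sketch((1, 2), datasec, ronoise=[3.0, 4.0], gain=[1.0, 2.0])
```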

estimate_readnoise()[source]

Estimate the readnoise (in electrons) based on the overscan regions of the image.

If the readnoise is not known for any of the amplifiers (i.e., if ronoise is \(\leq 0\)) or if explicitly requested using the empirical_rn parameter, the function estimates it using the standard deviation in the overscan region.

Warning

This function edits ronoise in place.
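The empirical estimate is essentially the scatter of the overscan pixels converted to electrons. A minimal sketch on simulated data, using a plain standard deviation (PypeIt's actual estimator may be more robust than this):

```python
import numpy as np

rng = np.random.default_rng(42)
gain = 1.5                                   # e-/ADU, assumed known
true_rn = 4.0                                # electrons
# Simulated overscan region: pure read noise around the bias level, in ADU.
overscan = 1000.0 + rng.normal(0.0, true_rn / gain, size=(500, 50))

# Empirical readnoise in electrons from the overscan scatter.
est_rn = gain * np.std(overscan)
```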

flatfield(flatimages, slits=None, force=False, debug=False)[source]

Field flatten the processed image.

This method uses the results of the flat-field modeling code (see FlatField) and any measured spatial shift due to flexure to construct slit-illumination, spectral-response, and pixel-to-pixel response corrections, and multiplicatively removes them from the current image. If available, the calculation is propagated to the variance image; however, no uncertainty in the flat-field corrections is included.

Warning

If you want the spatial flexure to be accounted for, you must first calculate the shift using spatial_flexure_shift().

Parameters:
  • flatimages (FlatImages) – Flat-field images used to apply flat-field corrections.

  • slits (SlitTraceSet, optional) – Used to construct the slit illumination profile, and only required if this is to be calculated and normalized out. See fit2illumflat().

  • force (bool, optional) – Force the image to be field flattened, even if the step log (steps) indicates that it already has been.

  • debug (bool, optional) – Run in debug mode.

Returns:

Returns a boolean array flagging pixels where the total applied flat-field value (i.e., the combination of the pixelflat and illumination corrections) was <= 0.

Return type:

numpy.ndarray

orient(force=False)[source]

Orient image attributes such that they follow the PypeIt convention with spectra running blue (down) to red (up) and with orders decreasing from high (left) to low (right).

This edits image, rn2img (if it exists), proc_var (if it exists), and datasec_img in place.

Parameters:

force (bool, optional) – Force the image to be re-oriented, even if the step log (steps) indicates that it already has been.

process(par, bpm=None, scattlight=None, flatimages=None, bias=None, slits=None, dark=None, mosaic=False, debug=False)[source]

Process the data.

See further discussion of Basic Image Processing in PypeIt.

The processing steps used (depending on the parameter toggling in par), in the order they will be applied are:

  1. apply_gain(): The first step is to convert the image units from ADU to electrons, amp by amp, using the gain provided by the DetectorContainer instance(s) for each Spectrograph subclass.

  2. subtract_pattern(): Analyze and subtract sinusoidal pattern noise from the image; see subtract_pattern().

  3. build_rn2img(): Construct the readnoise variance image, which includes readnoise and digitization error. If any of the amplifiers on the detector do not have a measured readnoise or if explicitly requested using the empirical_rn parameter, the readnoise is estimated using estimate_readnoise().

  4. subtract_overscan(): Use the detector overscan region to measure and subtract the frame-dependent bias level along the readout direction.

  5. trim(): Trim the image to include the data regions only (i.e. remove the overscan).

  6. orient(): Orient the image in the PypeIt orientation — spectral coordinates ordered along the first axis and spatial coordinates ordered along the second, (spec, spat) — with blue to red going from small pixel numbers to large.

  7. subtract_bias(): Subtract the processed bias image. The shape and orientation of the bias image must match the processed image. I.e., if you trim and orient this image, you must also have trimmed and oriented the bias frames.

  8. build_dark(): Create dark-current images using both the tabulated dark-current value for each detector and any directly observed dark images. The shape and orientation of the observed dark image must match the processed image. I.e., if you trim and orient this image, you must also have trimmed and oriented the dark frames. To scale the dark image by the ratio of the exposure times to ensure the counts/s in the dark are removed from the image being processed, set the dark_expscale parameter to true.

  9. subtract_dark(): Subtract the processed dark image and propagate any error.

  10. build_mosaic(): If data from multiple detectors are being processed as components of a detector mosaic, this resamples the individual images into a single image mosaic. The current “resampling” scheme is restricted to nearest grid-point interpolation; see build_mosaic(). The placement of this step is important: all of the previous corrections (overscan, trim, orientation, bias and dark subtraction) are done on the individual detector images, whereas subsequent steps potentially need the slits and flat-field images, which are only defined in the mosaic frame. Because of this, bias and dark frames should never be reformatted into a mosaic.

  11. spatial_flexure_shift(): Measure any spatial shift due to flexure.

  12. subtract_scattlight(): Generate a model of the scattered light contribution and subtract it.

  13. flatfield(): Divide by the pixel-to-pixel, spatial and spectral response functions.

  14. build_ivar(): Construct a model estimate of the variance in the image based on the readnoise, errors from the additive processing steps, shot noise from the observed counts (see the shot_noise parameter), a rescaling due to the flat-field correction, and a noise floor that sets a maximum S/N per pixel (see the noise_floor parameter); see variance_model().

  15. build_crmask(): Generate a cosmic-ray mask.

Parameters:
  • par (ProcessImagesPar) – Parameters that dictate the processing of the images. See pypeit.par.pypeitpar.ProcessImagesPar for the defaults.

  • bpm (numpy.ndarray, optional) – The bad-pixel mask. This is used to overwrite the default bad-pixel mask for this spectrograph. The shape must match a trimmed and oriented processed image.

  • scattlight (ScatteredLight, optional) – Scattered light model to be used to determine scattered light.

  • flatimages (FlatImages, optional) – Flat-field images used to apply flat-field corrections.

  • bias (PypeItImage, optional) – Bias image for bias subtraction.

  • slits (SlitTraceSet, optional) – Used to calculate spatial flexure between the image and the slits, if requested via the spat_flexure_correct parameter in par; see spat_flexure_shift(). Also used to construct the slit illumination profile, if requested via the use_illumflat parameter in par; see fit2illumflat().

  • dark (PypeItImage, optional) – Dark image used for dark subtraction.

  • mosaic (bool, optional) – When processing multiple detectors, resample the images into a mosaic. If flats or slits are provided (and used), this must be true because these objects are always defined in the mosaic frame.

  • debug (bool, optional) – Run in debug mode.

Returns:

The processed image.

Return type:

PypeItImage

property shape
spatial_flexure_shift(slits, force=False)[source]

Calculate a spatial shift in the edge traces due to flexure.

This is a simple wrapper for spat_flexure_shift().

Parameters:
  • slits (SlitTraceSet) – Slit edge traces.

  • force (bool, optional) – Force the spatial flexure shift to be recalculated, even if the step log (steps) indicates that it already has been.

Returns:

The calculated flexure correction

Return type:

float

subtract_bias(bias_image, force=False)[source]

Subtract a bias image.

If the bias_image object includes an inverse variance image and if var is available, the error in the bias is propagated to the bias-subtracted image.

Parameters:
  • bias_image (PypeItImage) – Bias image

  • force (bool, optional) – Force the image to be subtracted, even if the step log (steps) indicates that it already has been.

subtract_continuum(force=False)[source]

Subtract the continuum level from the image.

Parameters:

force (bool, optional) – Force the continuum to be subtracted, even if the step log (steps) indicates that it already has been.

subtract_dark(force=False)[source]

Subtract detector dark current.

The dark and dark_ivar arrays must have already been constructed using build_dark(). If they aren’t, a warning is thrown and nothing is done.

Parameters:

force (bool, optional) – Force the dark to be subtracted, even if the step log (steps) indicates that it already has been.

subtract_overscan(force=False)[source]

Analyze and subtract the overscan from the image.

If this is a mosaic, the method loops over the individual detectors.

Parameters:

force (bool, optional) – Force the image to be overscan subtracted, even if the step log (steps) indicates that it already has been.
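A hedged sketch of one common approach: estimate the bias level row by row from the overscan columns (here with a median) and subtract it from the data region. The overscan modeling in PypeIt is configurable; this shows only the simplest variant, with an illustrative function name.

```python
import numpy as np

def subtract_overscan_sketch(raw, data_cols, oscan_cols):
    """Row-by-row median overscan subtraction (illustrative only)."""
    # Bias level along the readout direction, one value per row.
    bias = np.median(raw[:, oscan_cols], axis=1, keepdims=True)
    # Subtract the per-row bias from the data region.
    return raw[:, data_cols] - bias

raw = np.hstack([np.full((3, 4), 110.0),   # data: signal + 100 ADU bias
                 np.full((3, 2), 100.0)])  # overscan: bias only
clean = subtract_overscan_sketch(raw, slice(0, 4), slice(4, 6))
```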

subtract_pattern()[source]

Analyze and subtract the pattern noise from the image.

This is primarily a wrapper for subtract_pattern().

subtract_scattlight(msscattlight, slits, debug=False)[source]

Analyze and subtract the scattered light from the image.

This is primarily a wrapper for scattered_light_model().

Parameters:
  • msscattlight (ScatteredLight) – Scattered light calibration frame

  • slits (SlitTraceSet) – Slit edge information

  • debug (bool, optional) – If True, debug the computed scattered light image

trim(force=False)[source]

Trim image attributes to include only the science data.

This edits image, rn2img (if it exists), proc_var (if it exists), and datasec_img in place.

Parameters:

force (bool, optional) – Force the image to be trimmed, even if the step log (steps) indicates that it already has been.

property use_flat

Flag indicating whether the flat-field data should be used in the image processing.

property use_slits

Flag indicating whether the slit-edge traces should be used in the image processing. The slits are required if a spatial flexure correction is requested and/or the slit-illumination profile is to be removed.