pypeit.coadd3d module

Module containing routines used by 3D datacubes.

class pypeit.coadd3d.CoAdd3D(spec2dfiles, par, skysub_frame=None, scale_corr=None, ra_offsets=None, dec_offsets=None, spectrograph=None, det=None, overwrite=False, show=False, debug=False)[source]

Bases: object

Main routine to convert processed PypeIt spec2d frames into DataCube (spec3d) files. This routine is only used for IFU data reduction.

Algorithm steps are detailed in the coadd routine.

add_grating_corr(flatfile, waveimg, slits, spat_flexure=None)[source]

Calculate the relative spectral sensitivity correction due to grating shifts between the input frames.

Parameters:
  • flatfile (str) – Unique path of a flatfield frame used to calculate the relative spectral sensitivity of the corresponding science frame.

  • waveimg (numpy.ndarray) – 2D image (same shape as the science frame) indicating the wavelength of each detector pixel.

  • slits (SlitTraceSet) – Class containing information about the slits

  • spat_flexure (float, optional) – Spatial flexure in pixels

check_outputs()[source]

Check if any of the intended output files already exist. This check should be done near the beginning of the coaddition, to avoid any computation that won’t be saved in the event that files won’t be overwritten.

get_current_scalecorr(spec2DObj, scalecorr=None)[source]

Determine the scale correction that should be applied to account for the relative spectral scaling of the science frame.

Parameters:
  • spec2DObj (Spec2DObj) – 2D PypeIt spectra object.

  • scalecorr (str, optional) –

    A string that describes what mode should be used for the relative spectral scale correction. The allowed values are:

    • default: Use the default value, as defined in set_default_scalecorr().

    • image: Use the relative scale that was derived from the science frame

    • none: Do not perform relative scale correction

Returns:

Contains (this_scalecorr, relScaleImg) where this_scalecorr is a str that describes the scale correction mode to be used (see scalecorr description) and relScaleImg is a numpy.ndarray (2D, same shape as science frame) containing the relative spectral scaling to apply to the science frame.

Return type:

tuple

get_current_skysub(spec2DObj, exptime, opts_skysub=None)[source]

Determine the sky frame that should be subtracted from the science frame.

Parameters:
  • spec2DObj (Spec2DObj) – 2D PypeIt spectra object.

  • exptime (float) – The exposure time of the science frame (in seconds)

  • opts_skysub (str, optional) –

    A string that describes what mode should be used for the sky subtraction. The allowed values are:

    • default: Use the default value, as defined in set_default_skysub()

    • image: Use the sky model derived from the science frame

    • none: Do not perform sky subtraction

Returns:

Contains (this_skysub, skyImg, skyScl) where this_skysub is a str that describes the sky subtraction mode to be used (see opts_skysub description), skyImg is a numpy.ndarray (2D, same shape as science frame) containing the sky frame to be subtracted from the science frame, and skyScl is a numpy.ndarray (2D, same shape as science frame) containing the relative spectral scaling that has been applied to the returned sky frame.

Return type:

tuple
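
As a hedged usage sketch, the tuples returned by get_current_scalecorr() and get_current_skysub() can be unpacked as follows; coadd and spec2DObj are hypothetical, previously prepared objects (an instantiated CoAdd3D subclass and a loaded Spec2DObj), and the exposure time is in seconds as documented above.

# Minimal sketch with hypothetical objects prepared elsewhere.
this_scalecorr, relScaleImg = coadd.get_current_scalecorr(spec2DObj, scalecorr='default')
this_skysub, skyImg, skyScl = coadd.get_current_skysub(spec2DObj, 60.0, opts_skysub='default')

# All returned images share the shape of the science frame.
assert relScaleImg.shape == skyImg.shape == skyScl.shape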

classmethod get_instance(spec2dfiles, par, skysub_frame=None, scale_corr=None, ra_offsets=None, dec_offsets=None, spectrograph=None, det=1, overwrite=False, show=False, debug=False)[source]

Instantiate the subclass appropriate for the provided spectrograph.

The class to instantiate must match the pypeline attribute of the provided spectrograph, and must be a subclass of CoAdd3D; see the parent class instantiation for parameter descriptions.

Returns:

One of the subclasses with CoAdd3D as its base.

Return type:

CoAdd3D
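
A hedged usage sketch is given below; the spec2d file names are hypothetical, and the parameter set is built from the spectrograph defaults purely for brevity (in practice it would normally be constructed from a coadd3d input file).

from pypeit.coadd3d import CoAdd3D
from pypeit.spectrographs.util import load_spectrograph

# Hypothetical processed spec2d files and a default parameter set.
spec2dfiles = ['spec2d_frame1.fits', 'spec2d_frame2.fits']
spectrograph = load_spectrograph('keck_kcwi')
par = spectrograph.default_pypeit_par()

# get_instance() selects the subclass matching the spectrograph pypeline
# (e.g. SlicerIFUCoAdd3D for SlicerIFU data); run() then performs the coaddition.
coadd = CoAdd3D.get_instance(spec2dfiles, par, spectrograph=spectrograph, overwrite=True)
coadd.run()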

make_sensfunc()[source]

Generate the sensitivity function to be used for the flux calibration.

run()[source]

Main entry routine to set the order of operations to coadd the data. For specific details of this procedure, see the child routines.

set_blaze_spline(wave_spl, spec_spl)[source]

Generate a spline that represents the blaze function. This only needs to be done once, because it is used as the reference blaze. It is only important if you are combining frames that require a grating correction (i.e. have slightly different grating angles).

Parameters:
  • wave_spl (numpy.ndarray) – 1D wavelength array where the blaze has been evaluated

  • spec_spl (numpy.ndarray) – 1D array (same size as wave_spl), that represents the blaze function for each wavelength.
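
A minimal sketch of calling this method with a synthetic blaze is shown below; the Gaussian shape is illustrative only, and coadd is assumed to be an instantiated CoAdd3D subclass.

import numpy as np

# Synthetic reference blaze evaluated on a 1D wavelength grid (Angstroms).
wave_spl = np.linspace(3500.0, 5500.0, 2000)
spec_spl = np.exp(-0.5 * ((wave_spl - 4500.0) / 400.0) ** 2)   # toy blaze shape
coadd.set_blaze_spline(wave_spl, spec_spl)   # stored as the reference blaze spline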

set_default_scalecorr()[source]

Set the default mode to use for relative spectral scale correction.

set_default_skysub()[source]

Set the default mode to use for sky subtraction.

class pypeit.coadd3d.DARcorrection(airmass, parangle, pressure, temperature, humidity, cosdec, wave_ref=4500.0)[source]

Bases: object

This class holds all of the functions needed to quickly compute the differential atmospheric refraction correction.

calculate_dispersion(waves)[source]

Calculate the total atmospheric dispersion relative to the reference wavelength

Parameters:

waves (numpy.ndarray) – 1D array of wavelengths (units must be Angstroms)

Returns:

full_dispersion – The atmospheric dispersion (in degrees) for each wavelength input

Return type:

numpy.ndarray

correction(waves)[source]

Main routine that computes the DAR correction for both right ascension and declination.

Parameters:

waves (numpy.ndarray) – 1D array of wavelengths (units must be Angstroms)

Returns:

  • ra_corr (numpy.ndarray) – The RA component of the atmospheric dispersion correction (in degrees) for each wavelength input.

  • dec_corr (numpy.ndarray) – The Dec component of the atmospheric dispersion correction (in degrees) for each wavelength input.
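
A hedged end-to-end sketch of this class is shown below, with made-up observing conditions; the exact units and types expected for the pressure, temperature, humidity, and parallactic-angle inputs should be checked against the DARcorrection implementation.

import numpy as np
from pypeit.coadd3d import DARcorrection

# Made-up observing conditions (illustrative only).
cosdec = np.cos(np.radians(-10.0))                    # cosine of the declination
dar = DARcorrection(airmass=1.3, parangle=np.radians(45.0), pressure=610.0,
                    temperature=5.0, humidity=0.2, cosdec=cosdec, wave_ref=4500.0)

waves = np.linspace(3500.0, 5500.0, 100)              # Angstroms
ra_corr, dec_corr = dar.correction(waves)             # RA and Dec corrections in degrees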

class pypeit.coadd3d.DataCube(flux, sig, bpm, wave, PYP_SPEC, blaze_wave, blaze_spec, sensfunc=None, fluxed=None)[source]

Bases: DataContainer

DataContainer to hold the products of a datacube

The datamodel attributes are:

Version: 1.2.0

  • PYP_SPEC (str) – PypeIt: Spectrograph name

  • blaze_spec (numpy.ndarray of numpy.floating) – The spectral blaze function

  • blaze_wave (numpy.ndarray of numpy.floating) – Wavelength array of the spectral blaze function

  • bpm (numpy.ndarray of numpy.uint8) – Bad pixel mask of the datacube (0=good, 1=bad)

  • flux (numpy.ndarray of numpy.floating) – Flux datacube in units of counts/s/Ang/arcsec^2 or 10^-17 erg/s/cm^2/Ang/arcsec^2

  • fluxed (bool) – Boolean indicating if the datacube is fluxed.

  • sensfunc (numpy.ndarray of numpy.floating) – Sensitivity function 10^-17 erg/(counts/cm^2)

  • sig (numpy.ndarray of numpy.floating) – Error datacube (matches units of flux)

  • wave (numpy.ndarray of numpy.floating) – Wavelength of each slice in the spectral direction. The units are Angstroms.

Parameters:
  • flux (numpy.ndarray) – The science datacube (nwave, nspaxel_y, nspaxel_x)

  • sig (numpy.ndarray) – The error datacube (nwave, nspaxel_y, nspaxel_x)

  • bpm (numpy.ndarray) – The bad pixel mask of the datacube (nwave, nspaxel_y, nspaxel_x). True values indicate a bad pixel

  • wave (numpy.ndarray) – A 1D numpy array containing the wavelength array for convenience (nwave)

  • blaze_wave (numpy.ndarray) – Wavelength array of the spectral blaze function

  • blaze_spec (numpy.ndarray) – The spectral blaze function

  • sensfunc (numpy.ndarray, None) – Sensitivity function (nwave,). Only saved if the data are fluxed.

  • PYP_SPEC (str) – Name of the PypeIt Spectrograph

  • fluxed (bool) – If the cube has been flux calibrated, this will be set to “True”
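
A hedged construction sketch using tiny synthetic arrays is shown below; real cubes are produced by CoAdd3D, so this only illustrates the expected argument order and array shapes.

import numpy as np
from pypeit.coadd3d import DataCube

# Tiny synthetic cube: nwave=5, 4x3 spaxels.
nwave, ny, nx = 5, 4, 3
flux = np.zeros((nwave, ny, nx), dtype=float)
sig = np.ones((nwave, ny, nx), dtype=float)
bpm = np.zeros((nwave, ny, nx), dtype=np.uint8)       # 0 = good, 1 = bad
wave = np.linspace(3500.0, 5500.0, nwave)             # Angstroms
blaze_wave = wave.copy()
blaze_spec = np.ones(nwave)

cube = DataCube(flux, sig, bpm, wave, 'keck_kcwi', blaze_wave, blaze_spec,
                sensfunc=None, fluxed=False)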

head0

Primary header

Type:

astropy.io.fits.Header

filename

Filename to use when loading from file

Type:

str

spect_meta

Parsed meta from the header

Type:

dict

spectrograph

Build from PYP_SPEC

Type:

Spectrograph

_ivar

Inverse variance of the datacube, computed on demand (see ivar)

Type:

numpy.ndarray

_wcs

World coordinate system of the datacube, built from the stored header information (see wcs)

Type:

astropy.wcs.WCS

_bundle()[source]

Over-write default _bundle() method to separate the DetectorContainer into its own HDU

Returns:

A list of dictionaries; each element of the list is written to its own fits extension. See the description above.

Return type:

list

datamodel = {'PYP_SPEC': {'descr': 'PypeIt: Spectrograph name', 'otype': <class 'str'>}, 'blaze_spec': {'atype': <class 'numpy.floating'>, 'descr': 'The spectral blaze function', 'otype': <class 'numpy.ndarray'>}, 'blaze_wave': {'atype': <class 'numpy.floating'>, 'descr': 'Wavelength array of the spectral blaze function', 'otype': <class 'numpy.ndarray'>}, 'bpm': {'atype': <class 'numpy.uint8'>, 'descr': 'Bad pixel mask of the datacube (0=good, 1=bad)', 'otype': <class 'numpy.ndarray'>}, 'flux': {'atype': <class 'numpy.floating'>, 'descr': 'Flux datacube in units of counts/s/Ang/arcsec^2 or 10^-17 erg/s/cm^2/Ang/arcsec^2', 'otype': <class 'numpy.ndarray'>}, 'fluxed': {'descr': 'Boolean indicating if the datacube is fluxed.', 'otype': <class 'bool'>}, 'sensfunc': {'atype': <class 'numpy.floating'>, 'descr': 'Sensitivity function 10^-17 erg/(counts/cm^2)', 'otype': <class 'numpy.ndarray'>}, 'sig': {'atype': <class 'numpy.floating'>, 'descr': 'Error datacube (matches units of flux)', 'otype': <class 'numpy.ndarray'>}, 'wave': {'atype': <class 'numpy.floating'>, 'descr': 'Wavelength of each slice in the spectral direction. The units are Angstroms.', 'otype': <class 'numpy.ndarray'>}}

Provides the class data model. This is undefined in the abstract class and should be overwritten in the derived classes.

The format of the datamodel needed for each implementation of a DataContainer derived class is as follows.

The datamodel itself is a class attribute (i.e., it is a member of the class, not just of an instance of the class). The datamodel is a dictionary of dictionaries: Each key of the datamodel dictionary provides the name of a given datamodel element, and the associated item (dictionary) for the datamodel element provides the type and description information for that datamodel element. For each datamodel element, the dictionary item must provide:

  • otype: This is the type of the object for this datamodel item. E.g., for a float or a numpy.ndarray, you would set otype=float and otype=np.ndarray, respectively.

  • descr: This provides a text description of the datamodel element. This is used to construct the datamodel tables in the pypeit documentation.

If the object type is a numpy.ndarray, you should also provide the atype keyword that sets the type of the data contained within the array. E.g., for a floating point array containing an image, your datamodel could be simply:

datamodel = {'image' : dict(otype=np.ndarray, atype=float, descr='My image')}

More advanced examples are given in the top-level module documentation.

Currently, datamodel components are restricted to have otype that are tuple, int, float, numpy.integer, numpy.floating, numpy.ndarray, or astropy.table.Table objects. E.g., datamodel values for otype cannot be dict.
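
For instance, the DataCube datamodel shown above follows this pattern; the snippet below simply inspects its 'flux' element.

from pypeit.coadd3d import DataCube

entry = DataCube.datamodel['flux']
print(entry['otype'])   # <class 'numpy.ndarray'>
print(entry['atype'])   # <class 'numpy.floating'>
print(entry['descr'])   # 'Flux datacube in units of ...'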

classmethod from_file(ifile, verbose=True, chk_version=True, **kwargs)[source]

Instantiate the object from an extension in the specified fits file.

Over-load from_file() to deal with the header

Parameters:
  • ifile (str, Path) – Fits file with the data to read

  • verbose (bool, optional) – Print informational messages (not currently used)

  • chk_version (bool, optional) – Passed to from_hdu().

  • kwargs (dict, optional) – Arguments passed directly to from_hdu().

internals = ['head0', 'filename', 'spectrograph', 'spect_meta', '_ivar', '_wcs']

A list of strings identifying a set of internal attributes that are not part of the datamodel.

property ivar

Utility function to compute the inverse variance cube

Returns:

self._ivar – The inverse variance of the datacube. Note that self._ivar should not be accessed directly, and you should only call self.ivar

Return type:

numpy.ndarray

to_file(ofile, primary_hdr=None, hdr=None, **kwargs)[source]

Over-load to_file() to deal with the header

Parameters:
  • ofile (str) – Filename

  • primary_hdr (astropy.io.fits.Header, optional) – Base primary header. Updated with new subheader keywords. If None, initialized using initialize_header().

  • hdr (astropy.io.fits.Header, optional) – The World Coordinate System, represented by a fits header

  • kwargs (dict) – Keywords passed directly to parent to_file function.

version = '1.2.0'

Provides the string representation of the class version.

This currently sees minimal use, but it will be used for I/O verification in the future.

Each derived class should provide a version to guard against data model changes during development.

property wcs

Utility function to provide the world coordinate system of the datacube

Returns:

self._wcs – The WCS based on the stored header information. Note that self._wcs should not be accessed directly, and you should only call self.wcs

Return type:

astropy.wcs.WCS
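
A hedged I/O sketch combining from_file(), the ivar and wcs properties, and to_file() is given below; the file names are hypothetical.

from pypeit.coadd3d import DataCube

cube = DataCube.from_file('spec3d_frame1.fits')

print(cube.flux.shape)     # (nwave, nspaxel_y, nspaxel_x)
print(cube.wave[[0, -1]])  # wavelength coverage in Angstroms
ivar = cube.ivar           # inverse-variance cube (see the ivar property above)
wcs = cube.wcs             # astropy.wcs.WCS built from the stored header
cube.to_file('spec3d_copy.fits', overwrite=True)   # extra kwargs go to the parent to_file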

class pypeit.coadd3d.SlicerIFUCoAdd3D(spec2dfiles, par, skysub_frame=None, scale_corr=None, ra_offsets=None, dec_offsets=None, spectrograph=None, det=1, overwrite=False, show=False, debug=False)[source]

Bases: CoAdd3D

Child of CoAdd3D for SlicerIFU data reduction. For documentation, see the CoAdd3D parent class above.

This child class performs the series of datacube-creation steps that are specific to slicer-based IFUs, including the following:

Data preparation:

  • Load the individual spec2d files

  • If requested, subtract the sky (either from a dedicated sky frame, or using the sky model stored in the science spec2d file)

  • Mask the sky regions near the spectral edges of the slits

  • Apply a relative spectral illumination correction (scalecorr) that registers all input frames to a common illumination scale

  • Generate a WCS for each individual frame, and calculate the RA and Dec of each individual detector pixel

  • Calculate the astrometric correction needed to align spatial positions along the slices

  • Compute the differential atmospheric refraction correction

  • Apply the extinction correction

  • Apply a grating correction (gratcorr) - this corrects for the relative spectral efficiency when combining data taken with multiple different grating angles

  • Flux calibrate

Data cube generation:

  • If frames are not being combined, individual data cubes are generated and saved as DataCube objects. A white light image is also produced, if requested

  • If frames are being aligned and/or combined, the following steps are followed:
    • The output voxel sampling is computed (this must be consistent for all frames)

    • Frames are aligned (either by user-specified offsets, or by cross-correlation of the white light images)

    • The relative weights for each detector pixel are computed

    • If frames are not being combined, individual DataCube objects will be generated for each frame

    • If frames are being combined, a single DataCube will be generated.

    • White light images are also produced, if requested.

compute_weights()[source]

Compute the relative weights to apply to pixels that are collected into the voxels of the output DataCubes

Returns:

The individual weights for each detector pixel in every frame.

Return type:

numpy.ndarray

get_alignments(spec2DObj, slits, spat_flexure=None)[source]

Generate and return the spline interpolation fitting functions to be used for the alignment frames, as part of the astrometric correction.

Parameters:
  • spec2DObj (Spec2DObj) – 2D PypeIt spectra object.

  • slits (SlitTraceSet) – Class containing information about the slits

  • spat_flexure (float, optional) – Spatial flexure in pixels

Returns:

alignSplines – Alignment splines used for the astrometric correction

Return type:

AlignmentSplines

load()[source]

This is the main function that loads in the data, and performs several frame-specific corrections. If the user does not wish to align or combine the individual datacubes, then this routine will also produce a spec3d file, which is a DataCube representation of a PypeIt spec2d frame for SlicerIFU data.

This function should be called in the __init__ method, and initialises multiple variables. The variables initialised by this function include:

  • self.ifu_ra - The RA of the IFU pointing

  • self.ifu_dec - The Dec of the IFU pointing

  • self.mnmx_wv - The minimum and maximum wavelengths of every slit and frame.

  • self._spatscale - The native spatial scales of all spec2d frames.

  • self._specscale - The native spectral scales of all spec2d frames.

  • self.weights - Weights to use when combining cubes

  • self.flat_splines - Spline representations of the blaze function (based on the illumflat).

  • self.blaze_spline - Spline representation of the reference blaze function

  • self.blaze_wave - Wavelength array used to construct the reference blaze function

  • self.blaze_spec - Spectrum used to construct the reference blaze function

This function also initialises the primary arrays that store the pixel information for multiple spec2d frames, including:

  • self.all_sci

  • self.all_ivar

  • self.all_wave

  • self.all_slitid

  • self.all_wghts

  • self.all_tilts

  • self.all_slits

  • self.all_align

  • self.all_wcs

  • self.all_ra

  • self.all_dec

  • self.all_dar

run()[source]

This is the main routine called to convert PypeIt spec2d files into PypeIt DataCube objects. It is specific to the SlicerIFU data.

First the data are loaded and several corrections are made. These include:

  • A sky frame or model is subtracted from the science data, and the relative spectral illumination of different slices is corrected.

  • A mask of good pixels is identified

  • A common spaxel scale is determined, and the astrometric correction is derived

  • RA and Dec images are created, giving the sky coordinates of each pixel.

  • Based on atmospheric conditions, a differential atmospheric refraction correction is applied.

  • The extinction correction is applied.

  • Flux calibration is applied (optional - this calibration is only applied if a standard star cube is supplied).

If the input frames will not be combined (combine=False) and will not be aligned (align=False), then each individual spec2d file is converted into a spec3d file (i.e. a PypeIt DataCube object). These fits files can be loaded/viewed in other software packages to display or combine multiple datacubes into a single datacube. However, note that different software packages use combination algorithms that may not conserve flux, or may produce covariance between adjacent voxels.

If the user wishes to either spatially align multiple exposures (align=True) or combine multiple exposures (combine=True), then the next set of operations include:

  • Generate white light images of each individual cube (according to a user-specified wavelength range)

  • Align multiple frames if align=True (either manually by user input, or automatically by cross-correlation)

  • Create the output WCS, and apply the flux calibration to the data

  • Generate individual datacubes (combine=False) or one master datacube containing all exposures (combine=True). Note, there are several algorithms used to combine multiple frames. Refer to the subpixellate() routine for more details about the combination options.

run_align()[source]

This routine aligns multiple cubes by using manual input offsets or by cross-correlating white light images.

Returns:

  • numpy.ndarray – A new set of RA values that have been aligned.

  • numpy.ndarray – A new set of Dec values that have been aligned.