pypeit.coadd3d module
Module containing routines used by 3D datacubes.
- class pypeit.coadd3d.CoAdd3D(spec2dfiles, par, skysub_frame=None, sensfile=None, scale_corr=None, grating_corr=None, ra_offsets=None, dec_offsets=None, spectrograph=None, det=None, overwrite=False, show=False, debug=False)[source]
Bases: object
Main routine to convert processed PypeIt spec2d frames into DataCube (spec3d) files. This routine is only used for IFU data reduction.
Algorithm steps are detailed in the coadd routine.
- add_grating_corr(flatfile, waveimg, slits, spat_flexure=None)[source]
Calculate the relative spectral sensitivity correction due to grating shifts with the input frames.
- Parameters:
flatfile (str) – Unique path of a flatfield frame used to calculate the relative spectral sensitivity of the corresponding science frame.
waveimg (numpy.ndarray) – 2D image (same shape as the science frame) indicating the wavelength of each detector pixel.
slits (SlitTraceSet) – Class containing information about the slits.
spat_flexure (float, optional) – Spatial flexure in pixels.
- check_outputs()[source]
Check if any of the intended output files already exist. This check should be done near the beginning of the coaddition, to avoid any computation that won’t be saved in the event that files won’t be overwritten.
- get_current_scalecorr(spec2DObj, scalecorr=None)[source]
Determine the scale correction that should be used to correct for the relative spectral scaling of the science frame
- Parameters:
spec2DObj (Spec2DObj) – 2D PypeIt spectra object.
scalecorr (str, optional) – A string that describes what mode should be used for the relative spectral scale correction. The allowed values are:
default: Use the default value, as defined in set_default_scalecorr().
image: Use the relative scale that was derived from the science frame.
none: Do not perform relative scale correction.
- Returns:
Contains (this_scalecorr, relScaleImg), where this_scalecorr is a str that describes the scale correction mode to be used (see the scalecorr description) and relScaleImg is a numpy.ndarray (2D, same shape as the science frame) containing the relative spectral scaling to apply to the science frame.
- Return type:
tuple
- get_current_skysub(spec2DObj, exptime, opts_skysub=None)[source]
Determine the sky frame that should be used to subtract from the science frame
- Parameters:
spec2DObj (Spec2DObj) – 2D PypeIt spectra object.
exptime (float) – The exposure time of the science frame (in seconds).
opts_skysub (str, optional) – A string that describes what mode should be used for the sky subtraction. The allowed values are:
default: Use the default value, as defined in set_default_skysub().
image: Use the sky model derived from the science frame.
none: Do not perform sky subtraction.
- Returns:
Contains (this_skysub, skyImg, skyScl), where this_skysub is a str that describes the sky subtraction mode to be used (see the opts_skysub description), skyImg is a numpy.ndarray (2D, same shape as the science frame) containing the sky frame to be subtracted from the science frame, and skyScl is a numpy.ndarray (2D, same shape as the science frame) containing the relative spectral scaling that has been applied to the returned sky frame.
- Return type:
tuple
- classmethod get_instance(spec2dfiles, par, skysub_frame=None, sensfile=None, scale_corr=None, grating_corr=None, ra_offsets=None, dec_offsets=None, spectrograph=None, det=1, overwrite=False, show=False, debug=False)[source]
Instantiate the subclass appropriate for the provided spectrograph.
The class to instantiate must match the pypeline attribute of the provided spectrograph, and must be a subclass of CoAdd3D; see the parent class instantiation for parameter descriptions.
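A hedged usage sketch is given below; the spectrograph choice, the file names, and the way the parameter set is built are illustrative assumptions rather than defaults::

    # Illustrative sketch only: file names and spectrograph are assumptions.
    from pypeit.coadd3d import CoAdd3D
    from pypeit.spectrographs.util import load_spectrograph

    spectrograph = load_spectrograph('keck_kcwi')     # assumed SlicerIFU spectrograph
    par = spectrograph.default_pypeit_par()           # assumed parameter set for the coadd
    spec2dfiles = ['spec2d_frame01.fits', 'spec2d_frame02.fits']  # assumed inputs

    # get_instance() selects the pypeline-appropriate subclass (e.g. SlicerIFUCoAdd3D)
    coadd = CoAdd3D.get_instance(spec2dfiles, par, spectrograph=spectrograph,
                                 det=1, overwrite=True)
    coadd.run()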
- run()[source]
Main entry routine to set the order of operations to coadd the data. For specific details of this procedure, see the child routines.
- set_blaze_spline(wave_spl, spec_spl)[source]
Generate a spline that represents the blaze function. This only needs to be done once, because it is used as the reference blaze. It is only important if you are combining frames that require a grating correction (i.e. have slightly different grating angles).
- Parameters:
wave_spl (numpy.ndarray) – 1D wavelength array where the blaze has been evaluated
spec_spl (numpy.ndarray) – 1D array (same size as wave_spl) that represents the blaze function at each wavelength.
- class pypeit.coadd3d.DARcorrection(airmass, parangle, pressure, temperature, humidity, cosdec, wave_ref=4500.0)[source]
Bases: object
This class holds all of the functions needed to quickly compute the differential atmospheric refraction correction.
- calculate_dispersion(waves)[source]
Calculate the total atmospheric dispersion relative to the reference wavelength
- Parameters:
waves (numpy.ndarray) – 1D array of wavelengths (units must be Angstroms)
- Returns:
full_dispersion – The atmospheric dispersion (in degrees) for each wavelength input
- Return type:
numpy.ndarray
- correction(waves)[source]
Main routine that computes the DAR correction for both right ascension and declination.
- Parameters:
waves (numpy.ndarray) – 1D array of wavelengths (units must be Angstroms)
- Returns:
ra_corr (numpy.ndarray) – The RA component of the atmospheric dispersion correction (in degrees) for each wavelength input.
dec_corr (numpy.ndarray) – The Dec component of the atmospheric dispersion correction (in degrees) for each wavelength input.
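A minimal usage sketch; the observing-condition values below are illustrative assumptions, while the wavelength units (Angstroms) and the output units (degrees) follow the documentation above::

    import numpy as np
    from pypeit.coadd3d import DARcorrection

    # Illustrative observing conditions (assumed values, not defaults)
    dar = DARcorrection(airmass=1.2, parangle=0.5, pressure=610.0,
                        temperature=1.5, humidity=20.0,
                        cosdec=np.cos(np.radians(-10.0)))

    waves = np.linspace(3500.0, 5500.0, 5)      # wavelengths in Angstroms
    ra_corr, dec_corr = dar.correction(waves)   # RA/Dec corrections in degrees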
- class pypeit.coadd3d.DataCube(flux, sig, bpm, wave, PYP_SPEC, blaze_wave, blaze_spec, sensfunc=None, fluxed=None)[source]
Bases: DataContainer
DataContainer to hold the products of a datacube
The datamodel attributes are:
Version: 1.2.0
Attribute    Type           Array Type      Description
PYP_SPEC     str                            PypeIt: Spectrograph name
blaze_spec   numpy.ndarray  numpy.floating  The spectral blaze function
blaze_wave   numpy.ndarray  numpy.floating  Wavelength array of the spectral blaze function
bpm          numpy.ndarray  numpy.uint8     Bad pixel mask of the datacube (0=good, 1=bad)
flux         numpy.ndarray  numpy.floating  Flux datacube in units of counts/s/Ang/arcsec^2 or 10^-17 erg/s/cm^2/Ang/arcsec^2
fluxed       bool                           Boolean indicating if the datacube is fluxed.
sensfunc     numpy.ndarray  numpy.floating  Sensitivity function 10^-17 erg/(counts/cm^2)
sig          numpy.ndarray  numpy.floating  Error datacube (matches units of flux)
wave         numpy.ndarray  numpy.floating  Wavelength of each slice in the spectral direction. The units are Angstroms.
- Parameters:
flux (numpy.ndarray) – The science datacube (nwave, nspaxel_y, nspaxel_x)
sig (numpy.ndarray) – The error datacube (nwave, nspaxel_y, nspaxel_x)
bpm (numpy.ndarray) – The bad pixel mask of the datacube (nwave, nspaxel_y, nspaxel_x). True values indicate a bad pixel
wave (numpy.ndarray) – A 1D numpy array containing the wavelength array for convenience (nwave)
blaze_wave (numpy.ndarray) – Wavelength array of the spectral blaze function
blaze_spec (numpy.ndarray) – The spectral blaze function
sensfunc (numpy.ndarray, None) – Sensitivity function (nwave,). Only saved if the data are fluxed.
PYP_SPEC (str) – Name of the PypeIt Spectrograph
fluxed (bool) – If the cube has been flux calibrated, this will be set to “True”
- head0
Primary header
- Type: astropy.io.fits.Header
- spectrograph
Spectrograph instance built from PYP_SPEC
- Type: Spectrograph
- _ivar
Inverse variance of the datacube, computed on demand from sig (see ivar)
- Type: numpy.ndarray
- _wcs
World Coordinate System of the datacube, built from the header
- Type: astropy.wcs.WCS
- _bundle()[source]
Over-write default _bundle() method to separate the DetectorContainer into its own HDU
- Returns:
A list of dictionaries, each list element is written to its own fits extension. See the description above.
- Return type:
list
- datamodel = {'PYP_SPEC': {'descr': 'PypeIt: Spectrograph name', 'otype': <class 'str'>}, 'blaze_spec': {'atype': <class 'numpy.floating'>, 'descr': 'The spectral blaze function', 'otype': <class 'numpy.ndarray'>}, 'blaze_wave': {'atype': <class 'numpy.floating'>, 'descr': 'Wavelength array of the spectral blaze function', 'otype': <class 'numpy.ndarray'>}, 'bpm': {'atype': <class 'numpy.uint8'>, 'descr': 'Bad pixel mask of the datacube (0=good, 1=bad)', 'otype': <class 'numpy.ndarray'>}, 'flux': {'atype': <class 'numpy.floating'>, 'descr': 'Flux datacube in units of counts/s/Ang/arcsec^2 or 10^-17 erg/s/cm^2/Ang/arcsec^2', 'otype': <class 'numpy.ndarray'>}, 'fluxed': {'descr': 'Boolean indicating if the datacube is fluxed.', 'otype': <class 'bool'>}, 'sensfunc': {'atype': <class 'numpy.floating'>, 'descr': 'Sensitivity function 10^-17 erg/(counts/cm^2)', 'otype': <class 'numpy.ndarray'>}, 'sig': {'atype': <class 'numpy.floating'>, 'descr': 'Error datacube (matches units of flux)', 'otype': <class 'numpy.ndarray'>}, 'wave': {'atype': <class 'numpy.floating'>, 'descr': 'Wavelength of each slice in the spectral direction. The units are Angstroms.', 'otype': <class 'numpy.ndarray'>}}
Provides the class data model. This is undefined in the abstract class and should be overwritten in the derived classes.
The format of the datamodel needed for each implementation of a DataContainer derived class is as follows.
The datamodel itself is a class attribute (i.e., it is a member of the class, not just of an instance of the class). The datamodel is a dictionary of dictionaries: each key of the datamodel dictionary provides the name of a given datamodel element, and the associated item (dictionary) for the datamodel element provides the type and description information for that datamodel element. For each datamodel element, the dictionary item must provide:
otype: This is the type of the object for this datamodel item. E.g., for a float or a numpy.ndarray, you would set otype=float and otype=np.ndarray, respectively.
descr: This provides a text description of the datamodel element. This is used to construct the datamodel tables in the pypeit documentation.
If the object type is a numpy.ndarray, you should also provide the atype keyword that sets the type of the data contained within the array. E.g., for a floating point array containing an image, your datamodel could be simply:
datamodel = {'image' : dict(otype=np.ndarray, atype=float, descr='My image')}
More advanced examples are given in the top-level module documentation.
Currently, datamodel components are restricted to have otype that are tuple, int, float, numpy.integer, numpy.floating, numpy.ndarray, or astropy.table.Table objects. E.g., datamodel values for otype cannot be dict.
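As an illustration of these rules, here is a minimal sketch of a hypothetical DataContainer-derived class (MyCube is not part of PypeIt; the element names are invented for the example)::

    import numpy as np
    from pypeit.datamodel import DataContainer

    class MyCube(DataContainer):
        """Hypothetical example class; not part of PypeIt."""
        version = '1.0.0'
        datamodel = {'image': dict(otype=np.ndarray, atype=np.floating,
                                   descr='My image'),
                     'exptime': dict(otype=float, descr='Exposure time in seconds')}

        def __init__(self, image=None, exptime=None):
            # Pass the instantiation arguments to the base class, which
            # validates them against the datamodel defined above.
            super().__init__(d=dict(image=image, exptime=exptime))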
- extract_spec(parset, outname=None, boxcar_radius=None, overwrite=False)[source]
Extract a spectrum from the datacube
- classmethod from_file(ifile, verbose=True, chk_version=True, **kwargs)[source]
Instantiate the object from an extension in the specified fits file.
Over-load from_file() to deal with the header.
- internals = ['head0', 'filename', 'spectrograph', 'spect_meta', '_ivar', '_wcs']
A list of strings identifying a set of internal attributes that are not part of the datamodel.
- property ivar
Utility function to compute the inverse variance cube
- Returns:
self._ivar – The inverse variance of the datacube. Note that self._ivar should not be accessed directly, and you should only call self.ivar
- Return type:
numpy.ndarray
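A hedged usage sketch for reading a spec3d file and accessing the cube and its inverse variance; the file name is a placeholder::

    from pypeit.coadd3d import DataCube

    cube = DataCube.from_file('spec3d_example.fits')   # assumed filename
    print(cube.flux.shape, cube.sig.shape, cube.wave.shape)

    # The inverse variance is computed on demand; access it through the
    # ivar property rather than the internal _ivar attribute.
    ivar = cube.ivar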
- to_file(ofile, primary_hdr=None, hdr=None, **kwargs)[source]
Over-load to_file() to deal with the header.
- Parameters:
ofile (str) – Filename
primary_hdr (astropy.io.fits.Header, optional) – Base primary header. Updated with new subheader keywords. If None, initialized using initialize_header().
wcs (astropy.io.fits.Header, optional) – The World Coordinate System, represented by a fits header
kwargs (dict) – Keywords passed directly to the parent to_file function.
- version = '1.2.0'
Provides the string representation of the class version.
This currently sees minimal use, but will be used for I/O verification in the future.
Each derived class should provide a version to guard against data model changes during development.
- class pypeit.coadd3d.SlicerIFUCoAdd3D(spec2dfiles, par, skysub_frame=None, sensfile=None, scale_corr=None, grating_corr=None, ra_offsets=None, dec_offsets=None, spectrograph=None, det=1, overwrite=False, show=False, debug=False)[source]
Bases: CoAdd3D
Child of CoAdd3D for SlicerIFU data reduction. For documentation, see the CoAdd3D parent class above.
This child class performs the series of steps of IFU datacube creation that are specific to slicer-based IFUs, including the following:
Data preparation:
Loads individual spec2d files
If requested, subtract the sky (either from a dedicated sky frame, or use the sky model stored in the science spec2d file)
The sky regions near the spectral edges of the slits are masked
Apply a relative spectral illumination correction (scalecorr) that registers all input frames to a common spectral illumination scale.
Generate a WCS of each individual frame, and calculate the RA and DEC of each individual detector pixel
Calculate the astrometric correction that is needed to align spatial positions along the slices
Compute the differential atmospheric refraction correction
Apply the extinction correction
Apply a grating correction (gratcorr) - This corrects for the relative spectral efficiency of combining data taken with multiple different grating angles
Flux calibrate
Data cube generation:
If frames are not being combined, individual data cubes are generated and saved as a DataCube object. A white light image is also produced, if requested
- If frames are being aligned and/or combined, the following steps are followed:
The output voxel sampling is computed (this must be consistent for all frames)
Frames are aligned (either by user-specified offsets, or by cross-correlating white light images)
The relative weight of each detector pixel is computed
If frames are not being combined, individual DataCube’s will be generated for each frame
If frames are being combined, a single DataCube will be generated.
White light images are also produced, if requested.
- compute_weights()[source]
Compute the relative weights to apply to pixels that are collected into the voxels of the output DataCubes
- Returns:
The individual pixel weights for each detector pixel, and every frame.
- Return type:
- get_alignments(spec2DObj, slits, spat_flexure=None)[source]
Generate and return the spline interpolation fitting functions to be used for the alignment frames, as part of the astrometric correction.
- Parameters:
spec2DObj (Spec2DObj) – 2D PypeIt spectra object.
slits (SlitTraceSet) – Class containing information about the slits.
spat_flexure (float, optional) – Spatial flexure in pixels.
- Returns:
alignSplines – Alignment splines used for the astrometric correction
- Return type:
AlignmentSplines
- load()[source]
This is the main function that loads in the data, and performs several frame-specific corrections. If the user does not wish to align or combine the individual datacubes, then this routine will also produce a spec3d file, which is a DataCube representation of a PypeIt spec2d frame for SlicerIFU data.
This function should be called in the __init__ method, and initialises multiple variables. The variables initialised by this function include:
self.ifu_ra - The RA of the IFU pointing
self.ifu_dec - The Dec of the IFU pointing
self.mnmx_wv - The minimum and maximum wavelengths of every slit and frame.
self._spatscale - The native spatial scales of all spec2d frames.
self._specscale - The native spectral scales of all spec2d frames.
self.weights - Weights to use when combining cubes
self.flat_splines - Spline representations of the blaze function (based on the illumflat).
self.blaze_spline - Spline representation of the reference blaze function
self.blaze_wave - Wavelength array used to construct the reference blaze function
self.blaze_spec - Spectrum used to construct the reference blaze function
As well as the primary arrays that store the pixel information for multiple spec2d frames, including:
self.all_sci
self.all_ivar
self.all_wave
self.all_slitid
self.all_wghts
self.all_tilts
self.all_slits
self.all_align
self.all_wcs
self.all_ra
self.all_dec
self.all_dar
- run()[source]
This is the main routine called to convert PypeIt spec2d files into PypeIt DataCube objects. It is specific to the SlicerIFU data.
First the data are loaded and several corrections are made. These include:
A sky frame or model is subtracted from the science data, and the relative spectral illumination of different slices is corrected.
A mask of good pixels is identified
A common spaxel scale is determined, and the astrometric correction is derived
An RA and Dec image is created for each pixel.
Based on atmospheric conditions, a differential atmospheric refraction correction is applied.
Extinction correction
Flux calibration (optional - this calibration is only applied if a standard star cube is supplied)
If the input frames will not be combined (combine=False) and they won't be aligned (align=False), then each individual spec2d file is converted into a spec3d file (i.e. a PypeIt DataCube object). These fits files can be loaded/viewed in other software packages to display or combine multiple datacubes into a single datacube. However, note that different software packages use combination algorithms that may not conserve flux, or may produce covariance between adjacent voxels.
If the user wishes to either spatially align multiple exposures (align=True) or combine multiple exposures (combine=True), then the next set of operations include:
Generate white light images of each individual cube (according to a user-specified wavelength range)
Align multiple frames if align=True (either manually by user input, or automatically by cross-correlation)
Create the output WCS, and apply the flux calibration to the data
Generate individual datacubes (combine=False) or one master datacube containing all exposures (combine=True). Note, there are several algorithms used to combine multiple frames. Refer to the subpixellate() routine for more details about the combination options.
- run_align()[source]
This routine aligns multiple cubes by using manual input offsets or by cross-correlating white light images.
- Returns:
A new set of RA values that have been aligned (numpy.ndarray), and a new set of Dec values that have been aligned (numpy.ndarray).
- Return type:
tuple