Coadd 3D Spectra
Overview
This document describes how to combine a set of fully reduced 2D spectra from multiple exposures into a single 3D datacube for IFU spectrographs.
This must be done outside of the data reduction pipeline (run_pypeit); i.e., PypeIt will not coadd your spectra as part of the data reduction process.
pypeit_coadd_datacube
The primary script is called pypeit_coadd_datacube, which takes an input file to guide the process.
usage
The script usage can be displayed by calling the script with the -h option:
$ pypeit_coadd_datacube -h
usage: pypeit_coadd_datacube [-h] [--det DET] [-o] [-v VERBOSITY] file
Read in an array of spec2D files and convert them into a datacube
positional arguments:
file filename.coadd3d file
options:
-h, --help show this help message and exit
--det DET Detector (default: 1)
-o, --overwrite Overwrite any existing files/directories (default:
False)
-v VERBOSITY, --verbosity VERBOSITY
Verbosity level between 0 [none] and 2 [all]. Default:
1. Level 2 writes a log with filename
coadd_datacube_YYYYMMDD-HHMM.log (default: 1)
coadd3d file
The pypeit_coadd_datacube script requires an input file to guide the process. The format of this input file includes a Parameter Block (required) and a Data Block (required). In the following example for keck_kcwi, the coadd3d file will be saved as BB1245p4238.coadd3d:
# User-defined execution parameters
[rdx]
spectrograph = keck_kcwi
detnum = 1
[reduce]
[[cube]]
combine = True
align = True
output_filename = BB1245p4238_datacube.fits
save_whitelight = True
# Read in the data
spec2d read
filename | scale_corr
Science/spec2d_scienceframe_01.fits | Science/spec2d_scalecorr.fits
Science/spec2d_scienceframe_02.fits | Science/spec2d_scalecorr.fits
spec2d end
The opening block sets parameters for the reduction steps. Note that, by default, pypeit_coadd_datacube will convert each spec2d file into its own spec3d file (i.e. individual datacubes for each exposure). If you want to combine all exposures into a single datacube, you need to set combine = True, as in the above example, and provide an output_filename. This is very useful if, for example, you want to combine several standard star exposures into a single datacube for flux calibration.
The spec2d block provides a list of Spec2D Output files. You can also specify several optional corrections as part of the spec2d block, including:
- scale_corr: A relative scale correction file that is used to correct the relative spectral sensitivity of the datacube, ensuring that the relative spectral sensitivity is constant across the field of view. The spec2d file used for the scale_corr column should be either a twilight or dome flat reduced as a science frame (see KECK KCWI for a description of what you need to do). In order to use this functionality, you should not reduce your science data with a spectral illumination correction. In other words, in your PypeIt Reduction File, set the following when you execute run_pypeit:

[scienceframe]
[[process]]
use_specillum = False

- grating_corr: A grating correction file that is used to correct the relative grating sensitivity of individual spec2d files. It is unlikely that you will require this correction. It is only required if you are combining spec2d files that have very slightly different wavelength solutions and you only have a sensitivity function for one of these setups. Otherwise, if you have a sensitivity function for each setup, you should use the sensfile option to specify the sensitivity function for each wavelength setup. For further details, see Grating correction.
- skysub_frame: A sky subtraction frame that is used to remove the sky background of the datacube. For further details, see Sky Subtraction.
- ra_offset: A right ascension offset that is used to correct the pointing of the datacube. For further details, see Spatial alignment with different setups.
- dec_offset: A declination offset that is used to correct the pointing of the datacube. For further details, see Spatial alignment with different setups.
- sensfile: A sensitivity function file that is used to correct the absolute sensitivity of the datacube. The required input file is the sensitivity function, which is generated with the pypeit_sensfunc script. For further details, see Flux calibration.
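Several of these columns can be combined in a single spec2d block. For example, the following sketch (with placeholder filenames and values) applies a relative scale correction and manual pointing offsets to each frame:

spec2d read
filename | scale_corr | ra_offset | dec_offset
Science/spec2d_scienceframe_01.fits | Science/spec2d_scalecorr.fits | 0.0 | 0.0
Science/spec2d_scienceframe_02.fits | Science/spec2d_scalecorr.fits | 1.0 | -2.0
spec2d end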
run
Then run the script:
pypeit_coadd_datacube BB1245p4238.coadd3d -o
There are several recommended steps of the coadd3d process that can be run separately. These are:
- Step 1 - Create a datacube of your standard star exposures. It is worth noting that the standard star exposures should be reduced with the same setup as the science exposures. The datacube is then used to flux calibrate the science exposures. The datacube is created by running the following command:
pypeit_coadd_datacube StandardStarName.coadd3d -o
- Step 2 - Extract the 1D spectra from the datacube. This is done by running the following command, assuming that the output datacube from the previous step was called StandardStarName.fits. The pypeit_extract_datacube script will produce an output file called spec1d_StandardStarName.fits:

pypeit_extract_datacube StandardStarName.fits -o
This script is only designed for point sources. Both a boxcar and an optimal extraction are calculated. The boxcar extraction uses a circular aperture with a radius set by the boxcar_radius parameter. The optimal extraction uses the white light image as the object profile. Also note that the extraction is performed on the sky-subtracted datacube. Therefore, the spec1d file contains the 1D spectra, including BOX_COUNTS_SKY and OPT_COUNTS_SKY columns. These columns do not contain the sky counts, but rather the residual level of the sky aperture. This is useful for checking the quality of the sky subtraction.
- Step 3 - Generate a sensitivity function from the 1D spectra. This is done by running the following command, assuming that the output 1D spectra from the previous step were saved as spec1d_StandardStarName.fits. The pypeit_sensfunc script will produce an output file called sens_StandardStarName.fits:

pypeit_sensfunc spec1d_StandardStarName.fits -o sens_StandardStarName.fits
For further details, see Creating a PypeIt Sensitivity Function.
- Step 4 - Generate a datacube of the science exposures. This is done by running the following command:
pypeit_coadd_datacube ScienceName.coadd3d -o
Note that you will need to specify the sensitivity function file using the sensfile option in the coadd3d file, as in the sketch below. For further details, see Flux calibration.
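A minimal ScienceName.coadd3d for this step might look like the following sketch; the filenames are placeholders, and the sensfile column assumes that the sensitivity function generated in Step 3 applies to each frame:

# User-defined execution parameters
[rdx]
spectrograph = keck_kcwi
detnum = 1
[reduce]
[[cube]]
combine = True
output_filename = ScienceName_datacube.fits
# Read in the data
spec2d read
filename | sensfile
Science/spec2d_scienceframe_01.fits | sens_StandardStarName.fits
Science/spec2d_scienceframe_02.fits | sens_StandardStarName.fits
spec2d end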
Combination options
PypeIt currently supports two different methods to convert a spec2d frame into a datacube; these options are called subpixel (the default) and NGP (short for nearest grid point), and can be set using the following keyword arguments:
[reduce]
[[cube]]
method = ngp
The default option is called subpixel, which divides each pixel in the spec2d frame into many subpixels and assigns each subpixel to a voxel of the datacube. Flux is conserved, but voxels are correlated, and the error spectrum does not account for covariance between adjacent voxels. The subpixellation scale can be set separately in the spatial and spectral directions on the 2D detector. If you would like to change the subpixellation factors from the default values (5), you can optionally set (one or all of) the spec_subpixel, spat_subpixel, and slice_subpixel parameters as follows:
[reduce]
[[cube]]
method = subpixel
spec_subpixel = 3
spat_subpixel = 7
slice_subpixel = 10
The total number of subpixels generated for each detector pixel on the spec2d frame is spec_subpixel x spat_subpixel x slice_subpixel. The default values (5) divide each spec2d pixel into 5x5x5=125 subpixels during datacube creation. spec_subpixel is the number of subpixels in the spectral direction (i.e. predominantly detector columns), spat_subpixel is the number of subpixels in the spatial direction (i.e. the long axis of each slice; predominantly along detector rows), and slice_subpixel is the number of times to divide each of the slices (i.e. the short axis of each slice). Note that all three of these subpixel definitions are mutually perpendicular in the datacube.
While the spec_subpixel and spat_subpixel options are somewhat intuitive (i.e. the code is dividing detector columns and rows into smaller subpixels), the slice_subpixel option may not be immediately obvious, so consider the following example. Imagine the long edge of the slice aligned East-West. The short edge of a single slice will span a Dec difference of 0.35 arcseconds in the case of the Keck/KCWI Small slicer. slice_subpixel effectively divides this slice width further. If slice_subpixel=7, this creates seven subslices, each of width 0.05 arcseconds. The importance of this becomes particularly noticeable when combined with the differential atmospheric refraction (DAR) correction. Consider two wavelengths that have a relative DAR of 0.15 arcseconds. In this case, choosing a value of slice_subpixel=1 would shift the relative spatial positions by 0.35 arcseconds (i.e. the difference between adjacent slices) compared to the true difference of 0.15 arcseconds. slice_subpixel divides the flux of the slice into evenly spaced rectangular bins and places each of these into a voxel of the final datacube. In the case of a 0.15 arcsecond shift, this means that \(3/7\) of the flux ends up in one output slice and the remaining \(4/7\) of the flux ends up in the adjacent slice.
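In other words, the fraction of flux moved into the adjacent output slice is simply the ratio of the DAR shift to the slice width: \(0.15''/0.35'' = 3/7\), with the remaining \(4/7\) left in place.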
As an alternative to the subpixel method, you can convert the spec2d frames into a datacube with the NGP method. This algorithm is effectively a 3D histogram. This approach is faster than subpixel, flux is conserved, and voxels are not correlated. However, this option suffers from the same downsides as any histogram: the choice of bin sizes can change how the datacube appears. This algorithm takes each pixel on the spec2d frame and puts the flux of that pixel into one voxel in the datacube. Depending on the binning used, some voxels may be empty (zero flux), while a neighbouring voxel might contain the flux from two spec2d pixels.
Flux calibration
If you would like to flux calibrate your datacube, you need to produce your standard star datacube first. Then extract the spectrum of the standard star using the pypeit_extract_datacube script. This will produce a spec1d file that you will need to use to generate a sensitivity function in the usual way (see Creating a PypeIt Sensitivity Function). Then, when generating the datacube of the science frame, you must include the name of the sensitivity function in your coadd3d file as follows:
[reduce]
[[cube]]
sensfunc = my_sensfunc.fits
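To summarize, the full flux-calibration workflow chains together the commands from the run section above (with the same illustrative filenames):

pypeit_coadd_datacube StandardStarName.coadd3d -o
pypeit_extract_datacube StandardStarName.fits -o
pypeit_sensfunc spec1d_StandardStarName.fits -o sens_StandardStarName.fits
pypeit_coadd_datacube ScienceName.coadd3d -o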
Also, an important note for users that wish to combine multiple standard star exposures into a single datacube: PypeIt currently performs an extinction correction when generating the sensitivity function; this is perfectly fine for single exposures. However, if you are combining multiple standard star exposures into a single datacube, you should note that the extinction correction will be applied at the airmass of the first standard star exposure listed in the coadd3d file. This is because the extinction correction is currently not applied to each individual frame in the datacube. This control flow will be changed in a future release of PypeIt, but for now, to stay consistent with the current pipeline, the extinction correction is done in the sensitivity function algorithms, with the caveat that the standard star exposures are assumed to have similar airmasses. If you have standard star exposures with significantly different airmasses, then you should use just one of these exposures to generate the sensitivity function.
Sky Subtraction
The default behaviour of PypeIt is to subtract the model sky that is derived from the science frame during the reduction. If you would like to turn off sky subtraction, set the following keyword argument (all lowercase):
[reduce]
[[cube]]
skysub_frame = none
If you would like to use a dedicated sky frame for sky subtraction that is separate from the science frame, then you need to provide the relative path+file of the spec2d file that you would like to use. If you need a different sky frame for different science frames, then you can specify the skysub_frame in the spec2d block of the .coadd3d file, similar to the way scale_corr is set in the example above (see the sketch after the next code block). If you have dedicated sky frames, then it is generally recommended to reduce these frames as if they were regular science frames, but add the following keyword arguments at the top of your coadd3d file:
[reduce]
[[skysub]]
user_regions = 5:95
[flexure]
spec_method = slitcen
This ensures that the innermost 90 percent of pixels in each slit are used to generate a model of the sky.
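For example, a sketch of a spec2d block with a per-frame sky frame (the filenames are placeholders):

spec2d read
filename | skysub_frame
Science/spec2d_scienceframe_01.fits | Science/spec2d_skyframe.fits
Science/spec2d_scienceframe_02.fits | Science/spec2d_skyframe.fits
spec2d end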
Grating correction
The grating correction is needed if any of the data are recorded with even a very slightly different setup (e.g. data taken on two different nights with the same intended wavelength coverage, but with a slightly different grating angle on each night). You can avoid this correction if you generate a sensitivity function for each of the setups. However, if you have not done this, then the grating correction is needed. This correction requires that you have taken calibrations (i.e. flatfields) with the two different setups, and uses the relative sensitivity of the two flatfields to estimate the sensitivity correction. By default, the grating correction will not be applied. If you want to apply it, you will need to specify the relative path+file of the Flat calibration file for each spec2d file. You will need to specify a grating_corr file for each science frame, in the spec2d block of the .coadd3d file:
# Read in the data
spec2d read
filename | grating_corr
Science/spec2d_scienceframe_01.fits | Calibrations/Flat_A_0_DET01.fits
Science/spec2d_scienceframe_02.fits | Calibrations/Flat_B_1_DET01.fits
spec2d end
If all spec2d files were reduced with the same Flat calibration file, then you do not need to specify the grating correction file. Likewise, if you generate a sensitivity function for each spec2d file, you do not need to specify it. The grating correction file is only needed if you have one sensitivity function for all spec2d files, even though the spec2d files were acquired with different grating angles.
Astrometric correction
If you would like to perform an astrometric correction, you need to install scikit-image (version > 0.17; see Install via pip, or simply install scikit-image with pip directly). The default is to perform the astrometric correction if an Alignment frame has been computed. To disable the astrometric correction, set the following keyword argument in your coadd3d file:
[reduce]
[[cube]]
astrometric = False
If an Alignment frame is not available, then the astrometric correction will be based on the slit edges.
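For reference, scikit-image can be installed directly with pip:

pip install "scikit-image>0.17"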
White light image
A white light image can be generated for the combined frame, or for each individual frame if combine=False, by setting the save_whitelight keyword argument. You can set the wavelength range of the white light image by setting the whitelight_range keyword argument:
[reduce]
[[cube]]
save_whitelight = True
whitelight_range = 5000,6000
White light images are not produced by default. The output filenames of the white light images are given the suffix _whitelight.fits.
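As a quick check, the white light image can be displayed with astropy and matplotlib. This is a minimal sketch: the filename is illustrative (the output_filename from the earlier example plus the documented suffix), and it assumes the image is stored in the first HDU that contains data:

from matplotlib import pyplot as plt
from astropy.io import fits

# Illustrative filename: output_filename with the _whitelight.fits suffix
img = fits.getdata("BB1245p4238_datacube_whitelight.fits")  # first HDU containing data
plt.imshow(img, origin='lower', cmap=plt.cm.viridis)
plt.show()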
Spatial alignment with different setups
If you have multiple setups that you want to align so that all pixels are spatially coincident, you must first produce the datacube that you wish to use as a reference. Then, define the WCS parameters using the keyword arguments in your coadd3d file:
[reduce]
[[cube]]
reference_image = reference_cube_whitelight.fits
ra_min = 191.398441
ra_max = 191.401419
dec_min = 42.634352
dec_max = 42.639988
spatial_delta = 0.339462
where these values are printed as terminal output after reference_cube.fits is generated.
Note that PypeIt is not currently set up to stitch together cubes covering different wavelength ranges, but it can coadd multiple spec2d files into a single datacube if the wavelength setups overlap and the spatial positions are very similar.
Combining multiple datacubes
PypeIt is able to combine standard star frames for flux calibration, and should not have any difficulty with this. If your science observations are designed so that there is very little overlap between exposures, you should not assume that the automatic combination algorithm will perform well. Instead, you may prefer to output individual datacubes and manually combine the cubes with some other purpose-built software. However, if you know the relative offsets of each frame very well, then you can specify these and PypeIt will combine all frames into a single datacube; this is the recommended approach. In the following example, the first cube is assumed to be the reference cube (0.0 offset in both RA and Dec), and the second science frame is offset relative to the first by:
\(\Delta \textrm{RA} \times \cos(\textrm{Dec}) = 1.0''\) W
\(\Delta \textrm{Dec} = 2.0''\) N
The offset convention used in PypeIt is that positive offsets translate the RA and Dec of a frame to higher RA (i.e. more East) and higher Dec (i.e. more North). In the above example, frame 2 is 1" to the West of frame 1, meaning that we need to move frame 2 by 1" to the East (i.e. a correction of +1"). Similarly, we need to move frame 2 by 2" South (i.e. a correction of -2"). Therefore, in the above example, the coadd3d file would look like the following:
# User-defined execution parameters
[rdx]
spectrograph = keck_kcwi
detnum = 1
[reduce]
[[cube]]
combine = True
output_filename = BB1245p4238_datacube.fits
align = True
# Read in the data
spec2d read
filename | ra_offset | dec_offset
Science/spec2d_scienceframe_01.fits | 0.0 | 0.0
Science/spec2d_scienceframe_02.fits | 1.0 | -2.0
spec2d end
Current Coadd3D Data Model
The output is a single fits file that contains a datacube, and a cube with the same shape that stores the variance. The units are stored in the FLUXUNIT header keyword.
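For example, the flux units can be inspected with astropy; a minimal sketch, assuming the FLUXUNIT keyword is accessible in the primary header (depending on the file layout, it may instead live in an extension header):

from astropy.io import fits

# Read the primary header and report the flux units
hdr = fits.getheader("datacube.fits")
print(hdr.get('FLUXUNIT', 'FLUXUNIT not found in the primary header'))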
Here is a short python script that will allow you to read in and plot a wavelength slice of the cube:
from matplotlib import pyplot as plt
from astropy.visualization import ZScaleInterval, ImageNormalize
from pypeit.coadd3d import DataCube
filename = "datacube.fits"
cube = DataCube.from_file(filename)
flux_cube = cube.flux # Flux datacube
error_cube = cube.sig # Errors associated with each voxel of the flux datacube
ivar_cube = cube.ivar # Inverse variance cube
wcs = cube.wcs
wave_slice = 1000
norm = ImageNormalize(flux_cube[wave_slice,:,:], interval=ZScaleInterval())
fig = plt.figure()
fig.add_subplot(111, projection=wcs, slices=('x', 'y', wave_slice))
plt.imshow(flux_cube[wave_slice,:,:], origin='lower', cmap=plt.cm.viridis, norm=norm)
plt.xlabel('RA')
plt.ylabel('Dec')
plt.show()